GhostImage: Remote perception attacks against camera-based image classification systems
dc.contributor.author | Man, Y. | |
dc.contributor.author | Li, M. | |
dc.contributor.author | Gerdes, R. | |
dc.date.accessioned | 2021-07-16T01:37:44Z | |
dc.date.available | 2021-07-16T01:37:44Z | |
dc.date.issued | 2020 | |
dc.identifier.citation | Man, Y., Li, M., & Gerdes, R. (2020). GhostImage: Remote Perception Attacks against Camera-based Image Classification Systems. In 23rd International Symposium on Research in Attacks, Intrusions and Defenses (RAID 2020) (pp. 317–332). | |
dc.identifier.isbn | 9781940000000 | |
dc.identifier.uri | http://hdl.handle.net/10150/660577 | |
dc.description.abstract | In vision-based object classification systems, imaging sensors perceive the environment and objects are then detected and classified for decision-making purposes, e.g., to maneuver an automated vehicle around an obstacle or to raise an alarm indicating the presence of an intruder in a surveillance setting. In this work we demonstrate how the perception domain can be remotely and unobtrusively exploited to enable an attacker to create spurious objects or alter an existing object. An automated system relying on a detection/classification framework subject to our attack could be made to undertake actions with catastrophic results due to attacker-induced misperception. We focus on camera-based systems and show that it is possible to remotely project adversarial patterns into camera systems by exploiting two common effects in optical imaging systems, viz., lens flare/ghost effects and auto-exposure control. To improve the robustness of the attack to channel effects, we generate optimal patterns by integrating adversarial machine learning techniques with a trained end-to-end channel model. We experimentally demonstrate our attacks using a low-cost projector, on three different image datasets, in indoor and outdoor environments, and with three different cameras. Experimental results show that, depending on the projector-camera distance, attack success rates can reach as high as 100%, including under targeted conditions. | |
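The abstract's central technique, optimizing adversarial patterns by differentiating through a trained end-to-end channel model, can be illustrated with a short sketch. The code below is a hedged reconstruction, not the authors' implementation: the toy channel model (ToyChannel), the placeholder classifier, and all function names and parameters are assumptions standing in for the paper's learned projector-to-camera channel and real perception pipeline.

```python
# Illustrative sketch only (not the paper's code): targeted adversarial-pattern
# optimization through a differentiable channel model, in the spirit of
# "adversarial machine learning + trained end-to-end channel model".
import torch
import torch.nn as nn

class ToyChannel(nn.Module):
    """Stand-in (assumption) for the trained projector-to-camera channel.
    Applies a per-pixel color distortion to the projected pattern and blends
    it into the scene, loosely mimicking ghost/flare overlay effects."""
    def __init__(self):
        super().__init__()
        self.color = nn.Conv2d(3, 3, kernel_size=1)  # learned color mixing

    def forward(self, scene, pattern, alpha=0.4):
        perceived_pattern = torch.sigmoid(self.color(pattern))
        return torch.clamp(scene + alpha * perceived_pattern, 0.0, 1.0)

def optimize_pattern(classifier, channel, scene, target, steps=200, lr=0.05):
    """Search for a projectable pattern that the camera-side classifier labels
    as `target`, backpropagating through the channel model so the pattern
    stays effective despite channel distortions."""
    pattern = torch.zeros_like(scene, requires_grad=True)
    opt = torch.optim.Adam([pattern], lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        perceived = channel(scene, pattern)
        loss = loss_fn(classifier(perceived), target)
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            pattern.clamp_(0.0, 1.0)  # keep intensities physically projectable
    return pattern.detach()

# Toy usage (all components are placeholders, not the paper's setup):
classifier = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
channel = ToyChannel()
scene = torch.rand(1, 3, 32, 32)   # benign camera image
target = torch.tensor([3])         # attacker-chosen class
adv_pattern = optimize_pattern(classifier, channel, scene, target)
```

Design note: differentiating through the channel model is what distinguishes this from a plain digital-domain adversarial attack; the optimizer only sees the pattern as it would appear after projection and capture, which is why the resulting pattern is robust to channel effects.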
dc.language.iso | en | |
dc.publisher | USENIX Association | |
dc.rights | Copyright © The Author(s). | |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.title | GhostImage: Remote perception attacks against camera-based image classification systems | |
dc.type | Proceedings | |
dc.type | text | |
dc.contributor.department | University of Arizona | |
dc.identifier.journal | RAID 2020 Proceedings - 23rd International Symposium on Research in Attacks, Intrusions and Defenses | |
dc.description.note | Immediate access | |
dc.description.collectioninformation | This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu. | |
dc.eprint.version | Final published version | |
dc.source.journaltitle | RAID 2020 Proceedings - 23rd International Symposium on Research in Attacks, Intrusions and Defenses | |
refterms.dateFOA | 2021-07-16T01:37:44Z |