Attacks and Defenses on Autonomous Vehicles: From Sensor Perception to Control Area Networks
Keywords: Adversarial Machine Learning
Publisher: The University of Arizona
Rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract: Autonomous driving has been a focus of both industry and academia. The autonomous-vehicle decision-making pipeline typically comprises several modules, from perceiving the physical world to interacting with it. The perception module aims to understand the surrounding environment: obstacles, lanes, traffic signals, etc. This high-level information is passed to the planning module, which generates decisions such as accelerating or turning right. These decisions also draw on a prediction module that estimates the future trajectories of obstacles as a precaution. The control module translates the decisions into low-level instructions, transmitted on the controller area network (CAN) bus in the form of CAN messages. Finally, the actuation module executes these instructions. Since autonomous vehicles normally operate at high speed and usually carry humans, their safety and security are of great importance. Recently, a number of papers have raised security concerns with various attacks. Because the perception module relies on deep learning models for object detection, traffic sign recognition, and similar tasks, it is inherently subject to adversarial examples that are specially crafted to deceive neural networks. More severely, physically realizable adversarial examples/patches, which can be mounted externally using printed stickers or projectors, bypass the digital protections of autonomous systems and are therefore more practical, more stealthy, and more difficult to defend against. On the other hand, due to its lack of secure authentication, the CAN protocol has been demonstrated to be susceptible to ECU (electronic control unit) impersonation attacks, in which a compromised ECU broadcasts forged CAN messages to trigger dangerous actuation, such as deploying airbags on a highway under otherwise normal driving conditions. In this dissertation, we study the security of autonomous vehicles from sensor perception to the CAN bus.
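The ECU impersonation risk follows from the frame format itself: a classic CAN frame carries an arbitration ID, a data length, and up to eight data bytes, with no sender identity or cryptographic authentication field. A minimal sketch in Python, packing a frame in the Linux SocketCAN `struct can_frame` layout (the arbitration ID `0x050` and payload are purely illustrative, not a real airbag command):

```python
import struct

def build_can_frame(arbitration_id: int, data: bytes) -> bytes:
    """Pack a classic CAN frame in the Linux SocketCAN struct can_frame
    layout: 32-bit arbitration ID, 8-bit DLC, 3 padding bytes, 8 data bytes.
    Note what is absent: no sender identity, no MAC, no signature --
    any node on the bus can emit any arbitration ID it chooses."""
    assert len(data) <= 8, "classic CAN payload is at most 8 bytes"
    return struct.pack("<IB3x8s", arbitration_id, len(data),
                       data.ljust(8, b"\x00"))

# Hypothetical forged message from a compromised ECU
# (0x050 is an illustrative arbitration ID, not a real one).
forged = build_can_frame(0x050, b"\x01")
print(len(forged))  # prints 16: a complete 16-byte frame
```

The point of the sketch is the missing field, not the packing: because nothing in the frame binds it to its sender, receiving ECUs cannot distinguish this forged frame from a legitimate one.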
We propose attacks that nullify a broad category of state-of-the-art (SOTA) defenses, and we develop our own defenses that generalize across different attack methodologies. In particular, for the perception module we develop a visible-light-based, system-aware camera attack, termed GhostImage, that can be mounted physically and remotely (Chapter 3). We exploit the ghost effect of the camera system to convey adversarial noise that is not norm-bounded, thereby bypassing SOTA adversarial-example defenses. To detect perception attacks, we adopt the idea of spatio-temporal consistency, demonstrated with two methods: one model-based (Chapter 4), for detecting ghost-based camera attacks, and one data-driven (Chapter 5), which detects object-misclassification attacks effectively and efficiently while remaining agnostic to the attack methodology and to the underlying object detection and tracking system. In Chapter 6, we investigate the planning module and enhance the adversarial robustness of obstacle trajectory prediction. Finally, in Chapter 7, to evaluate the control and actuation modules, we propose a hill-climbing-style attack that defeats SOTA CAN bus intrusion detection systems that operate over multiple CAN frames.
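The spatio-temporal consistency idea can be illustrated with a toy check (a sketch of the intuition, not the dissertation's actual algorithm): if a tracked object's bounding box stays spatially continuous across consecutive frames but its class label flips, a misclassification attack is the likely cause. The helper names and the IoU threshold below are hypothetical:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def temporal_consistency_alarm(track, min_iou=0.5):
    """track: list of (box, label) detections of one object over frames.
    Raise an alarm when the box is spatially continuous (IoU above a
    hypothetical threshold) yet the label changes between frames --
    a spatio-temporal inconsistency."""
    for (box0, lab0), (box1, lab1) in zip(track, track[1:]):
        if iou(box0, box1) >= min_iou and lab0 != lab1:
            return True
    return False

# A stop sign that suddenly becomes a speed-limit sign while barely
# moving is flagged; a stable track is not.
print(temporal_consistency_alarm(
    [((0, 0, 10, 10), "stop sign"), ((1, 0, 11, 10), "speed limit")]))  # True
```

A real system would additionally need data association across frames and tolerance for genuine label switches at long range, which is where the data-driven method in Chapter 5 goes beyond this sketch.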
Degree Program: Graduate College
Electrical & Computer Engineering