Attacks and Defenses on Autonomous Vehicles: From Sensor Perception to Control Area Networks
Author
Man, Yanmao
Issue Date
2022
Keywords
Adversarial Machine Learning
Autonomous Driving
Computer Security
Cyber-Physical System
Perception
Advisor
Li, Ming
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Autonomous driving has been a focus of both industry and academia. The decision-making pipeline of an autonomous vehicle typically comprises several modules, from perceiving the physical world to interacting with it. The perception module aims to understand the surrounding environment: obstacles, lanes, traffic signals, and so on. This high-level information is passed to the planning module, which generates decisions such as accelerating or turning right; these decisions also draw on a prediction module that estimates the future trajectories of obstacles as a precaution. The control module translates the decisions into low-level instructions, transmitted on the controller area network (CAN) bus in the form of CAN messages. Finally, the actuation module executes these instructions.

Since autonomous vehicles normally operate at high speed and usually carry humans, their safety and security are of great importance, and a number of recent papers have raised security concerns by demonstrating various attacks. Because the perception module leverages deep learning models for object detection, traffic sign recognition, and similar tasks, it is inherently subject to adversarial examples that are specially crafted to deceive neural networks. More severely, physically realizable adversarial examples and patches, implemented externally with printed stickers or projectors, can bypass the digital protections of autonomous systems, making them more practical, stealthier, and more difficult to defend against. On the other hand, due to its lack of secure authentication, the CAN protocol has been shown to be susceptible to ECU (electronic control unit) impersonation attacks, in which a compromised ECU broadcasts forged CAN messages to trigger dangerous actuation, such as deploying airbags on a highway under otherwise normal driving conditions.

In this dissertation, we study the security of autonomous vehicles from sensor perception to the CAN bus. We propose attacks that nullify a broad category of state-of-the-art (SOTA) defenses, and we develop defenses of our own that generalize across different attack methodologies. For the perception module, we develop a visible-light-based, system-aware camera attack, termed GhostImage, that can be realized physically and remotely (Chapter 3): we exploit the ghost effect of the camera system to convey adversarial noise that is not norm-bounded, thus bypassing SOTA adversarial example defenses. To detect perception attacks, we propose to adopt the idea of spatio-temporal consistency, demonstrated with two different methods: one is model-based (Chapter 4) and detects ghost-based camera attacks; the other is data-driven (Chapter 5) and detects object misclassification attacks effectively and efficiently while remaining agnostic to the attack methodology and to the underlying object detection and tracking system. In Chapter 6, we investigate the planning module and enhance the adversarial robustness of obstacle trajectory prediction. Finally, in Chapter 7, to evaluate the control and actuation modules, we propose a hill-climbing-style attack that defeats SOTA CAN bus intrusion detection systems based on multiple CAN frames.
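A note on why the norm bound matters for GhostImage-style attacks: many SOTA adversarial example defenses assume the perturbation is small, e.g., bounded in the L-infinity norm, whereas light injected through the lens's ghost effect can be arbitrarily bright. The toy Python check below is purely illustrative; the epsilon budget, image shape, and pixel values are our assumptions, not the dissertation's.

import numpy as np

EPS = 8 / 255  # an L-infinity budget commonly assumed by norm-bounded defenses

def within_defense_threat_model(clean, adv):
    # Norm-bounded defenses only reason about perturbations this small.
    return np.max(np.abs(adv - clean)) <= EPS

clean = np.zeros((224, 224, 3))
digital_adv = clean + 0.02         # classic adversarial example: tiny, bounded
ghost_adv = clean.copy()
ghost_adv[60:120, 60:120] = 0.9    # bright injected ghost pattern: unbounded

print(within_defense_threat_model(clean, digital_adv))  # True  -> covered
print(within_defense_threat_model(clean, ghost_adv))    # False -> outside the threat model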
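To make the spatio-temporal consistency idea concrete, here is a minimal Python sketch; it is not the dissertation's actual detector, and the IoU threshold, the majority-label rule, and all names are assumptions. A detection is flagged when it stops overlapping its own track (spatial) or flips away from the track's dominant label (temporal).

from dataclasses import dataclass

@dataclass
class Detection:
    label: str   # classifier output for this frame
    box: tuple   # (x1, y1, x2, y2) bounding box

def iou(a, b):
    # Intersection-over-union of two axis-aligned boxes.
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def consistent(track, current, iou_min=0.5):
    # Spatial check: the new box overlaps the previous one on this track.
    # Temporal check: the label matches the track's majority label.
    if not track:
        return True
    spatial = iou(track[-1].box, current.box) >= iou_min
    labels = [d.label for d in track]
    temporal = current.label == max(set(labels), key=labels.count)
    return spatial and temporal

# Example: a stop sign that suddenly "becomes" a speed-limit sign is flagged.
history = [Detection("stop", (100, 100, 160, 160)),
           Detection("stop", (102, 101, 162, 161))]
print(consistent(history, Detection("speed_limit_45", (104, 102, 164, 162))))  # False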
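The ECU impersonation problem follows directly from the frame format: a classical CAN frame carries an arbitration ID and up to eight payload bytes, but no authenticated sender field, so receivers cannot tell which node transmitted it. The small illustration below packs a frame in the Linux SocketCAN can_frame layout; the arbitration ID and payload are made up for the example.

import struct

def forge_frame(arbitration_id: int, payload: bytes) -> bytes:
    # Pack (ID, DLC, 3 pad bytes, 8 data bytes); nothing in the frame
    # identifies the sender, so any node on the bus may claim any ID.
    assert len(payload) <= 8
    return struct.pack("=IB3x8s", arbitration_id, len(payload),
                       payload.ljust(8, b"\x00"))

# A compromised ECU broadcasting under another controller's ID
# (the ID value here is hypothetical).
frame = forge_frame(0x050, b"\x01")
print(frame.hex())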
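Finally, a hill-climbing-style attack in its generic form: greedily keep any perturbation of the injected frame sequence that lowers a black-box IDS anomaly score, stopping once the score drops below the detection threshold. The sketch below uses a toy timing-based score and mutation of our own invention; the dissertation's actual objective and mutation operators differ.

import random

def hill_climb(seq, anomaly_score, mutate, threshold, iters=10_000):
    # Greedy hill climbing: accept a mutation only if it lowers the score.
    best, best_score = list(seq), anomaly_score(seq)
    for _ in range(iters):
        cand = mutate(list(best))
        score = anomaly_score(cand)
        if score < best_score:
            best, best_score = cand, score
        if best_score < threshold:     # the sequence now evades the detector
            break
    return best, best_score

# Toy IDS: scores inter-frame gaps (ms) by mean deviation from the benign
# 10 ms period; the attacker starts from fast injection (2 ms gaps).
def score(gaps):
    return sum(abs(g - 10.0) for g in gaps) / len(gaps)

def mutate(gaps):
    gaps[random.randrange(len(gaps))] += random.uniform(-0.5, 0.5)
    return gaps

evasive, s = hill_climb([2.0] * 20, score, mutate, threshold=0.5)
print(round(score([2.0] * 20), 2), "->", round(s, 2))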
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Electrical & Computer Engineering