Resilient Machine Learning (rML) Development Environment Against Adversarial Attacks
Author
Yao, Likai
Issue Date
2025
Keywords
Adversarial Machine Learning
Anonymization
Dynamic Data Driven Application Systems
Moving Target Defense
Resiliency
Resilient Decision Support
Advisor
Hariri, Salim
Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Machine learning (ML) algorithms are widely used in critical automated systems, including Dynamic Data Driven Applications Systems (DDDAS) such as financial trading, autonomous vehicles, and intrusion detection systems. However, malicious adversaries have strong incentives to manipulate the operation of ML algorithms to achieve their objectives, attempting to maliciously alter the model or data during either training or testing. Adversarial ML (AML) attackers may or may not know the details of the ML models and their parameters. Defenses against AML attacks include making the ML model robust, validating and verifying inputs and outputs, and changing the ML architecture. We present a resilient machine learning (rML) development environment that helps build machine learning applications able to withstand AML attacks during both the training and testing processes. Different techniques can be used to secure the system, such as asymmetric encryption and proof of past data for training, and anonymization techniques for testing. The environment is based on the Resilient DDDAS paradigm, utilizes Moving Target Defense (MTD) theory, and can be applied to different scenarios such as image processing and Industrial Control Systems. We present several applications built with the rML architecture, and the experimental results show that rML can tolerate adversarial attacks and achieve high classification accuracy with a small execution-time overhead.
Type
text
Electronic Dissertation
Degree Name
Ph.D.
Degree Level
doctoral
Degree Program
Graduate College
Electrical & Computer Engineering
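The Moving Target Defense idea mentioned in the abstract can be illustrated with a minimal sketch: serve each inference query from a randomly chosen member of a model pool, so an adversary cannot craft inputs against a single fixed model. The classifiers and class names below are hypothetical stand-ins, not part of the dissertation's actual implementation.

```python
import random

# Hypothetical classifier: thresholds the first feature.
def model_a(x):
    return 1 if x[0] > 0.5 else 0

# Hypothetical classifier: thresholds the sum of features.
def model_b(x):
    return 1 if sum(x) > 1.0 else 0

class MovingTargetClassifier:
    """Sketch of MTD for ML inference: rotate the serving model
    randomly on every query to present a moving attack surface."""

    def __init__(self, models, seed=None):
        self.models = list(models)
        self.rng = random.Random(seed)

    def predict(self, x):
        # Pick a model at random for this query.
        model = self.rng.choice(self.models)
        return model(x)

clf = MovingTargetClassifier([model_a, model_b], seed=42)
print(clf.predict([0.9, 0.4]))  # both pool members agree -> 1
```

In practice the pool would contain independently trained models with diverse architectures or training data, so that an adversarial example effective against one member is less likely to transfer to the randomly selected alternative.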
