A Geometric Framework for Adversarial Vulnerability in Machine Learning
dc.contributor.advisor | Glickenstein, David | |
dc.contributor.author | Bell, Brian Wesley | |
dc.creator | Bell, Brian Wesley | |
dc.date.accessioned | 2023-12-20T05:09:31Z | |
dc.date.available | 2023-12-20T05:09:31Z | |
dc.date.issued | 2023 | |
dc.identifier.citation | Bell, Brian Wesley. (2023). A Geometric Framework for Adversarial Vulnerability in Machine Learning (Doctoral dissertation, University of Arizona, Tucson, USA). | |
dc.identifier.uri | http://hdl.handle.net/10150/670353 | |
dc.description.abstract | This work starts with the intention of using mathematics to understand the intriguing vulnerability observed by Szegedy et al. (2014) within artificial neural networks. Along the way, we will develop some novel tools with applications far outside of the adversarial domain, while building a rigorous mathematical framework to examine this problem. Our goal is to build out a theory that can support increasingly sophisticated conjectures about adversarial attacks, with a particular focus on the so-called “Dimpled Manifold Hypothesis” of Shamir, Melamed, and BenShmuel (2021). Chapter one will cover the history and architecture of neural networks. Chapter two is focused on the background of adversarial vulnerability; starting from the seminal paper by Szegedy et al. (2014), we will develop the theory of adversarial perturbations and attacks. Chapter three will build a theory of persistence related to Ricci curvature, which can be used to measure properties of decision boundaries. We will use this foundation to state a conjecture concerning adversarial attacks. Chapters four and five represent a sudden and wonderful digression that examines an intriguing related body of theory for spatial analysis of neural networks as approximations of kernel machines, and develops a novel theory for representing neural networks with bilinear maps. These heavily mathematical chapters will set up a framework and begin exploring applications of what may become a very important theoretical foundation for analyzing neural network learning with spatial and geometric information. We will conclude by setting up our new methods to address the conjecture from chapter three in continuing research. | |
dc.language.iso | en | |
dc.publisher | The University of Arizona. | |
dc.rights | Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author. | |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.subject | Adversarial Attacks | |
dc.subject | Geometry | |
dc.subject | Neural Networks | |
dc.subject | Neural Tangent Kernel | |
dc.subject | Robustness | |
dc.title | A Geometric Framework for Adversarial Vulnerability in Machine Learning | |
dc.type | Electronic Dissertation | |
dc.type | text | |
thesis.degree.grantor | University of Arizona | |
thesis.degree.level | doctoral | |
dc.contributor.committeemember | Lin, Kevin | |
dc.contributor.committeemember | Gupta, Hoshin | |
dc.contributor.committeemember | Rychlik, Marek R. | |
thesis.degree.discipline | Graduate College | |
thesis.degree.discipline | Mathematics | |
thesis.degree.name | Ph.D. | |
refterms.dateFOA | 2023-12-20T05:09:31Z |