
dc.contributor.advisor: Glickenstein, David
dc.contributor.author: Bell, Brian Wesley
dc.creator: Bell, Brian Wesley
dc.date.accessioned: 2023-12-20T05:09:31Z
dc.date.available: 2023-12-20T05:09:31Z
dc.date.issued: 2023
dc.identifier.citation: Bell, Brian Wesley. (2023). A Geometric Framework for Adversarial Vulnerability in Machine Learning (Doctoral dissertation, University of Arizona, Tucson, USA).
dc.identifier.uri: http://hdl.handle.net/10150/670353
dc.description.abstract: This work starts with the intention of using mathematics to understand the intriguing vulnerability observed by Szegedy et al. (2014) within artificial neural networks. Along the way, we will develop some novel tools with applications far beyond the adversarial domain, while building a rigorous mathematical framework in which to examine this problem. Our goal is to build out theory that can support increasingly sophisticated conjectures about adversarial attacks, with a particular focus on the so-called “Dimpled Manifold Hypothesis” of Shamir, Melamed, and BenShmuel (2021). Chapter one covers the history and architecture of neural networks. Chapter two focuses on the background of adversarial vulnerability: starting from the seminal paper by Szegedy et al. (2014), we develop the theory of adversarial perturbations and attacks. Chapter three builds a theory of persistence related to Ricci curvature, which can be used to measure properties of decision boundaries; we use this foundation to make a conjecture about adversarial attacks. Chapters four and five represent a sudden and wonderful digression that examines an intriguing related body of theory for spatial analysis of neural networks as approximations of kernel machines, and develops it into a novel theory for representing neural networks with bilinear maps. These heavily mathematical chapters set up a framework and begin exploring applications of what may become a very important theoretical foundation for analyzing neural network learning with spatial and geometric information. We conclude by setting up our new methods to address the conjecture from chapter three in continuing research.
dc.language.iso: en
dc.publisher: The University of Arizona.
dc.rights: Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
dc.rights.uri: http://rightsstatements.org/vocab/InC/1.0/
dc.subject: Adversarial Attacks
dc.subject: Geometry
dc.subject: Neural Networks
dc.subject: Neural Tangent Kernel
dc.subject: Robustness
dc.title: A Geometric Framework for Adversarial Vulnerability in Machine Learning
dc.type: Electronic Dissertation
dc.type: text
thesis.degree.grantor: University of Arizona
thesis.degree.level: doctoral
dc.contributor.committeemember: Lin, Kevin
dc.contributor.committeemember: Gupta, Hoshin
dc.contributor.committeemember: Rychlik, Marek R.
thesis.degree.discipline: Graduate College
thesis.degree.discipline: Mathematics
thesis.degree.name: Ph.D.
refterms.dateFOA: 2023-12-20T05:09:31Z


Files in this item

Name: azu_etd_21052_sip1_m.pdf
Size: 58.01 MB
Format: PDF
