Publisher
The University of Arizona.
Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction or presentation (such as public display or performance) of protected items is prohibited except with permission of the author.
Abstract
Much of modern statistical machine learning relies on deep neural networks, whose underlying theory and mathematical properties are not yet well understood. This thesis focuses on the theory of perceptrons, a form of shallow neural network explored most thoroughly in Minsky & Papert’s book “Perceptrons: An Introduction to Computational Geometry”. Their analysis relied on groups that describe the global symmetries of what is to be computed. However, every real image has an input space of finite width and height, and machine learning systems often have disconnected input spaces. This means that real-world systems can rely only on a notion of local symmetry, if any symmetry at all. This thesis explores the implications of a bounded and possibly disconnected input space and generalizes the Minsky & Papert result, proving that the conclusions of the Group Invariance Theorem can be retained by using groupoids in place of groups.
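For reference, the classical result being generalized can be stated roughly as follows; this is a paraphrase of Minsky & Papert’s Group Invariance Theorem supplied here for context, not text taken from the thesis itself. Let $R$ be a finite retina, let $G$ be a finite group of transformations of $R$, and let $\Phi$ be a family of partial predicates closed under $G$, i.e. $\varphi \circ g \in \Phi$ whenever $\varphi \in \Phi$ and $g \in G$. If a predicate $\psi \in L(\Phi)$ is invariant under $G$, then it admits a representation
\[
  \psi(X) \;=\; \Bigl[\textstyle\sum_{\varphi \in \Phi} \alpha_{\varphi}\,\varphi(X) \;>\; \theta\Bigr]
\]
in which the coefficients are constant on $G$-orbits of predicates, that is, $\alpha_{\varphi} = \alpha_{\varphi \circ g}$ for all $\varphi \in \Phi$ and $g \in G$. It is this dependence on a global symmetry group $G$ that the thesis replaces with a groupoid of local symmetries.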
Type
text; Electronic Thesis
Degree Name
B.A.
Degree Level
bachelors
Degree Program
Honors College; Mathematics