Publisher
The University of Arizona.

Rights
Copyright © is held by the author. Digital access to this material is made possible by the University Libraries, University of Arizona. Further transmission, reproduction, presentation (such as public display or performance) of protected items is prohibited except with permission of the author.

Abstract
As the size of neural networks increases, the resources needed to support their execution also increase. This presents a barrier to creating neural networks that can be trained and executed within resource-limited embedded systems. To reduce the resources needed to execute neural networks, weight reduction is often the first target. A network that has been significantly pruned can be executed on-chip, that is, in low-SWaP (size, weight, and power) hardware. However, this does not enable training or pruning in embedded hardware, which first requires a full-sized network to fit within the restricted resources. We introduce two methods of network reduction that allow neural networks to be grown and trained within edge devices: Artificial Neurogenesis and Synaptic Input Consolidation.

Type
Electronic Dissertation
text
Degree Name
Ph.D.

Degree Level
doctoral

Degree Program
Graduate College
Electrical & Computer Engineering