Balancing the learning ability and memory demand of a perceptron-based dynamically trainable neural network
Affiliation: Univ Arizona, Dept Elect & Comp Engn
Citation: Richter, E., Valancius, S., McClanahan, J. et al. J Supercomput (2018) 74: 3211. https://doi.org/10.1007/s11227-018-2374-x
Journal: The Journal of Supercomputing
Rights: © Springer Science+Business Media, LLC, part of Springer Nature 2018
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at firstname.lastname@example.org.
Abstract: Artificial neural networks (ANNs) have become a popular means of solving complex problems in prediction-based applications such as image and natural language processing. Two prominent challenges in the neural network domain are the practicality of hardware implementation and dynamic training of the network. In this study, we address these challenges with a development methodology that balances the hardware footprint and the quality of the ANN. We use the well-known perceptron-based branch prediction problem as a case study for demonstrating this methodology. This problem is well suited to analyzing dynamic hardware implementations of ANNs because it already exists in hardware and trains dynamically. Using our hierarchical configuration search-space exploration, we show that we can decrease the memory footprint of a standard perceptron-based branch predictor by a factor of 2.3 with only a 0.6% decrease in prediction accuracy.
Note: 12-month embargo; published online 16 April 2018
Version: Final accepted manuscript
Sponsors: Raytheon Missile Systems [2017-UNI-0008]
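
For context on the case study named in the abstract, below is a minimal Python sketch of the standard perceptron branch predictor in Jiménez and Lin's formulation. The table size, history length, and weight width are illustrative assumptions chosen for readability, not parameters taken from the paper.

    # Minimal sketch of a perceptron-based branch predictor.
    # NUM_PERCEPTRONS, HISTORY_LEN, and WEIGHT_BITS are illustrative
    # assumptions; sizing these is the kind of trade-off the paper explores.
    NUM_PERCEPTRONS = 256   # entries in the perceptron table (assumed)
    HISTORY_LEN = 12        # global history bits per perceptron (assumed)
    WEIGHT_BITS = 8         # signed weight width; bounds the memory footprint
    W_MAX = (1 << (WEIGHT_BITS - 1)) - 1
    W_MIN = -(1 << (WEIGHT_BITS - 1))
    THETA = int(1.93 * HISTORY_LEN + 14)  # training threshold (Jiménez & Lin)

    # table[i] = [bias, w_1, ..., w_HISTORY_LEN], all weights start at zero.
    table = [[0] * (HISTORY_LEN + 1) for _ in range(NUM_PERCEPTRONS)]
    history = [1] * HISTORY_LEN  # global outcome history, entries in {-1, +1}

    def predict(pc):
        """Hash the branch address to a perceptron and compute its output."""
        idx = (pc >> 2) % NUM_PERCEPTRONS
        w = table[idx]
        y = w[0] + sum(wi * hi for wi, hi in zip(w[1:], history))
        return y >= 0, y, idx  # (predicted taken?, dot product, table index)

    def train(idx, y, taken):
        """Dynamic (online) training: update on a misprediction or low confidence."""
        t = 1 if taken else -1
        if (y >= 0) != taken or abs(y) <= THETA:
            w = table[idx]
            w[0] = max(W_MIN, min(W_MAX, w[0] + t))       # saturating bias update
            for i, hi in enumerate(history):
                w[i + 1] = max(W_MIN, min(W_MAX, w[i + 1] + t * hi))
        history.pop()             # shift the actual outcome into the
        history.insert(0, t)      # global history register

    # Usage: predict before the branch resolves, train once the outcome is known.
    predicted, y, idx = predict(0x400F3C)
    train(idx, y, taken=True)

The storage cost of this predictor is NUM_PERCEPTRONS × (HISTORY_LEN + 1) weights of WEIGHT_BITS bits each, so shrinking the history length or the weight width is exactly the footprint-versus-accuracy trade-off the abstract quantifies.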