Balancing the learning ability and memory demand of a perceptron-based dynamically trainable neural network
dc.contributor.author | Richter, Edward | |
dc.contributor.author | Valancius, Spencer | |
dc.contributor.author | McClanahan, Josiah | |
dc.contributor.author | Mixter, John | |
dc.contributor.author | Akoglu, Ali | |
dc.date.accessioned | 2018-08-14T18:12:18Z | |
dc.date.available | 2018-08-14T18:12:18Z | |
dc.date.issued | 2018-07 | |
dc.identifier.citation | Richter, E., Valancius, S., McClanahan, J. et al. J Supercomput (2018) 74: 3211. https://doi.org/10.1007/s11227-018-2374-x | en_US |
dc.identifier.issn | 0920-8542 | |
dc.identifier.issn | 1573-0484 | |
dc.identifier.doi | 10.1007/s11227-018-2374-x | |
dc.identifier.uri | http://hdl.handle.net/10150/628514 | |
dc.description.abstract | Artificial neural networks (ANNs) have become a popular means of solving complex problems in prediction-based applications such as image and natural language processing. Two challenges prominent in the neural network domain are the practicality of hardware implementation and dynamically training the network. In this study, we address these challenges with a development methodology that balances the hardware footprint and the quality of the ANN. We use the well-known perceptron-based branch prediction problem as a case study for demonstrating this methodology. This problem is well suited for analyzing dynamic hardware implementations of ANNs because it is realized in hardware and trains dynamically. Using our hierarchical configuration search space exploration, we show that we can decrease the memory footprint of a standard perceptron-based branch predictor by a factor of 2.3 with only a 0.6% decrease in prediction accuracy. | en_US |
dc.description.sponsorship | Raytheon Missile Systems [2017-UNI-0008] | en_US |
dc.language.iso | en | en_US |
dc.publisher | Springer | en_US |
dc.relation.url | http://link.springer.com/10.1007/s11227-018-2374-x | en_US |
dc.rights | © Springer Science+Business Media, LLC, part of Springer Nature 2018. | en_US |
dc.rights.uri | http://rightsstatements.org/vocab/InC/1.0/ | |
dc.subject | Artificial neural network | en_US |
dc.subject | Branch prediction | en_US |
dc.subject | Perceptron | en_US |
dc.subject | SimpleScalar | en_US |
dc.title | Balancing the learning ability and memory demand of a perceptron-based dynamically trainable neural network | en_US |
dc.type | Article | en_US |
dc.contributor.department | Univ Arizona, Dept Elect & Comp Engn | en_US |
dc.identifier.journal | Journal of Supercomputing | en_US |
dc.description.note | 12 month embargo; published online: 16 April 2018 | en_US |
dc.description.collectioninformation | This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at repository@u.library.arizona.edu. | en_US |
dc.eprint.version | Final accepted manuscript | en_US |
dc.source.journaltitle | The Journal of Supercomputing | |
dc.source.volume | 74 | |
dc.source.issue | 7 | |
dc.source.beginpage | 3211 | |
dc.source.endpage | 3235 |