Affiliation: Univ Arizona, Dept Psychol; Univ Arizona, Cognit Sci Program
Publisher: NATURE PUBLISHING GROUP
Citation: Wilson, R.C., Shenhav, A., Straccia, M. et al. The Eighty Five Percent Rule for optimal learning. Nat Commun 10, 4646 (2019). doi:10.1038/s41467-019-12552-4
Rights: Copyright © The Author(s) 2019. Open Access. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Collection Information: This item from the UA Faculty Publications collection is made available by the University of Arizona with support from the University of Arizona Libraries. If you have questions, please contact us at email@example.com.
Abstract: Researchers and educators have long wrestled with the question of how best to teach their clients, be they humans, non-human animals, or machines. Here, we examine the effect of a single variable, the difficulty of training, on the rate of learning. In many situations we find that there is a sweet spot in which training is neither too easy nor too hard, and where learning progresses most quickly. We derive conditions for this sweet spot for a broad class of learning algorithms in the context of binary classification tasks. For all of these stochastic gradient-descent based learning algorithms, we find that the optimal error rate for training is around 15.87% or, conversely, that the optimal training accuracy is about 85%. We demonstrate the efficacy of this 'Eighty Five Percent Rule' for artificial neural networks used in AI and biologically plausible neural networks thought to describe animal learning.
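As a quick numerical check (an illustrative sketch, not code from the paper), the ~15.87% figure quoted in the abstract matches the standard normal CDF evaluated at -1, which is consistent with the Gaussian-noise model of binary classification the derivation assumes; the helper name `std_normal_cdf` below is our own.

```python
import math

def std_normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Phi(-1) ~ 0.1587: the optimal training error rate from the abstract.
optimal_error = std_normal_cdf(-1.0)
# Its complement ~ 0.8413 is the "about 85%" optimal training accuracy.
optimal_accuracy = 1.0 - optimal_error
print(f"error={optimal_error:.4f}, accuracy={optimal_accuracy:.4f}")
# → error=0.1587, accuracy=0.8413
```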
Note: Open access journal
Version: Final published version
Sponsors: John Templeton Foundation; Center of Biomedical Research Excellence grant from National Institute of General Medical Sciences [P20GM103645]; United States Department of Health & Human Services, National Institutes of Health (NIH) - National Institute on Aging (NIA) [R56 AG061888]