Deep, Big, Simple Neural Nets for Handwritten Digit Recognition
Good old online backpropagation for plain multilayer perceptrons yields a very low 0.35% error rate on the MNIST handwritten digits benchmark. All we need to achieve this best result so far are many hidden layers, many neurons per layer, numerous deformed training images to avoid overfitting, and graphics cards to greatly speed up learning.
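The recipe the abstract describes is deliberately simple: a plain deep MLP trained one example at a time with backpropagation, with the training set continually expanded by deforming the images. The sketch below illustrates both ingredients in NumPy/SciPy; the layer sizes, learning rate, tanh/softmax activations, and deformation parameters are illustrative assumptions, not the authors' exact settings, and the distortion follows the general Simard-style elastic-deformation idea. The paper's speed comes from a GPU implementation; this CPU sketch only shows the math.

```python
# Hypothetical sketch: online backprop for a deep MLP on 28x28 digit
# images, plus elastic deformation of training images. Sizes and
# hyperparameters are illustrative, not the paper's exact settings.
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

rng = np.random.default_rng(0)

# 784 input pixels -> wide tanh hidden layers -> 10-way softmax.
sizes = [784, 1000, 500, 10]
weights = [rng.normal(0.0, sizes[i] ** -0.5, (sizes[i], sizes[i + 1]))
           for i in range(len(sizes) - 1)]
biases = [np.zeros(n) for n in sizes[1:]]

def elastic_deform(img, alpha=8.0, sigma=4.0):
    """Elastic distortion: smooth a random displacement field with a
    Gaussian, then resample the image along the displaced grid."""
    dx = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    dy = gaussian_filter(rng.uniform(-1, 1, img.shape), sigma) * alpha
    ys, xs = np.meshgrid(np.arange(img.shape[0]),
                         np.arange(img.shape[1]), indexing="ij")
    return map_coordinates(img, [ys + dy, xs + dx], order=1)

def forward(x):
    """Forward pass; returns the activations of every layer."""
    acts = [x]
    for i, (W, b) in enumerate(zip(weights, biases)):
        z = acts[-1] @ W + b
        if i < len(weights) - 1:
            acts.append(np.tanh(z))          # hidden layers
        else:
            e = np.exp(z - z.max())          # numerically stable softmax
            acts.append(e / e.sum())
    return acts

def train_example(x, target, lr):
    """One online (per-example) gradient step, cross-entropy loss."""
    acts = forward(x)
    delta = acts[-1] - target                # softmax/cross-entropy grad
    for i in reversed(range(len(weights))):
        grad_W = np.outer(acts[i], delta)
        if i > 0:                            # propagate error first,
            delta_prev = (weights[i] @ delta) * (1 - acts[i] ** 2)
        weights[i] -= lr * grad_W            # then update the layer
        biases[i] -= lr * delta
        if i > 0:
            delta = delta_prev

# Toy loop with random pixels standing in for MNIST images: deform
# each image before its gradient step, as the paper does per epoch.
for step in range(100):
    img = rng.random((28, 28))
    target = np.zeros(10)
    target[rng.integers(10)] = 1.0
    train_example(elastic_deform(img).ravel(), target, lr=0.01)
```

Note the key property of the online variant: weights change after every single (deformed) example, so each epoch effectively presents a fresh training set, which is what lets such large plain nets avoid overfitting.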
Published in | Neural Computation, Vol. 22, No. 12, pp. 3207-3220
---|---
Main Authors | Dan Claudiu Cireşan, Ueli Meier, Luca Maria Gambardella, Jürgen Schmidhuber
Format | Journal Article
Language | English
Published | Cambridge, MA: MIT Press, 01.12.2010
ISSN | 0899-7667
EISSN | 1530-888X
DOI | 10.1162/NECO_a_00052