An optimized second order stochastic learning algorithm for neural network training

Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 186, pp. 74-89
Main Authors: Liew, Shan Sung; Khalil-Hani, Mohamed; Bakhteri, Rabia
Format: Journal Article
Language: English
Published: Elsevier B.V., 19.04.2016
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2015.12.076

More Information
Summary: This paper proposes an improved stochastic second order learning algorithm for supervised neural network training. The proposed algorithm, named bounded stochastic diagonal Levenberg–Marquardt (B-SDLM), utilizes both gradient and curvature information to achieve fast convergence while requiring only minimal computational overhead beyond the stochastic gradient descent (SGD) method. B-SDLM has only a single hyperparameter, whereas most other learning algorithms suffer from the hyperparameter overfitting problem because they have more hyperparameters to tune. Experiments using multilayer perceptron (MLP) and convolutional neural network (CNN) models have shown that B-SDLM outperforms other learning algorithms in classification accuracy and computational efficiency (about 5.3% faster than SGD on the mnist-rot-bg-img database). It classifies all testing samples correctly in the face recognition case study based on the AR Purdue database. In addition, experiments on handwritten digit classification case studies show significant improvements in the testing misclassification error rates (MCRs) of 19.6% on the MNIST database and 17.5% on the mnist-rot-bg-img database. The computationally expensive Hessian calculations are kept to a minimum by using just 0.05% of the training samples for the Hessian estimation, or by updating the learning rates only once every two training epochs, while maintaining or even achieving lower testing MCRs. It is also shown that B-SDLM works well in the mini-batch learning mode, achieving a 3.32× performance speedup when the proposed algorithm is deployed in a distributed learning environment on a quad-core processor.
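
To illustrate the kind of update the summary describes, the sketch below shows a per-parameter learning rate derived from a diagonal Hessian estimate and clipped to a bounded range. It is a minimal sketch under assumed values for the damping term and the bound; the function name and parameters are illustrative assumptions, not the authors' exact B-SDLM formulation.

```python
# Minimal sketch of a diagonal Levenberg-Marquardt style update with a bounded
# per-parameter learning rate, assuming the standard SDLM form
# lr_k = epsilon / (h_kk + mu). The damping term mu and the bound lr_max are
# assumed values for illustration only.
import numpy as np

def bounded_sdlm_step(w, grad, hess_diag, epsilon=0.01, mu=0.02, lr_max=0.1):
    """Apply one update to the parameter vector w.

    w         : parameters (ndarray)
    grad      : gradient of the loss w.r.t. w
    hess_diag : running estimate of the diagonal of the Hessian
                (e.g. accumulated from a small subset of training samples)
    epsilon   : global learning rate (the single tunable hyperparameter)
    mu        : damping term keeping the denominator positive (assumed value)
    lr_max    : upper bound on the effective learning rate (assumed value)
    """
    # Per-parameter learning rate: larger steps where curvature is small,
    # smaller steps where curvature is large.
    lr = epsilon / (np.abs(hess_diag) + mu)
    # Bound the learning rate so noisy or near-zero curvature estimates
    # cannot produce excessively large steps.
    lr = np.minimum(lr, lr_max)
    return w - lr * grad

# Purely illustrative usage with random data.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
grad = rng.normal(size=10)
hess_diag = np.abs(rng.normal(size=10))
w = bounded_sdlm_step(w, grad, hess_diag)
```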