Gauss-Newton approximation to Bayesian learning

Bibliographic Details
Published in: 1997 IEEE International Conference on Neural Networks, Vol. 3, pp. 1930-1935
Main Authors: Dan Foresee, F.; Hagan, M.T.
Format: Conference Proceeding
Language: English; Japanese
Published: IEEE, 1997
ISBN: 0780341228; 9780780341227
DOI: 10.1109/ICNN.1997.614194

More Information
Summary: This paper describes the application of Bayesian regularization to the training of feedforward neural networks. A Gauss-Newton approximation to the Hessian matrix, which can be conveniently implemented within the framework of the Levenberg-Marquardt algorithm, is used to reduce the computational overhead. The resulting algorithm is demonstrated on a simple test problem and is then applied to three practical problems. The results demonstrate that the algorithm produces networks which have excellent generalization capabilities.
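
For a concrete picture of the technique the summary describes, the following is a minimal, illustrative Python sketch (not the authors' code) of Bayesian-regularized Levenberg-Marquardt training on a toy curve-fitting problem. The regularized objective F = beta*E_D + alpha*E_W, the Gauss-Newton Hessian approximation 2*beta*J'J + 2*alpha*I, and the re-estimation of alpha and beta through the effective number of parameters gamma follow the standard MacKay-style evidence framework the paper builds on; the network size, toy data, damping schedule, and the finite-difference Jacobian are arbitrary choices made here for brevity.

import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy samples of a sine curve (illustrative choice, not from the paper).
x = np.linspace(-1.0, 1.0, 40)
t = np.sin(np.pi * x) + 0.05 * rng.standard_normal(x.size)

H = 6                      # hidden units (arbitrary)
n_w = 3 * H + 1            # total number of weights and biases

def unpack(w):
    # Split the flat parameter vector into layer weights and biases.
    w1 = w[:H]             # input-to-hidden weights, shape (H,)
    b1 = w[H:2 * H]        # hidden biases
    w2 = w[2 * H:3 * H]    # hidden-to-output weights
    b2 = w[3 * H]          # output bias (scalar)
    return w1, b1, w2, b2

def model(w, x):
    # 1-H-1 feedforward network: tanh hidden layer, linear output.
    w1, b1, w2, b2 = unpack(w)
    h = np.tanh(np.outer(w1, x) + b1[:, None])   # (H, N)
    return w2 @ h + b2                           # (N,)

def jacobian_fd(w, x, eps=1e-6):
    # Forward-difference Jacobian of the network output w.r.t. the weights.
    # (A real implementation would obtain this by backpropagation, as in
    # standard Levenberg-Marquardt training.)
    y0 = model(w, x)
    J = np.empty((x.size, w.size))
    for i in range(w.size):
        wp = w.copy()
        wp[i] += eps
        J[:, i] = (model(wp, x) - y0) / eps
    return J

w = 0.5 * rng.standard_normal(n_w)
alpha, beta, mu = 0.01, 1.0, 0.005   # regularizer, noise precision, LM damping
gamma = float(n_w)

for it in range(200):
    e = t - model(w, x)                          # residuals
    J = jacobian_fd(w, x)
    E_D, E_W = e @ e, w @ w
    # Gauss-Newton approximation to the Hessian of F = beta*E_D + alpha*E_W.
    G = 2.0 * beta * (J.T @ J) + 2.0 * alpha * np.eye(n_w)
    g = -2.0 * beta * (J.T @ e) + 2.0 * alpha * w    # gradient of F
    # Damped (Levenberg-Marquardt) step.
    dw = np.linalg.solve(G + mu * np.eye(n_w), -g)
    w_new = w + dw
    e_new = t - model(w_new, x)
    F_old = beta * E_D + alpha * E_W
    F_new = beta * (e_new @ e_new) + alpha * (w_new @ w_new)
    if F_new < F_old:
        w, mu = w_new, mu * 0.5                  # accept step, relax damping
    else:
        mu *= 2.0                                # reject step, increase damping
        continue
    # Re-estimate the hyperparameters from the effective number of parameters.
    gamma = n_w - 2.0 * alpha * np.trace(np.linalg.inv(G))
    alpha = gamma / (2.0 * (w @ w) + 1e-12)
    beta = (x.size - gamma) / (2.0 * (e_new @ e_new) + 1e-12)

print("effective number of parameters gamma = %.1f of %d" % (gamma, n_w))

The finite-difference Jacobian is used only to keep the sketch self-contained; in practice the Jacobian comes from backpropagation, which is what makes the Gauss-Newton approximation inexpensive to embed in Levenberg-Marquardt as the paper describes.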