Implementing online natural gradient learning: problems and solutions

Bibliographic Details
Published in: IEEE Transactions on Neural Networks, Vol. 17, no. 2, pp. 317-329
Main Author: Wan, W.
Format: Journal Article
Language: English
Published: New York, NY: IEEE, 01.03.2006 (Institute of Electrical and Electronics Engineers)
ISSN: 1045-9227
DOI: 10.1109/TNN.2005.863406


Summary: Online natural gradient learning is an efficient algorithm that addresses the slow learning speed and poor performance of the standard gradient descent method. However, implementing it raises several problems. In this paper, we propose a new algorithm that solves these problems, and we compare it with other known algorithms for online learning, including the Almeida-Langlois-Amaral-Plakhov (ALAP) algorithm, Vario-η, local adaptive learning rates, and learning with momentum, using sample data sets from Proben1 and normalized handwritten digits automatically scanned from envelopes by the U.S. Postal Service. The strong and weak points of these algorithms were analyzed and tested empirically. We found that using the online training error as the criterion for deciding whether the learning rate should be changed is not appropriate, and that our new algorithm performs better than the other existing online algorithms.
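The core idea the abstract refers to, online natural gradient descent, preconditions each stochastic gradient step with an estimate of the inverse Fisher information matrix that is updated recursively rather than recomputed. The sketch below illustrates this on a toy linear-regression problem; the function name, step sizes, and the rank-one Sherman-Morrison update scheme are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def online_natural_gradient(X, y, eta=0.1, eps=0.01):
    """Minimal sketch of online natural gradient descent for linear
    regression (an illustrative assumption, not the paper's method).

    A discounted Fisher estimate F <- (1-eps)*F + eps*x*x^T is kept in
    inverted form via the Sherman-Morrison identity, so each step costs
    O(d^2) with no explicit matrix inversion.
    """
    n_features = X.shape[1]
    w = np.zeros(n_features)
    F_inv = np.eye(n_features)  # running estimate of the inverse Fisher matrix
    for x, t in zip(X, y):
        err = float(w @ x - t)
        g = err * x             # gradient of the squared-error loss
        # Sherman-Morrison update of F_inv for the rank-one Fisher update.
        Fx = F_inv @ x
        F_inv = (F_inv - eps * np.outer(Fx, Fx)
                 / ((1 - eps) + eps * (x @ Fx))) / (1 - eps)
        w -= eta * F_inv @ g    # natural gradient step: w <- w - eta * F^{-1} g
    return w

# Toy usage: recover the weights of a noiseless linear map.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = X @ np.array([1.0, -2.0, 0.5])
w = online_natural_gradient(X, y)
```

On this noiseless toy problem the learned weights approach the true coefficients; the preconditioning by `F_inv` is what distinguishes the natural gradient step from plain stochastic gradient descent.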