Extreme learning machine: Theory and applications
| Published in | Neurocomputing (Amsterdam), Vol. 70, No. 1, pp. 489–501 |
|---|---|
| Main Authors | Guang-Bin Huang, Qin-Yu Zhu, Chee-Kheong Siew |
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V., 01.12.2006 |
| ISSN | 0925-2312, 1872-8286 |
| DOI | 10.1016/j.neucom.2005.12.126 |
| Summary: | It is clear that the learning speed of feedforward neural networks is in general far slower than required, and it has been a major bottleneck in their applications for the past decades. Two key reasons behind this may be: (1) slow gradient-based learning algorithms are extensively used to train neural networks, and (2) all the parameters of the networks are tuned iteratively by such learning algorithms. Unlike these conventional implementations, this paper proposes a new learning algorithm called extreme learning machine (ELM) for single-hidden layer feedforward neural networks (SLFNs), which randomly chooses hidden nodes and analytically determines the output weights of SLFNs. In theory, this algorithm tends to provide good generalization performance at extremely fast learning speed. The experimental results based on a few artificial and real benchmark function approximation and classification problems, including very large complex applications, show that the new algorithm can produce good generalization performance in most cases and can learn thousands of times faster than conventional popular learning algorithms for feedforward neural networks.¹ |
|---|---|

¹ For the preliminary idea of the ELM algorithm, refer to "Extreme Learning Machine: A New Learning Scheme of Feedforward Neural Networks", Proceedings of the International Joint Conference on Neural Networks (IJCNN 2004), Budapest, Hungary, 25–29 July 2004.
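The summary describes the two steps of ELM: hidden-node parameters are chosen at random and never tuned, and the output weights are then determined analytically. Below is a minimal illustrative sketch in NumPy, assuming a sigmoid hidden layer and the standard minimum-norm least-squares solution via the Moore–Penrose pseudoinverse; function and variable names are hypothetical, and the abstract itself does not specify these implementation details.

```python
import numpy as np

def elm_train(X, T, n_hidden, rng=np.random.default_rng(0)):
    """X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets."""
    # (1) Randomly choose the hidden-node parameters (input weights and
    #     biases); in ELM these are fixed and never tuned afterwards.
    W = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    # (2) Compute the hidden-layer output matrix H (sigmoid activation
    #     assumed here for illustration).
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    # (3) Analytically determine the output weights as the minimum-norm
    #     least-squares solution beta = pinv(H) @ T.
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    return H @ beta

# Toy usage: approximate y = sin(x) on 200 points with 50 hidden nodes.
X = np.linspace(0, 2 * np.pi, 200).reshape(-1, 1)
T = np.sin(X)
W, b, beta = elm_train(X, T, n_hidden=50)
print(np.max(np.abs(elm_predict(X, W, b, beta) - T)))
```

Because training reduces to one matrix product with a pseudoinverse rather than iterative gradient descent, a single pass suffices, which is the source of the speedup the abstract reports.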