On improving the conditioning of extreme learning machine: A linear case
Published in | 2009 7th International Conference on Information, Communications and Signal Processing, pp. 1 - 5 |
---|---|
Main Authors | , , , |
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.12.2009 |
Subjects | |
ISBN | 9781424446568 1424446562 |
DOI | 10.1109/ICICS.2009.5397617 |
Summary: Recently, the Extreme Learning Machine (ELM) has been attracting attention for its simple and fast training algorithm, which selects input weights at random. Given sufficient hidden neurons, ELM achieves comparable performance on a wide range of regression and classification problems. However, in this paper we argue that random input weight selection may lead to an ill-conditioned problem, whose solutions are numerically unstable. To improve the conditioning of ELM, we propose an input weight selection algorithm for an ELM with linear hidden neurons. Experimental results show that, with the proposed algorithm, accuracy is maintained while the condition number remains stable.
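To give a concrete feel for the conditioning issue the summary raises, the sketch below builds an ELM with linear hidden neurons and random input weights, prints the condition number of the hidden-layer output matrix, and then solves for the output weights by least squares. It is a minimal illustration under assumed sizes and variable names (n_samples, n_hidden, etc.), not the input weight selection algorithm proposed in the paper.

```python
import numpy as np

# Minimal sketch of an ELM with linear hidden neurons and random input weights.
# All sizes and names here are illustrative assumptions, not taken from the paper.
rng = np.random.default_rng(0)

n_samples, n_features, n_hidden = 200, 5, 50
X = rng.standard_normal((n_samples, n_features))
y = X @ rng.standard_normal(n_features) + 0.1 * rng.standard_normal(n_samples)

# Random input weights and biases, as in standard ELM training.
W = rng.standard_normal((n_features, n_hidden))
b = rng.standard_normal(n_hidden)

# Hidden-layer output matrix with a linear activation: H = X W + b.
# With more hidden neurons than input features, H is (numerically) rank-deficient,
# so its condition number is enormous and the least-squares problem is ill-conditioned.
H = X @ W + b
print("condition number of H:", np.linalg.cond(H))

# Output weights via the Moore-Penrose pseudoinverse (least-squares solution);
# when cond(H) is large, this solution is numerically unstable.
beta = np.linalg.pinv(H) @ y
```

The sketch only measures the ill-conditioning caused by random input weights; it does not implement the paper's proposed selection algorithm.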