LANN-SVD: A Non-Iterative SVD-Based Learning Algorithm for One-Layer Neural Networks
| Published in | IEEE Transactions on Neural Networks and Learning Systems, Vol. 29, no. 8, pp. 3900-3905 |
|---|---|
| Main Authors | , , |
| Format | Journal Article |
| Language | English |
| Published | United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.08.2018 |
| Subjects | |
| ISSN | 2162-237X, 2162-2388 |
| DOI | 10.1109/TNNLS.2017.2738118 |
| Summary: | In the scope of data analytics, the volume of a data set can be defined as the product of its instance size and its dimensionality. In many real problems, data sets are large mainly in only one of these aspects. Machine learning methods proposed in the literature are able to learn efficiently in only one of these two situations: when the number of variables is much greater than the number of instances, or vice versa. However, no existing proposal efficiently handles either circumstance in a large-scale scenario. In this brief, we present an approach that integrally addresses both situations, large dimensionality or large instance size, by using a singular value decomposition (SVD) within a learning algorithm for one-layer feedforward neural networks. As a result, a noniterative solution is obtained in which the weights can be calculated in closed form, thereby avoiding slow convergence and hyperparameter tuning. The proposed learning method, LANN-SVD for short, offers good computational efficiency for large-scale data analytics. Comprehensive comparisons were conducted to assess LANN-SVD against other state-of-the-art algorithms. The results exhibit the superior efficiency of the proposed method in either circumstance. |
|---|---|
| ISSN: | 2162-237X, 2162-2388 |
| DOI: | 10.1109/TNNLS.2017.2738118 |
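
The record gives only the abstract, but the core idea it describes, computing the weights of a one-layer feedforward network noniteratively via a singular value decomposition, can be sketched briefly. The snippet below is a minimal illustration under that reading, not the authors' exact formulation: the function name `lann_svd_fit`, the bias handling, and the plain least-squares target are assumptions made for the example.

```python
import numpy as np

def lann_svd_fit(X, y, rcond=1e-12):
    """Closed-form least-squares weights for a one-layer linear output,
    computed with the thin SVD (illustrative sketch, not the paper's exact method)."""
    # Thin SVD: X = U @ diag(s) @ Vt; full_matrices=False keeps only min(n, d) components
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # Pseudoinverse of the singular values: invert only those above a tolerance
    s_inv = np.where(s > rcond * s.max(), 1.0 / s, 0.0)
    # Minimum-norm least-squares solution: W = V @ diag(s_inv) @ U.T @ y
    return Vt.T @ (s_inv[:, None] * (U.T @ y))

# Usage sketch: append a bias column, fit once (no iterations), then predict.
rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 50))              # many instances, few variables
y = X @ rng.standard_normal((50, 1)) + 0.01 * rng.standard_normal((1000, 1))
Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # bias term
W = lann_svd_fit(Xb, y)
print(np.linalg.norm(Xb @ W - y))                # small residual
```

Because the thin SVD keeps only min(n, d) components, the same routine stays tractable whether instances greatly outnumber variables or vice versa, which is the scenario the abstract targets.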