Establishing Echo State Network in Order to Be Used in Online Application

Bibliographic Details
Published in: Operations Research Forum, Vol. 6, No. 3, p. 115
Main Authors: Saadat, Javad; Farshad, Mohsen; Eliasi, Hussein; Mehr, Kazem Shokoohi
Format: Journal Article
Language: English
Published: Cham: Springer International Publishing, 01.09.2025 (Springer Nature B.V.)
ISSN: 2662-2556
DOI: 10.1007/s43069-025-00514-0

Summary: Reservoir computing is an efficient computational framework that provides a practical approach to training recurrent neural networks. The echo state network is a simple reservoir computing model consisting of three layers: an input layer, a dynamic reservoir, and an output layer. The weights of the connections into the reservoir are generated randomly and remain fixed during training, so many units can be used in the dynamic reservoir to produce richer dynamics. In practice, however, some reservoir units behave very similarly to one another. This similarity produces a large eigenvalue spread in the autocorrelation matrix of the reservoir states, which slows the convergence of the online training algorithm or prevents it from converging at all. In this study, a mutual correlation criterion is used to identify units with similar dynamics; from each group of similar units, one is kept as a representative and the others are disconnected from the output layer. The number of trainable connections is thus reduced without losing the dynamic diversity of the reservoir. In addition to lowering the computational cost, the proposed method reduces the eigenvalue spread of the autocorrelation matrix of the reservoir states, simultaneously improving both the convergence speed and the accuracy of online echo state network training. Finally, Mackey–Glass time series prediction is used to demonstrate the efficiency of the proposed method.
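
The sketch below is a minimal illustration of the idea described in the abstract, not the authors' implementation: an echo state network is run on a Mackey-Glass series, reservoir units whose state trajectories are highly mutually correlated are pruned down to one representative per group, and the readout is then trained online on the remaining units. The reservoir size, spectral radius, 0.99 correlation threshold, normalized-LMS step size, and Mackey-Glass generator parameters are all illustrative assumptions, and normalized LMS stands in for the (unnamed) online training algorithm.

# Sketch only: ESN with correlation-based pruning of readout connections.
import numpy as np

rng = np.random.default_rng(0)

# Mackey-Glass series via a crude Euler discretisation (tau = 17, assumed parameters).
def mackey_glass(n, tau=17, beta=0.2, gamma=0.1, p=10):
    x = np.zeros(n + tau)
    x[:tau] = 1.2
    for t in range(tau, n + tau - 1):
        x[t + 1] = x[t] + beta * x[t - tau] / (1.0 + x[t - tau] ** p) - gamma * x[t]
    return x[tau:]

series = mackey_glass(3000)
u, y = series[:-1], series[1:]                   # one-step-ahead prediction task

# Fixed random input and reservoir weights (the echo state network part).
N = 200
W_in = rng.uniform(-0.5, 0.5, N)
W = rng.uniform(-0.5, 0.5, (N, N))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius below 1

states = np.zeros((len(u), N))
x = np.zeros(N)
for t, ut in enumerate(u):
    x = np.tanh(W_in * ut + W @ x)
    states[t] = x

# Group units by mutual correlation of their trajectories and keep one
# representative per group; the others are dropped from the readout.
washout = 200
C = np.corrcoef(states[washout:].T)
keep = []
for i in range(N):
    if all(abs(C[i, j]) < 0.99 for j in keep):   # threshold is an assumption
        keep.append(i)
Xk = states[:, keep]
print(f"readout uses {len(keep)} of {N} reservoir units")

# Online readout training; normalized LMS is used here as a stand-in, since
# the abstract does not name the exact online update rule.
mu, eps = 0.5, 1e-6
w = np.zeros(len(keep))
sq_err = []
for t in range(washout, len(y)):
    xt = Xk[t]
    e = y[t] - w @ xt
    w += mu * e * xt / (eps + xt @ xt)
    sq_err.append(e ** 2)
print("mean squared one-step error over the last 500 steps:", np.mean(sq_err[-500:]))

Dropping near-duplicate columns of the state matrix shrinks the eigenvalue spread of its autocorrelation matrix, which is the quantity that limits the convergence speed of gradient-based online updates such as LMS, so the pruning step in the sketch targets exactly the bottleneck the abstract describes.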