A Training Algorithm for Locally Recurrent Neural Networks Based on the Explicit Gradient of the Loss Function

Bibliographic Details
Published in: Algorithms, Vol. 18, no. 2, p. 104
Main Authors: Carcangiu, Sara; Montisci, Augusto
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 01.02.2025
ISSN: 1999-4893
DOI: 10.3390/a18020104

Summary: In this paper, a new algorithm for training Locally Recurrent Neural Networks (LRNNs) is presented, which aims to reduce computational complexity while guaranteeing the stability of the network during training. The main feature of the proposed algorithm is its ability to represent the gradient of the error in an explicit form. The algorithm builds on the interpretation of the Fibonacci sequence as the output of a second-order IIR filter, which makes it possible to use Binet's formula to calculate the generic term of the sequence directly. Thanks to this approach, the gradient of the loss function can be calculated explicitly during training and expressed in terms of the parameters that control the stability of the neural network.
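
The summary relies on a classical background fact: the Fibonacci sequence is the impulse response of the second-order IIR filter y[k] = y[k-1] + y[k-2] + x[k], and Binet's formula F(k) = (phi^k - psi^k)/sqrt(5), with phi = (1+sqrt(5))/2 and psi = (1-sqrt(5))/2, gives each term in closed form. The sketch below only verifies this correspondence numerically; it is not the paper's training algorithm, and the function names and NumPy implementation are illustrative assumptions rather than code from the article.

```python
import numpy as np

def fib_iir_impulse_response(n_terms):
    """Impulse response of the second-order IIR filter
    y[k] = y[k-1] + y[k-2] + x[k]; it reproduces the Fibonacci
    sequence 1, 1, 2, 3, 5, 8, ... (illustrative sketch only)."""
    x = np.zeros(n_terms)
    x[0] = 1.0  # unit impulse input
    y = np.zeros(n_terms)
    for k in range(n_terms):
        y[k] = x[k]
        if k >= 1:
            y[k] += y[k - 1]
        if k >= 2:
            y[k] += y[k - 2]
    return y

def fib_binet(k):
    """Binet's closed form F(k) = (phi**k - psi**k) / sqrt(5), giving
    the k-th Fibonacci number without running the recursion."""
    sqrt5 = np.sqrt(5.0)
    phi = (1.0 + sqrt5) / 2.0   # golden ratio
    psi = (1.0 - sqrt5) / 2.0   # conjugate root
    return (phi ** k - psi ** k) / sqrt5

if __name__ == "__main__":
    n = 10
    y = fib_iir_impulse_response(n)                        # F(1) ... F(n)
    closed = np.array([fib_binet(k + 1) for k in range(n)])
    print(y)                       # [ 1.  1.  2.  3.  5.  8. 13. 21. 34. 55.]
    print(np.allclose(y, closed))  # True: filter output matches Binet's formula
```

In the paper, it is this closed-form property that allows the gradient of the loss to be written explicitly in terms of the recurrent filter parameters, which in turn control the network's stability; the sketch above only illustrates the Fibonacci/IIR correspondence the summary refers to.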