A Training Algorithm for Locally Recurrent Neural Networks Based on the Explicit Gradient of the Loss Function
| Published in | Algorithms, Vol. 18, No. 2, p. 104 |
|---|---|
| Main Authors | , |
| Format | Journal Article |
| Language | English |
| Published | Basel: MDPI AG, 01.02.2025 |
| ISSN | 1999-4893 |
| DOI | 10.3390/a18020104 | 
| Summary: | In this paper, a new algorithm for the training of Locally Recurrent Neural Networks (LRNNs) is presented, which aims to reduce computational complexity while guaranteeing the stability of the network during training. The main feature of the proposed algorithm is its ability to represent the gradient of the error in explicit form. The algorithm builds on the interpretation of the Fibonacci sequence as the output of a second-order IIR filter, which makes it possible to apply Binet’s formula and compute the generic term of the sequence directly. Thanks to this approach, the gradient of the loss function can be calculated explicitly during training and expressed in terms of the parameters that control the stability of the neural network. |
|---|---|
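As background for the summary above, the connection it relies on can be sketched with standard identities; these are textbook facts about the Fibonacci sequence, not formulas taken from the paper itself: the sequence is the impulse response of a second-order IIR filter, and Binet’s formula gives its generic term in closed form.

```latex
% Standard identities (not from the paper): the Fibonacci sequence as the
% impulse response of a second-order IIR filter, and Binet's closed form.
\begin{align}
  F_n &= F_{n-1} + F_{n-2}, \qquad F_0 = 0,\ F_1 = 1
      && \text{(second-order IIR recurrence)} \\
  H(z) &= \frac{z^{-1}}{1 - z^{-1} - z^{-2}}
      && \text{(transfer function; poles at } \varphi,\ \psi\text{)} \\
  F_n &= \frac{\varphi^{\,n} - \psi^{\,n}}{\sqrt{5}}, \qquad
         \varphi = \frac{1+\sqrt{5}}{2},\quad \psi = \frac{1-\sqrt{5}}{2}
      && \text{(Binet's formula)}
\end{align}
```

Because the closed form replaces the recursion, quantities that depend on the sequence can be written explicitly in terms of the filter parameters, which is the property the summary states the training algorithm exploits to express the gradient of the loss function.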