Training Many-to-Many Recurrent Neural Networks with Target Propagation
| Published in | Artificial Neural Networks and Machine Learning - ICANN 2021, Vol. 12894, pp. 433 - 443 |
|---|---|
| Main Authors | |
| Format | Book Chapter |
| Language | English |
| Published | Switzerland: Springer International Publishing, 2021 |
| Series | Lecture Notes in Computer Science |
| Subjects | |
| Online Access | Get full text |
| ISBN | 9783030863791; 3030863794 |
| ISSN | 0302-9743; 1611-3349 |
| DOI | 10.1007/978-3-030-86380-7_35 |
| Summary: | Deep neural networks trained with back-propagation have been the driving force behind progress in fields such as computer vision and natural language processing. However, back-propagation has often been criticized for its biological implausibility. More biologically plausible alternatives, such as target propagation and feedback alignment, have been proposed, but most of these learning algorithms were originally designed and tested for feedforward networks, and their ability to train recurrent networks and arbitrary computation graphs has not been fully studied or understood. In this paper, we propose a learning procedure based on target propagation for training multi-output recurrent networks. This opens the door to extending such biologically plausible models into general learning algorithms for arbitrary graphs. |
|---|---|
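To illustrate the kind of algorithm the abstract refers to, the sketch below shows plain difference target propagation on a minimal two-layer feedforward network. This is not the paper's recurrent, multi-output procedure; it is a generic illustration under assumed layer sizes and learning rates, with all names (`W1`, `W2`, `V`, `f`) invented here. Each layer gets a local target, and the target for the hidden layer is computed through a learned approximate inverse rather than by back-propagating gradients.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, not taken from the paper.
n_in, n_hid, n_out = 4, 8, 3
W1 = rng.normal(0, 0.5, (n_hid, n_in))   # input -> hidden
W2 = rng.normal(0, 0.5, (n_out, n_hid))  # hidden -> output
V = rng.normal(0, 0.5, (n_hid, n_out))   # learned approximate inverse of the top layer

def f(x, W):
    """One layer: tanh nonlinearity after a linear map."""
    return np.tanh(W @ x)

lr = 0.05
x = rng.normal(size=n_in)                 # a single toy input
y_target = np.array([0.5, -0.5, 0.0])     # a single toy output target

for step in range(200):
    # Forward pass.
    h = f(x, W1)
    y = f(h, W2)

    # Train the inverse g(y) = tanh(V y) to reconstruct h (layer-local delta rule).
    h_rec = f(y, V)
    V += lr * np.outer((h - h_rec) * (1 - h_rec**2), y)

    # Difference target propagation: local target for the hidden layer,
    # h_target = h + g(y_target) - g(y), using the learned inverse g.
    h_target = h + f(y_target, V) - f(y, V)

    # Layer-local updates: each layer regresses its output onto its own target,
    # so no gradient is propagated through the network.
    W2 += lr * np.outer((y_target - y) * (1 - y**2), h)
    W1 += lr * np.outer((h_target - h) * (1 - h**2), x)

print(np.round(f(f(x, W1), W2), 2))  # output should approach y_target
```

The key design point, and the reason such schemes are considered more biologically plausible, is that every weight update uses only quantities local to that layer (its input, its output, and its target); no global error signal is carried backward through the forward weights.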