Training Many-to-Many Recurrent Neural Networks with Target Propagation

Bibliographic Details
Published in: Artificial Neural Networks and Machine Learning - ICANN 2021, Vol. 12894, pp. 433-443
Main Authors: Dai, Peilun; Chin, Sang
Format: Book Chapter
Language: English
Published: Switzerland: Springer International Publishing AG, 2021
Series: Lecture Notes in Computer Science
ISBN: 9783030863791; 3030863794
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-030-86380-7_35


More Information
Summary: Deep neural networks trained with back-propagation have been the driving force behind progress in fields such as computer vision and natural language processing. However, back-propagation has often been criticized for its biological implausibility. More biologically plausible alternatives, such as target propagation and feedback alignment, have been proposed, but most of these learning algorithms were originally designed and tested for feedforward networks, and their ability to train recurrent networks and arbitrary computation graphs is neither fully studied nor well understood. In this paper, we propose a learning procedure based on target propagation for training multi-output recurrent networks. It opens the door to extending such biologically plausible models into general learning algorithms for arbitrary graphs.