Human Category Learning: Implications for Backpropagation Models

Bibliographic Details
Published in: Connection Science, Vol. 5, No. 1, pp. 3-36
Main Author: Kruschke, John K.
Format: Journal Article
Language: English
Published: London: Taylor & Francis Group, 01.01.1993
ISSN: 0954-0091, 1360-0494
DOI: 10.1080/09540099308915683

Summary: Backpropagation (Rumelhart et al., 1986) was proposed as a general learning algorithm for multi-layer perceptrons. This article demonstrates that a standard version of backprop fails to attend selectively to input dimensions in the way humans do, suffers catastrophic forgetting of previously learned associations when novel exemplars are trained, and can be overly sensitive to linear category boundaries. Another connectionist model, ALCOVE (Kruschke, 1990, 1992), does not suffer these failures. Previous researchers identified these problems; the present article reports quantitative fits of the models to new human learning data. ALCOVE can be functionally approximated by a network that uses linear-sigmoid hidden nodes, like standard backprop. It is argued that models of human category learning should incorporate quasi-local representations and dimensional attention learning, together with error-driven learning, in order to address all three phenomena simultaneously.
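
The summary's contrast between the two models can be made concrete. Below is a minimal sketch of an ALCOVE-style learner in Python, assuming the standard published formulation (Kruschke, 1992): exemplar-based hidden nodes computing similarity via an attention-weighted city-block metric with an exponential gradient, "humble teacher" target values, and error-driven gradient updates for both the association weights and the dimensional attention strengths. The class name, hyperparameter values, and NumPy details are illustrative assumptions, not code from the article.

import numpy as np

class ALCOVE:
    """Sketch of ALCOVE: quasi-local exemplar nodes, learned
    dimensional attention, error-driven association weights."""

    def __init__(self, exemplars, n_cats, c=1.0, lr_w=0.1, lr_a=0.1):
        self.h = np.asarray(exemplars, dtype=float)   # one hidden node per exemplar
        self.alpha = np.ones(self.h.shape[1])         # attention strength per input dimension
        self.w = np.zeros((n_cats, len(self.h)))      # hidden-to-category association weights
        self.c, self.lr_w, self.lr_a = c, lr_w, lr_a  # specificity and learning rates

    def forward(self, x):
        # Attention-weighted city-block distance, exponential similarity gradient.
        dist = np.abs(self.h - x) @ self.alpha
        a_hid = np.exp(-self.c * dist)
        return a_hid, self.w @ a_hid                  # linear category outputs

    def train_step(self, x, cat):
        a_hid, out = self.forward(x)
        # "Humble teacher": never penalize outputs that overshoot the target.
        t = np.where(np.arange(len(out)) == cat,
                     np.maximum(1.0, out), np.minimum(-1.0, out))
        err = t - out
        # Gradient descent on 0.5 * sum(err**2):
        self.w += self.lr_w * np.outer(err, a_hid)    # association weight update
        back = (err @ self.w) * a_hid                 # error signal at each hidden node
        self.alpha -= self.lr_a * self.c * (back @ np.abs(self.h - x))
        self.alpha = np.maximum(self.alpha, 0.0)      # attention stays nonnegative

# Hypothetical usage: four two-dimensional exemplars, two categories.
# model = ALCOVE(exemplars=[[0, 0], [0, 1], [1, 0], [1, 1]], n_cats=2)
# model.train_step(np.array([0.0, 1.0]), cat=1)

Because each exemplar node responds only to nearby stimuli, training on a novel item leaves most association weights untouched (mitigating catastrophic forgetting), and the learned alpha values implement the selective dimensional attention that standard backprop lacks.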