Mechanisms for handling nested dependencies in neural-network language models and humans

Bibliographic Details
Published in: Cognition, Vol. 213, p. 104699
Main Authors: Lakretz, Yair; Hupkes, Dieuwke; Vergallito, Alessandra; Marelli, Marco; Baroni, Marco; Dehaene, Stanislas
Format: Journal Article
Language: English
Published: Netherlands: Elsevier B.V., 01.08.2021
ISSN: 0010-0277, 1873-7838
DOI: 10.1016/j.cognition.2021.104699

Summary: Recursive processing in sentence comprehension is considered a hallmark of human linguistic abilities. However, its underlying neural mechanisms remain largely unknown. We studied whether a modern artificial neural network trained with “deep learning” methods mimics a central aspect of human sentence processing, namely the storing of grammatical number and gender information in working memory and its use in long-distance agreement (e.g., capturing the correct number agreement between subject and verb when they are separated by other phrases). Although the network, a recurrent architecture with Long Short-Term Memory units, was solely trained to predict the next word in a large corpus, analysis showed the emergence of a very sparse set of specialized units that successfully handled local and long-distance syntactic agreement for grammatical number. However, the simulations also showed that this mechanism does not support full recursion and fails with some long-range embedded dependencies. We tested the model's predictions in a behavioral experiment where humans detected violations in number agreement in sentences with systematic variations in the singular/plural status of multiple nouns, with or without embedding. Human and model error patterns were remarkably similar, showing that the model echoes various effects observed in human data. However, a key difference was that, with embedded long-range dependencies, humans remained above chance level, while the model's systematic errors brought it below chance. Overall, our study shows that exploring the ways in which modern artificial neural networks process sentences leads to precise and testable hypotheses about human linguistic performance.
• A specialized mechanism for grammatical agreement emerges in Neural Language Models (NLMs).
• The mechanism consistently emerges for different grammatical features and various languages.
• Agreement performance of the NLM was found to be worse on the innermost dependency of nested grammatical structures.
• Model prediction was confirmed in humans. Humans too make more agreement errors on inner dependencies.
• Exploring how modern NLMs process sentences leads to precise and testable hypotheses about human linguistic performance.
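The long-distance agreement paradigm described in the summary can be sketched as a small test harness: given a model's scoring function, a trial checks whether the grammatical verb form (matching the subject's number) scores higher than the agreement-violating form, even when an "attractor" noun with a different number intervenes. This is a minimal sketch of the harness logic only; the `score` function below is a hypothetical toy stand-in (the actual study used an LSTM language model's next-word probabilities).

```python
# Minimal sketch of an agreement-probe harness (assumption: a real
# study would replace `score` with a trained LM's log-probability).

SINGULAR = {"boy", "key", "is", "smiles"}
PLURAL = {"boys", "keys", "are", "smile"}


def score(tokens):
    """Toy stand-in scorer: rewards words whose grammatical number
    matches the subject's (the token after the first determiner).
    Purely illustrative; not a language model."""
    subject_is_plural = tokens[1] in PLURAL
    total = 0.0
    for tok in tokens:
        if tok in PLURAL and subject_is_plural:
            total += 1.0
        elif tok in SINGULAR and not subject_is_plural:
            total += 1.0
    return total


def prefers_grammatical(prefix, good_verb, bad_verb):
    """One agreement trial: True if the grammatical continuation
    outscores the number-agreement violation."""
    return score(prefix + [good_verb]) > score(prefix + [bad_verb])


# Long-distance dependency with an attractor: "keys" (plural)
# intervenes between the singular subject "boy" and its verb.
print(prefers_grammatical(["the", "boy", "near", "the", "keys"], "is", "are"))
```

In the behavioral experiment and the model analyses, accuracy on such trials is aggregated across systematic singular/plural combinations of the nouns, with and without embedding, which is what exposes the below-chance performance of the model on inner dependencies.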