Syntactic Priming by L2 LSTM Language Models

Bibliographic Details
Published in: 언어연구, 37(4), pp. 475-489
Main Authors: 최선주, 박명관
Format: Journal Article
Language: English
Published: 한국현대언어학회, 01.02.2022
ISSN: 1225-4770, 2671-6151
DOI: 10.18627/jslg.37.4.202202.475

Summary: Neural(-network) language models (LMs) have recently been successful at tasks that require sensitivity to syntactic structure. We provide further evidence for this sensitivity by showing that adding an adaptation-as-priming paradigm to L2 LSTM LMs, compared with a non-adaptive counterpart, improves their ability to track abstract structure. By applying a gradient similarity metric between structures, this mechanism allows us to reconstruct the organization of the L2 LMs’ syntactic representational space. In doing so, we find that sentences with a particular type of relative clause behave similarly, within the L2 LMs’ representational space, to other sentences with the same type of relative clause, in keeping with recent studies of L1 LM adaptation. We also demonstrate that the similarity between sentences is not affected by the specific words they contain. Our results show that L2 LSTM LMs can track abstract structural properties of sentences, just as L1 LMs do. KCI Citation Count: 0
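The record does not spell out the adaptation-as-priming paradigm or the gradient similarity metric it mentions. Below is a minimal, hedged sketch of how such a measurement is commonly set up (briefly fine-tuning an LM on a prime sentence, then comparing target surprisal before and after adaptation). The toy vocabulary, the randomly initialised TinyLSTMLM, the example sentences, and all function names are illustrative assumptions, not the authors' model, data, or code.

```python
# Illustrative sketch of an adaptation-as-priming measurement (assumptions,
# not the paper's implementation): adapt an LSTM LM to a "prime" sentence and
# measure how much the surprisal of a "target" sentence drops as a result.
import torch
import torch.nn as nn

# Toy word-level vocabulary; a real study would use a trained L2 LSTM LM.
VOCAB = ["<unk>", "the", "boy", "girl", "that", "saw", "chased", "smiled", "."]
W2I = {w: i for i, w in enumerate(VOCAB)}

def encode(sentence):
    """Map a whitespace-tokenised sentence to a (1, T) tensor of word ids."""
    return torch.tensor([[W2I.get(w, 0) for w in sentence.split()]])

class TinyLSTMLM(nn.Module):
    def __init__(self, vocab_size=len(VOCAB), emb=32, hidden=64):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb)
        self.lstm = nn.LSTM(emb, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, ids):
        h, _ = self.lstm(self.emb(ids))
        return self.out(h)

def surprisal(model, ids):
    """Mean per-word surprisal (negative log-probability, in nats)."""
    with torch.no_grad():
        logits = model(ids[:, :-1])
        logp = torch.log_softmax(logits, dim=-1)
        tgt = ids[:, 1:]
        return -logp.gather(-1, tgt.unsqueeze(-1)).mean().item()

def adapt(model, prime_ids, lr=0.1, steps=1):
    """Briefly fine-tune ('prime') the LM on a single prime sentence."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        logits = model(prime_ids[:, :-1])
        loss = loss_fn(logits.reshape(-1, logits.size(-1)),
                       prime_ids[:, 1:].reshape(-1))
        loss.backward()
        opt.step()

if __name__ == "__main__":
    torch.manual_seed(0)
    model = TinyLSTMLM()
    prime = encode("the boy that saw the girl smiled .")    # subject relative clause
    target = encode("the girl that chased the boy smiled .")
    before = surprisal(model, target)
    adapt(model, prime)
    after = surprisal(model, target)
    # A drop in surprisal (positive adaptation effect) is read as structural
    # similarity between prime and target; comparing effects across structure
    # types yields the kind of gradient similarity metric the abstract mentions.
    print(f"adaptation effect: {before - after:.4f} nats")
```

Under this setup, prime-target pairs that share a relative-clause type would be expected to show larger adaptation effects than mismatched pairs, which is the pattern the abstract reports for L2 LSTM LMs.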