Information‐theoretical Complexity Metrics

Bibliographic Details
Published in: Language and Linguistics Compass, Vol. 10, No. 9, pp. 397–412
Main Author: Hale, John
Format: Journal Article
Language: English
Published: 01.09.2016
ISSN: 1749-818X
DOI: 10.1111/lnc3.12196

Summary: Information‐theoretical complexity metrics are auxiliary hypotheses that link theories of parsing and grammar to potentially observable measurements such as reading times and neural signals. This review article considers two such metrics, Surprisal and Entropy Reduction, which are respectively built upon the two most natural notions of 'information value' for an observed event (Blachman 1968). This review sketches their conceptual background and touches on their relationship to other theories in cognitive science. It characterizes them as 'lenses' through which theorists 'see' the information‐processing consequences of linguistic grammars. While these metrics are not themselves parsing algorithms, the review identifies candidate mechanisms that have been proposed for both of them.
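
Concretely, Surprisal values an observed word at the negative log of its conditional probability given the preceding words, while Entropy Reduction values the drop (floored at zero) in uncertainty about how the sentence will continue. The Python sketch below illustrates both quantities on a toy distribution over sentence continuations; the distributions, numbers, and helper names are illustrative assumptions, not taken from the article.

import math

def surprisal(p_word):
    # Surprisal of an observed event: -log2 of its probability, in bits.
    return -math.log2(p_word)

def entropy(dist):
    # Shannon entropy of a probability distribution over outcomes, in bits.
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

def entropy_reduction(before, after):
    # Entropy Reduction: the decrease in uncertainty about the rest of
    # the sentence after a word is processed, floored at zero.
    return max(0.0, entropy(before) - entropy(after))

# Toy distribution over whole-sentence continuations given the prefix
# "the"; hearing "dog" rules out the "cat" continuation and the
# remaining probability mass renormalizes.
before = {"the dog barked": 0.5, "the dog slept": 0.3, "the cat slept": 0.2}
after = {"the dog barked": 0.625, "the dog slept": 0.375}

p_dog_given_the = 0.5 + 0.3  # mass of continuations consistent with "dog"
print(f"surprisal of 'dog': {surprisal(p_dog_given_the):.3f} bits")       # ~0.322
print(f"entropy reduction:  {entropy_reduction(before, after):.3f} bits")  # ~0.531

Both quantities are measured in bits, but they diverge: Surprisal depends only on the observed word's conditional probability, whereas Entropy Reduction compares the whole distribution over continuations before and after the word, so the two metrics can rank the same word differently.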