Enactivism and predictive processing: a non-representational view

Bibliographic Details
Published in: Philosophical Explorations, Vol. 21, no. 2, pp. 264-281
Main Authors: Kirchhoff, Michael D.; Robertson, Ian
Format: Journal Article
Language: English
Published: Abingdon: Routledge (Taylor & Francis Ltd), 04.05.2018
ISSN: 1386-9795; 1741-5918
DOI: 10.1080/13869795.2018.1477983

Summary: This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback-Leibler (KL)-divergence provides an accessible measure of misrepresentation, and therefore, a measure of representational content in hierarchical Bayesian inference. The paper then argues that while the KL-divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL-divergence is a measure of relative entropy, which can be shown to be the same as covariance (through a set of additional steps). It is well known that facts about covariance do not entail facts about representational content. So there is no reason to think that the KL-divergence is a measure of (mis-)representational content. This paper thus provides an enactive, non-representational account of Bayesian belief optimisation in hierarchical PP.
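As general background on the quantity the summary discusses (a standard textbook definition, not material drawn from the article itself), the Kullback-Leibler divergence between two discrete probability distributions P and Q is

D_{\mathrm{KL}}(P \,\|\, Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

that is, the expectation under P of the log-ratio of P to Q, which is why it is also known as the relative entropy of P with respect to Q. It is non-negative and equals zero exactly when P = Q.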