A voxel-wise encoding model for early visual areas decodes mental images of remembered scenes
| Published in | NeuroImage (Orlando, Fla.) Vol. 105; pp. 215 - 228 |
|---|---|
| Main Authors | |
| Format | Journal Article |
| Language | English |
| Published | United States: Elsevier Inc; Elsevier Limited, 15.01.2015 |
| Subjects | |
| Online Access | Get full text |
| ISSN | 1053-8119, 1095-9572 |
| DOI | 10.1016/j.neuroimage.2014.10.018 |
| Summary: | Recent multi-voxel pattern classification (MVPC) studies have shown that in early visual cortex patterns of brain activity generated during mental imagery are similar to patterns of activity generated during perception. This finding implies that low-level visual features (e.g., space, spatial frequency, and orientation) are encoded during mental imagery. However, the specific hypothesis that low-level visual features are encoded during mental imagery is difficult to directly test using MVPC. The difficulty is especially acute when considering the representation of complex, multi-object scenes that can evoke multiple sources of variation that are distinct from low-level visual features. Therefore, we used a voxel-wise modeling and decoding approach to directly test the hypothesis that low-level visual features are encoded in activity generated during mental imagery of complex scenes. Using fMRI measurements of cortical activity evoked by viewing photographs, we constructed voxel-wise encoding models of tuning to low-level visual features. We also measured activity as subjects imagined previously memorized works of art. We then used the encoding models to determine if putative low-level visual features encoded in this activity could pick out the imagined artwork from among thousands of other randomly selected images. We show that mental images can be accurately identified in this way; moreover, mental image identification accuracy depends upon the degree of tuning to low-level visual features in the voxels selected for decoding. These results directly confirm the hypothesis that low-level visual features are encoded during mental imagery of complex scenes. Our work also points to novel forms of brain–machine interaction: we provide a proof-of-concept demonstration of an internet image search guided by mental imagery. |
|---|---|
| Highlights: | • A model of representation in early visual cortex decodes mental images of complex scenes. • Mental imagery depends directly upon the encoding of low-level visual features. • Low-level visual features of mental images are encoded by activity in early visual cortex. • Depictive theories of mental imagery are strongly supported by our results. • Brain activity evoked by mental imagery can be used to guide internet image search. |
| ISSN: | 1053-8119, 1095-9572 |
| DOI: | 10.1016/j.neuroimage.2014.10.018 |
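For readers who want a concrete sense of the decoding procedure described in the summary, the sketch below illustrates the general voxel-wise encoding-and-identification idea in Python. It is not the authors' code: the feature extraction is left abstract, and the ridge penalty, the array names (`F_train`, `Y_train`, `F_cands`, `y_imag`, `voxel_scores`), and the fixed number of selected voxels are illustrative assumptions standing in for the paper's specific choices (e.g., its low-level feature space and model-fitting details).

```python
import numpy as np

# All shapes and variable names below are illustrative assumptions, not taken from the paper.
# F_train : (n_train_images, n_features)  low-level features of the viewed photographs
# Y_train : (n_train_images, n_voxels)    fMRI responses in early visual areas during viewing
# F_cands : (n_candidates, n_features)    features of thousands of candidate images
# y_imag  : (n_voxels,)                   activity measured while the subject imagines one artwork

def fit_voxelwise_encoding(F_train, Y_train, lam=1.0):
    """Ridge-regress every voxel's response onto the low-level feature space.
    Returns a (n_features, n_voxels) weight matrix: one encoding model per voxel."""
    n_feat = F_train.shape[1]
    A = F_train.T @ F_train + lam * np.eye(n_feat)
    return np.linalg.solve(A, F_train.T @ Y_train)

def _zscore(x):
    return (x - x.mean(axis=-1, keepdims=True)) / x.std(axis=-1, keepdims=True)

def identify_mental_image(W, F_cands, y_imag, voxel_scores, n_keep=500):
    """Identification decoding: predict each candidate image's activity pattern from the
    encoding models, then pick the candidate whose prediction best correlates with the
    measured imagery activity, using only the voxels best tuned to low-level features."""
    keep = np.argsort(voxel_scores)[-n_keep:]        # e.g., held-out prediction accuracy per voxel
    preds = F_cands @ W[:, keep]                     # (n_candidates, n_keep) predicted patterns
    r = _zscore(preds) @ _zscore(y_imag[keep]) / keep.size   # Pearson r per candidate
    return int(np.argmax(r)), r
```

In this framing, identification accuracy among thousands of candidates is the test of the hypothesis: if the selected voxels' tuning to low-level features carried no information about the imagined artwork, the correlation ranking would be at chance, whereas the paper reports accurate identification that improves with the degree of low-level feature tuning in the voxels used for decoding.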