Images Structure Reconstruction from fMRI by Unsupervised Learning Based on VAE

Bibliographic Details
Published in: Artificial Neural Networks and Machine Learning - ICANN 2022, Vol. 13531, pp. 137-148
Main Authors: Zhao, Zhiwei; Jing, Haodong; Wang, Jianji; Wu, Weihua; Ma, Yongqiang
Format: Book Chapter
Language: English
Published: Springer Nature Switzerland, 2022
Series: Lecture Notes in Computer Science
ISBN: 3031159330; 9783031159336
ISSN: 0302-9743; 1611-3349
DOI: 10.1007/978-3-031-15934-3_12

Summary: How to reconstruct stimulus images from fMRI signals is an important problem in neuroscience. Limited by the complexity of brain signals and the accuracy with which they can be acquired, it remains very difficult to completely reconstruct realistic images from fMRI signals with artificial-intelligence techniques. In experiments on brain signals evoked by real images, we found that when subjects are stimulated by continuous realistic images, they are more sensitive to high-frequency information in the stimulus images, such as image contours and regions of abrupt color change; human vision and the cerebral cortex seem to respond more strongly to these. Based on this finding, we propose a method that pays more attention to image structure when reconstructing images from fMRI signals. To fully decode the voxels in fMRI signals and to address the problem of insufficient fMRI data, we use a back-to-back model based on the Variational Auto-Encoder, which can decode more voxels in fMRI recordings into meaningful image features and can introduce more unlabeled data to improve the overall performance of the model. Experiments demonstrate that our method performs better than other mainstream methods.
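The abstract names two technical ideas: a Variational Auto-Encoder at the core of a back-to-back model, and extra weight on image structure (contours, abrupt color changes). The record gives no implementation details, so the PyTorch sketch below is purely illustrative: a small convolutional VAE whose reconstruction loss adds a Sobel edge term so that high-frequency structure is penalized more heavily. The names ConvVAE, sobel_edges, structure_vae_loss, and the edge_weight parameter are assumptions made for this sketch, not the authors' code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ConvVAE(nn.Module):
    """Minimal convolutional VAE for 64x64 RGB stimulus images (illustrative only)."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(3, 32, 4, 2, 1), nn.ReLU(),    # 64x64 -> 32x32
            nn.Conv2d(32, 64, 4, 2, 1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(64, 128, 4, 2, 1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_logvar = nn.Linear(128 * 8 * 8, latent_dim)
        self.fc_dec = nn.Linear(latent_dim, 128 * 8 * 8)
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.ReLU(),   # 8x8 -> 16x16
            nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.ReLU(),    # 16x16 -> 32x32
            nn.ConvTranspose2d(32, 3, 4, 2, 1), nn.Sigmoid(),  # 32x32 -> 64x64
        )

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + eps * sigma
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        x_hat = self.dec(self.fc_dec(z).view(-1, 128, 8, 8))
        return x_hat, mu, logvar

def sobel_edges(x):
    """Per-channel Sobel gradient magnitude: highlights contours and abrupt color changes."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]], device=x.device)
    ky = kx.t()
    c = x.shape[1]
    w = torch.stack([kx, ky]).unsqueeze(1).repeat(c, 1, 1, 1)  # (2C, 1, 3, 3)
    g = F.conv2d(x, w, padding=1, groups=c)                    # (B, 2C, H, W)
    gx, gy = g[:, 0::2], g[:, 1::2]
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def structure_vae_loss(x_hat, x, mu, logvar, edge_weight=2.0, beta=1.0):
    """Pixel loss + edge-map loss + KL term; edge_weight > 1 stresses image structure."""
    pix = F.mse_loss(x_hat, x)
    edge = F.mse_loss(sobel_edges(x_hat), sobel_edges(x))
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return pix + edge_weight * edge + beta * kl

In this sketch, edge_weight biases training toward matching contours rather than flat regions. A faithful reimplementation of the chapter's method would, per the abstract, also pair such an image VAE back-to-back with an fMRI decoding model sharing the latent representation, and would train the image side on additional unlabeled images to offset the scarcity of labeled fMRI data.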