Images Structure Reconstruction from fMRI by Unsupervised Learning Based on VAE
| Published in | Artificial Neural Networks and Machine Learning - ICANN 2022, Vol. 13531, pp. 137-148 |
|---|---|
| Main Authors |  |
| Format | Book Chapter |
| Language | English |
| Published | Springer Nature Switzerland, 2022 |
| Series | Lecture Notes in Computer Science |
| ISBN | 3031159330; 9783031159336 |
| ISSN | 0302-9743; 1611-3349 |
| DOI | 10.1007/978-3-031-15934-3_12 |
Summary: | How to reconstruct stimulus images from fMRI signals is an important problem in neuroscience. Limited by the complexity and the acquisition accuracy of brain signals, it remains very difficult to completely reconstruct realistic images from fMRI signals with artificial-intelligence techniques. In experiments on brain signals evoked by real images, we found that when subjects are stimulated by continuous realistic images, they are more sensitive to the high-frequency information in the stimulus images, such as image contours and areas of abrupt color change; human vision and the cerebral cortex seem to respond more strongly to these. Based on this finding, we propose a method that pays more attention to image structure when reconstructing images from fMRI signals. To fully decode the voxels in fMRI signals and to address the limited amount of fMRI data, we use a back-to-back model based on a Variational Auto-Encoder, which decodes more voxels in fMRI recordings into meaningful image features and introduces more unlabeled data to improve the overall performance of the model. Experiments demonstrate that our method performs better than other mainstream methods. |
---|---|
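The summary above centers on a Variational Auto-Encoder. As a rough illustration of the core VAE mechanics only (not the authors' back-to-back model, whose architecture and hyperparameters are not given in this record), here is a minimal NumPy sketch of the reparameterization trick and the KL regularization term; the shapes and latent dimension are arbitrary assumptions for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, I), keeping the
    sampling step differentiable with respect to mu and log_var."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    """KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions.
    This is the regularization term of the VAE loss."""
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var), axis=-1)

# Toy "encoder output" for a batch of 4 feature vectors, latent dim 8
# (purely illustrative stand-ins for decoded fMRI features).
mu = rng.standard_normal((4, 8))
log_var = rng.standard_normal((4, 8))

z = reparameterize(mu, log_var)   # latent codes fed to the decoder
kl = kl_divergence(mu, log_var)   # per-sample KL penalty, always >= 0
```

In a full VAE this KL term is added to a reconstruction loss over the decoder output; because the KL term does not need labels, unlabeled data can contribute to training, which is the property the summary's unsupervised setting relies on.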