Multi-View Bayesian Generative Model for Multi-Subject FMRI Data on Brain Decoding of Viewed Image Categories

Bibliographic Details
Published in: Proceedings of the ... IEEE International Conference on Acoustics, Speech and Signal Processing (1998), pp. 1215 - 1219
Main Authors: Akamatsu, Yusuke; Harakawa, Ryosuke; Ogawa, Takahiro; Haseyama, Miki
Format: Conference Proceeding
Language: English
Published: IEEE, 01.05.2020
ISSN: 2379-190X
DOI: 10.1109/ICASSP40776.2020.9053022

More Information
Summary: Brain decoding studies have demonstrated that viewed image categories can be estimated from human functional magnetic resonance imaging (fMRI) activity. However, estimation performance remains limited by the characteristics of fMRI data and by the use of only a single modality extracted from the viewed images. In this paper, we propose a multi-view Bayesian generative model for multi-subject fMRI data to estimate viewed image categories from fMRI activity. The proposed method derives effective representations of fMRI activity by utilizing fMRI data from multiple subjects. In addition, we associate fMRI activity with multiple modalities, i.e., visual features and semantic features extracted from the viewed images. Experimental results show that the proposed method outperforms existing state-of-the-art brain decoding methods.
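The abstract does not spell out the model's equations, but the general idea of a multi-view generative model can be sketched as follows: fMRI responses from several subjects and the visual/semantic features of the viewed image are all treated as views generated from a shared latent variable, and decoding infers that latent from a new fMRI pattern alone before predicting the image-derived features. The sketch below is a minimal illustration of that idea only; the dimensions, variable names, and the PCA-plus-ridge estimation are assumptions for illustration, not the authors' actual formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not taken from the paper)
n, k = 200, 10
dims = {"fmri_subj1": 300, "fmri_subj2": 280, "visual": 100, "semantic": 50}

# Synthetic training data: a shared latent z generates every view linearly
Z_true = rng.normal(size=(n, k))
W_true = {v: rng.normal(size=(k, d)) for v, d in dims.items()}
X = {v: Z_true @ W_true[v] + 0.1 * rng.normal(size=(n, dims[v])) for v in dims}

# Crude stand-in for Bayesian inference of the shared latent space:
# PCA (via SVD) on the concatenated, z-scored views recovers a k-dim shared subspace.
X_cat = np.hstack([(X[v] - X[v].mean(0)) / X[v].std(0) for v in dims])
U, S, _ = np.linalg.svd(X_cat, full_matrices=False)
Z_hat = U[:, :k] * S[:k]          # estimated latent for each training sample

def fit_loadings(Z, Xv, alpha=1e-2):
    """Ridge (MAP with a Gaussian prior) regression from the latent space to one view."""
    return np.linalg.solve(Z.T @ Z + alpha * np.eye(Z.shape[1]), Z.T @ Xv)

W_hat = {v: fit_loadings(Z_hat, X[v]) for v in dims}

def infer_latent(x, W, alpha=1e-2):
    """MAP-style estimate of z from a single view x, given that view's loadings W (k x d)."""
    return np.linalg.solve(W @ W.T + alpha * np.eye(W.shape[0]), W @ x)

# Decoding: infer z from one subject's fMRI pattern alone, then predict
# semantic features, which could be matched against candidate category features.
x_test = X["fmri_subj1"][0]
z_test = infer_latent(x_test, W_hat["fmri_subj1"])
semantic_pred = z_test @ W_hat["semantic"]
print(semantic_pred.shape)        # (50,) predicted semantic feature vector
```

In the actual method, the shared latent space and per-view loadings would be inferred jointly in a Bayesian manner rather than via the PCA-plus-ridge shortcut used here, and the predicted visual/semantic features would be matched against candidate category features to identify the viewed image category.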