IF-MMCL: an individual focused network with multi-view and multi-modal contrastive learning for cross-subject emotion recognition

Bibliographic Details
Published in: Medical & Biological Engineering & Computing
Main Authors: Zhou, Qiaoli; Song, Jiawen; Zhao, Yi; Zhang, Shun; Du, Qiang; Ke, Li
Format: Journal Article
Language: English
Published: United States, 28.08.2025
ISSN: 0140-0118, 1741-0444
DOI: 10.1007/s11517-025-03430-x

Summary: Electroencephalography (EEG)-based emotion recognition has attracted significant interest in brain-computer interface (BCI) research. However, building an effective emotion recognition model requires extracting features from EEG data across multiple views. To address the problems of multi-feature interaction and domain adaptation, we propose a novel network, IF-MMCL, which leverages multi-modal data in a multi-view representation and integrates an individual focused network. Our approach builds an individual focused, multi-view network that uses individual focused contrastive learning to improve model generalization. The network employs different structures for multi-view feature extraction and applies multi-feature relationship computation to identify relationships between features from different views and modalities. The model is validated on four public emotion datasets, each covering different emotion classification tasks. In leave-one-subject-out experiments, IF-MMCL outperforms previous methods in model generalization with limited data.
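As a rough illustration of the evaluation protocol and the contrastive-learning family described in the summary, the sketch below runs a leave-one-subject-out (LOSO) loop with a generic NT-Xent contrastive loss on synthetic data. All names, dimensions, and the two simulated "views" are hypothetical; this does not reproduce the paper's IF-MMCL architecture or its individual focused loss formulation.

```python
# Hypothetical sketch: LOSO evaluation with a generic NT-Xent contrastive loss.
# The actual IF-MMCL network and individual focused contrastive objective are
# not specified here; this only illustrates the general protocol.
import numpy as np
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """Standard NT-Xent loss between two embedded views of the same batch."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                      # (2N, d)
    sim = z @ z.t() / temperature                       # cosine similarities
    n = z1.size(0)
    sim.masked_fill_(torch.eye(2 * n, dtype=torch.bool), float('-inf'))
    # Positive of sample i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Synthetic "EEG features": n_subjects x trials_per_subject trials of dim features.
n_subjects, trials_per_subject, dim = 5, 8, 32
rng = np.random.default_rng(0)
X = rng.standard_normal((n_subjects * trials_per_subject, dim)).astype(np.float32)
subjects = np.repeat(np.arange(n_subjects), trials_per_subject)

# LOSO protocol: each subject is held out once; the model trains on the rest.
for test_subj in range(n_subjects):
    x_train = torch.from_numpy(X[subjects != test_subj])
    # Two views of the training batch (e.g. outputs of different feature
    # extractors); simulated here with small perturbations for illustration.
    view1 = x_train + 0.01 * torch.randn_like(x_train)
    view2 = x_train + 0.01 * torch.randn_like(x_train)
    loss = nt_xent(view1, view2)
    print(f"held-out subject {test_subj}: contrastive loss {loss.item():.3f}")
```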