MVC-former adaptation: A multi-view convolution transformer-based domain adaptation framework for cross-subject motor imagery classification


Bibliographic Details
Published in: Neurocomputing (Amsterdam), Vol. 649, p. 130875
Main Authors: Liang, Yining; Meng, Ming; Gao, Yunyuan; Xi, Xugang
Format: Journal Article
Language: English
Published: Elsevier B.V., 07.10.2025
ISSN: 0925-2312
DOI: 10.1016/j.neucom.2025.130875

Summary: Owing to individual differences, it is challenging to decode a target subject's mental intentions by applying existing models in cross-subject Brain-Computer Interface tasks. Transfer learning methods have shown promising performance in this field, but they still represent the temporal correlation characteristics of EEG signals poorly. This paper proposes a multi-view convolution-transformer-based domain adaptation framework for cross-subject motor imagery classification. First, to exploit the frequency diversity of EEG signals, we decompose the signals into several overlapping frequency views and extract frequency-related spatial and temporal features with parallel spatiotemporal convolution blocks. Subsequently, transformer blocks extract long-range dependencies and narrow the marginal distribution gap between the source and target domains. Finally, a classifier and a domain discriminator perform domain adaptation, and a mixed loss aligns the conditional distributions. We validated the model on the BCI Competition IV 2a and 2b datasets, achieving average accuracies of 77.8 % and 80.1 %, respectively. The experimental results show that the proposed framework outperforms traditional deep adversarial domain adaptation methods.
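The first step of the abstract, decomposing EEG into overlapping frequency views, can be sketched as follows. This is a minimal illustration using zero-phase FFT masking, not the paper's implementation; the band edges, channel count, and sampling rate are assumptions chosen to resemble motor-imagery settings (the BCI Competition IV 2a recordings use 22 EEG channels at 250 Hz).

```python
import numpy as np

def frequency_views(eeg, fs, bands):
    """Split a multichannel EEG trial (channels x samples) into
    overlapping frequency-band views via FFT masking.
    Hypothetical helper illustrating the multi-view decomposition
    step; the band edges are assumptions, not the paper's choices."""
    n = eeg.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)   # frequency of each FFT bin
    spec = np.fft.rfft(eeg, axis=-1)
    views = []
    for lo, hi in bands:
        mask = (freqs >= lo) & (freqs <= hi)  # ideal band-pass mask
        views.append(np.fft.irfft(spec * mask, n=n, axis=-1))
    return np.stack(views)                    # (n_views, channels, samples)

# Example: one 22-channel trial, 4 s at 250 Hz, four overlapping views
rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))
bands = [(4, 14), (10, 22), (18, 30), (26, 38)]  # overlapping bands (assumed)
views = frequency_views(trial, fs=250, bands=bands)
print(views.shape)  # (4, 22, 1000)
```

Each view would then feed one branch of the parallel spatiotemporal convolution blocks described in the abstract; the overlap between adjacent bands lets frequency content near band edges appear in more than one view.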