Advancing Cross-Subject Domain Generalization in Brain-Computer Interfaces With Multiadversarial Strategies

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, Vol. 74, pp. 1-12
Main Authors: Liu, Yici; Qin, Lang; Chen, Xin; Le Bouquin Jeannes, Regine; Coatrieux, Jean-Louis; Shu, Huazhong
Format: Journal Article
Language: English
Published: New York: IEEE, 2025
ISSN: 0018-9456, 1557-9662
DOI: 10.1109/TIM.2025.3566804

Summary: A cross-subject domain generalization (DG) approach with multiadversarial strategies (DGMA) is introduced to reduce brain-computer interface (BCI) systems' dependency on high-quality, subject-specific electroencephalographic (EEG) data, making them adaptable to unseen domains. DGMA leverages annotated training data from other subjects and consists of three modules: 1) prefeature extraction (PFE), which enhances EEG signal separability through preprocessing, data augmentation, and tangent space mapping; 2) the distribution feature updater (DFU), which aligns intersubject feature distributions with marginal maximum mean discrepancy (MMD); and 3) multiadversarial training (MAT), which initially uses a gradient reversal layer (GRL) to amplify domain differences and the classification loss, allowing the model to learn diverse domain-specific features before minimizing these differences to balance domain transferability and discriminability. DGMA captures domain-specific features more effectively and generalizes better than traditional methods that focus solely on minimizing domain differences. Validated on four motor imagery datasets, DGMA achieved state-of-the-art accuracies of 76.1% on BCI Competition IV 2a and 72.4% on the 002-2014 dataset. Additional tests on a private fatigue dataset and the SEED dataset yielded accuracies of 99.5% and 86.6%, respectively. The code can be found at https://github.com/liuyici/DGMA-BCI
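
To make two of the building blocks named in the summary concrete, the PyTorch sketch below shows a generic gradient reversal layer and a marginal MMD loss with an RBF kernel. This is a hedged illustration, not the authors' released DGMA code (see the linked repository); the function names, the single-kernel RBF choice, and the toy domain classifier are assumptions made for this example.

# Sketch of two components named in the abstract: a gradient reversal layer (GRL)
# and a marginal maximum mean discrepancy (MMD) loss. Names and kernel choice
# are illustrative assumptions, not the DGMA implementation.
import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    # Identity in the forward pass; multiplies the gradient by -lambda in the
    # backward pass, so a feature extractor is trained adversarially against a
    # domain (subject) classifier.

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None


def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)


def mmd_rbf(source, target, sigma=1.0):
    # Marginal MMD^2 between two feature batches using a single RBF kernel
    # (a common simplification; multi-kernel variants are also widely used).
    def kernel(a, b):
        dist = torch.cdist(a, b) ** 2
        return torch.exp(-dist / (2 * sigma ** 2))

    k_ss = kernel(source, source).mean()
    k_tt = kernel(target, target).mean()
    k_st = kernel(source, target).mean()
    return k_ss + k_tt - 2 * k_st


if __name__ == "__main__":
    # Toy usage: features from two subjects' batches (hypothetical shapes).
    feats_a = torch.randn(32, 64, requires_grad=True)   # subject A features
    feats_b = torch.randn(32, 64, requires_grad=True)   # subject B features

    domain_head = nn.Linear(64, 2)                      # toy domain classifier
    # Reverse gradients before the domain head (adversarial step).
    logits = domain_head(grad_reverse(torch.cat([feats_a, feats_b]), lambd=1.0))

    align_loss = mmd_rbf(feats_a, feats_b)              # distribution alignment term
    print(logits.shape, float(align_loss))

In a training loop, the GRL term would be weighted against the MMD alignment term and the task classification loss; how DGMA schedules that trade-off (first amplifying, then minimizing domain differences) is described in the paper itself.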