SDC-Net: A Domain Adaptation Framework with Semantic-Dynamic Consistency for Cross-Subject EEG Emotion Recognition
| Main Authors | , , , , , , , , |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | 24.09.2025 |
| DOI | 10.48550/arxiv.2507.17524 |
| Summary: | Emotion recognition based on electroencephalography (EEG) holds significant promise for affective brain-computer interfaces (aBCIs). However, its practical deployment faces challenges due to inter-subject variability and the scarcity of labeled data in target domains. To overcome these limitations, we propose SDC-Net, a novel Semantic-Dynamic Consistency domain adaptation network for fully label-free cross-subject EEG emotion recognition. First, we introduce a Same-Subject Same-Trial Mixup strategy that generates augmented samples through intra-trial interpolation, enhancing data diversity while explicitly preserving individual identity to mitigate label ambiguity. Second, we construct a dynamic distribution alignment module within the Reproducing Kernel Hilbert Space (RKHS), jointly aligning marginal and conditional distributions through multi-objective kernel mean embedding, and leveraging a confidence-aware pseudo-labeling strategy to ensure stable adaptation. Third, we propose a dual-domain similarity consistency learning mechanism that enforces cross-domain structural constraints based on latent pairwise similarities, facilitating semantic boundary learning without reliance on temporal synchronization or label priors. To validate the effectiveness and robustness of the proposed SDC-Net, extensive experiments are conducted on three widely used EEG benchmark datasets: SEED, SEED-IV, and FACED. Comparative results against existing unsupervised domain adaptation methods demonstrate that SDC-Net achieves state-of-the-art performance in emotion recognition under both cross-subject and cross-session conditions. This advancement significantly improves the accuracy and generalization capability of emotion decoding, laying a solid foundation for real-world applications of personalized aBCIs. The source code is available at: https://github.com/XuanSuTrum/SDC-Net. |
|---|---|
| DOI: | 10.48550/arxiv.2507.17524 |
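The summary above describes the method's components only at a high level. As a rough, non-authoritative illustration of the first one, the Same-Subject Same-Trial Mixup, the sketch below shows what intra-trial interpolation could look like; the function name, arguments, and array shapes are assumptions made for illustration and are not taken from the paper or its repository.

```python
import numpy as np

def same_subject_same_trial_mixup(trial_segments, trial_label,
                                  alpha=0.2, n_augmented=4, rng=None):
    """Sketch: mix pairs of EEG segments drawn from one subject's single trial.

    Because both segments share the trial's emotion label and the subject's
    identity, the interpolated sample inherits that label without ambiguity.

    trial_segments : (n_segments, n_channels, n_features) array for one trial
    trial_label    : emotion label shared by every segment of that trial
    """
    if rng is None:
        rng = np.random.default_rng()
    n = trial_segments.shape[0]
    mixed = []
    for _ in range(n_augmented):
        i, j = rng.choice(n, size=2, replace=False)  # two segments, same subject, same trial
        lam = rng.beta(alpha, alpha)                 # Mixup interpolation coefficient
        mixed.append(lam * trial_segments[i] + (1.0 - lam) * trial_segments[j])
    return np.stack(mixed), np.full(n_augmented, trial_label)
```

Likewise, the marginal part of the RKHS alignment mentioned in the summary amounts to comparing kernel mean embeddings of source and target features. A minimal maximum mean discrepancy (MMD) estimator with an RBF kernel is sketched below, again under assumed names, and without the paper's conditional (class-wise) terms or its confidence-aware pseudo-labeling.

```python
import torch

def rbf_mmd(source_feats, target_feats, sigma=1.0):
    """Squared MMD between two feature batches, i.e. the distance between
    their kernel mean embeddings in the RKHS induced by an RBF kernel."""
    def gram(a, b):
        return torch.exp(-torch.cdist(a, b) ** 2 / (2.0 * sigma ** 2))
    return (gram(source_feats, source_feats).mean()
            + gram(target_feats, target_feats).mean()
            - 2.0 * gram(source_feats, target_feats).mean())
```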