Multi-modal supervised domain adaptation with a multi-level alignment strategy and consistent decision boundaries for cross-subject emotion recognition from EEG and eye movement signals
| Published in | Knowledge-based systems, Vol. 315, p. 113238 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier B.V, 22.04.2025 |
| ISSN | 0950-7051 |
| DOI | 10.1016/j.knosys.2025.113238 |
Summary: Multi-modal emotion recognition systems based on electroencephalogram (EEG) and eye-tracking signals overcome the limitation of incomplete information expressed by a single modality by leveraging the complementarity of multiple modalities. However, the applicability of these systems to new users remains restricted, since signal patterns vary across subjects and degrade recognition performance. Supervised domain adaptation has emerged as an effective way to address this problem by reducing the distribution differences between multi-modal signals from known subjects and those from a new one. Nevertheless, existing works exhibit sub-optimal feature distribution alignment, preventing correct knowledge transfer. Likewise, although multi-modal approaches gain robustness by learning a shared latent space, EEG data remain exposed to noise and perturbations, producing misclassifications near sensitive decision boundaries. To address these issues, we introduce a multi-modal supervised domain adaptation method, named Multi-level Alignment and Consistent Decision Boundaries (MACDB), which follows a three-fold strategy for multi-level feature alignment, comprising modality-specific normalization, angular cosine distance, and Joint Maximum Mean Discrepancy, to achieve (1) alignment within each modality, (2) alignment between modalities, and (3) alignment across domains. In addition, robust decision boundaries are encouraged over the EEG feature space by enforcing consistent predictions under adversarial perturbations of the EEG data. We evaluated our proposal on three public datasets, SEED, SEED-IV, and SEED-V, using leave-one-subject-out cross-validation. Experiments showed that our proposal achieves average accuracies of 86.68%, 85.03%, and 86.48% on SEED, SEED-IV, and SEED-V across the three available sessions, outperforming state-of-the-art results.
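The alignment terms named in the abstract can be illustrated with a minimal NumPy sketch. The function names, the linear-kernel simplification of MMD, and the toy data below are assumptions for illustration only, not the authors' implementation: the paper uses Joint Maximum Mean Discrepancy, which extends plain MMD to joint distributions over multiple network layers, and the actual MACDB losses are defined on learned deep features.

```python
import numpy as np

def mmd_linear(x_src, x_tgt):
    # Linear-kernel Maximum Mean Discrepancy between two feature batches:
    # squared Euclidean distance between their empirical means. This is a
    # simplified stand-in for the paper's Joint MMD across-domain term.
    delta = x_src.mean(axis=0) - x_tgt.mean(axis=0)
    return float(delta @ delta)

def cosine_alignment(f_eeg, f_eye):
    # Angular (cosine) distance between paired EEG and eye-movement
    # features: 1 minus the mean cosine similarity over the batch,
    # illustrating the between-modality alignment term.
    num = np.sum(f_eeg * f_eye, axis=1)
    den = np.linalg.norm(f_eeg, axis=1) * np.linalg.norm(f_eye, axis=1)
    return float(np.mean(1.0 - num / den))

# Hypothetical toy batches: 4 samples, 8-dim features per modality/domain.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 8))
tgt = rng.normal(size=(4, 8))
print(mmd_linear(src, tgt))       # non-negative domain-discrepancy estimate
print(cosine_alignment(src, src)) # 0.0 for identical (perfectly aligned) features
```

In training, such terms would be summed with the classification loss, so that minimizing the total objective simultaneously pulls source and target feature distributions together and aligns the two modalities in the shared latent space.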