EEG emotion recognition through a domain-adversarial multi-feature fusion network
| Published in | Expert Systems with Applications Vol. 298; p. 129694 |
|---|---|
| Format | Journal Article |
| Language | English |
| Published | Elsevier Ltd, 01.03.2026 |
| ISSN | 0957-4174 |
| DOI | 10.1016/j.eswa.2025.129694 |
Summary:
• We propose an innovative multimodal EEG emotion recognition framework that significantly improves cross-subject generalization through three core mechanisms: multi-layer feature fusion, cross-modal interaction, and dynamic domain adaptation.
• At the feature extraction level, we design a hierarchical convolutional architecture: the low level uses time-domain convolution to capture local rhythmic features, the mid level introduces dilated convolution to expand the receptive field, and the high level extracts global context information through further dilated convolution and a residual connection (see the sketch after this list).
• To address individual differences, we propose a cross-modal feature projection mechanism: age is normalized, gender and education level are encoded, and the resulting individual feature vectors are aligned with the EEG feature maps through spatio-temporal replication and expansion (also sketched below).
• The framework significantly enhances cross-subject generalization by incorporating adversarial training via a gradient reversal layer (GRL) and statistical alignment through multi-kernel maximum mean discrepancy (MK-MMD) loss minimization, forcing the network to suppress domain-specific noise caused by individual differences while preserving emotion-discriminative features.
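A minimal PyTorch sketch of the hierarchical (low/mid/high) extractor described in the highlights. All channel counts, kernel sizes, and dilation rates here are illustrative assumptions, not values reported in the paper.

```python
import torch
import torch.nn as nn

class HierarchicalExtractor(nn.Module):
    """Low level: time-domain conv; mid level: dilated conv; high level:
    further-dilated conv with a residual connection (per the highlights)."""

    def __init__(self, in_ch=32, width=64):  # 32 channels, e.g. DEAP; assumed
        super().__init__()
        # Low level: plain temporal convolution for local rhythmic features.
        self.low = nn.Sequential(
            nn.Conv1d(in_ch, width, kernel_size=7, padding=3),
            nn.BatchNorm1d(width), nn.ReLU(),
        )
        # Mid level: dilated convolution to enlarge the receptive field.
        self.mid = nn.Sequential(
            nn.Conv1d(width, width, kernel_size=7, padding=6, dilation=2),
            nn.BatchNorm1d(width), nn.ReLU(),
        )
        # High level: further-dilated convolution, fused via a residual add.
        self.high = nn.Sequential(
            nn.Conv1d(width, width, kernel_size=7, padding=12, dilation=4),
            nn.BatchNorm1d(width),
        )

    def forward(self, x):                      # x: (batch, channels, time)
        h = self.mid(self.low(x))
        return torch.relu(h + self.high(h))    # residual connection
```

The cross-modal projection can be sketched in the same spirit: age is normalized, gender and education level are one-hot encoded, and the projected individual vector is replicated along the temporal axis of the EEG feature map. The age range, category counts, and additive fusion are assumptions for illustration; the paper's exact fusion operator may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DemographicProjector(nn.Module):
    def __init__(self, n_gender=2, n_edu=4, width=64):  # assumed sizes
        super().__init__()
        self.n_gender, self.n_edu = n_gender, n_edu
        # Project the demographic vector to the EEG feature width.
        self.proj = nn.Linear(1 + n_gender + n_edu, width)

    def forward(self, age, gender, edu, eeg_feat):
        # eeg_feat: (batch, width, time); gender/edu are integer indices.
        age_n = (age.float() - 18.0) / 60.0  # assumed normalization range
        vec = torch.cat([age_n.unsqueeze(1),
                         F.one_hot(gender, self.n_gender).float(),
                         F.one_hot(edu, self.n_edu).float()], dim=1)
        vec = self.proj(vec)                 # (batch, width)
        # Replicate over the temporal dimension to align with EEG features.
        return eeg_feat + vec.unsqueeze(-1).expand_as(eeg_feat)
```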
Accurate recognition of emotion-related EEG signals is crucial for neuroscience and human-computer interaction. However, inter-individual variability in EEG leads to inconsistent feature distributions and limited generalization across subjects. To enhance model robustness, we propose a deep learning approach that integrates a domain-adversarial transfer network with an attention mechanism. First, a feature extractor with a hierarchical (low-mid-high level) architecture captures multi-scale EEG features; age is normalized and gender and education are encoded before being aligned with the EEG features through spatio-temporal replication. Subsequently, global distribution alignment is achieved with multi-kernel maximum mean discrepancy (MK-MMD), subdomain adversarial alignment is accomplished with a gradient reversal layer (GRL), and decision-boundary clarity is enhanced through a joint emotion classification loss. The generalization capability and effectiveness of the model are validated on the DEAP and DREAMER datasets, offering insights for cross-subject emotion recognition research and applications.
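The two alignment objectives from the abstract admit a compact sketch: a gradient reversal layer (GRL) drives a domain discriminator adversarially, while a multi-kernel MMD penalty aligns the feature statistics of source and target subjects. The Gaussian bandwidths, loss weights, and the `discriminator`/`cls_head` names in the usage comment are hypothetical.

```python
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient sign in the backward
    pass so the extractor learns domain-invariant features."""

    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None   # reversed gradient; none for lam

def mk_mmd(xs, xt, sigmas=(1.0, 2.0, 4.0, 8.0)):
    """Multi-kernel MMD^2 between source and target feature batches,
    averaged over a bank of Gaussian kernels (bandwidths assumed)."""
    z = torch.cat([xs, xt], dim=0)
    d2 = torch.cdist(z, z).pow(2)
    k = sum(torch.exp(-d2 / (2 * s ** 2)) for s in sigmas) / len(sigmas)
    n = xs.size(0)
    return k[:n, :n].mean() + k[n:, n:].mean() - 2 * k[:n, n:].mean()

# Usage sketch (names hypothetical): the total loss joins emotion
# classification, adversarial domain confusion, and MK-MMD alignment.
# f_s, f_t = extractor(x_src), extractor(x_tgt)
# d_logits = discriminator(GradReverse.apply(torch.cat([f_s, f_t]), 1.0))
# loss = ce(cls_head(f_s), y) + ce(d_logits, dom_y) + 0.5 * mk_mmd(f_s, f_t)
```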