Improving Cross-Subject Emotion Recognition Performance with an Encoder-Decoder Structure

Bibliographic Details
Published in: 2024 46th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Vol. 2024, pp. 1-4
Main Authors: Cui, Haowei; Shi, Hanwen; Lu, Bao-Liang; Zheng, Wei-Long
Format: Conference Proceeding; Journal Article
Language: English
Published: United States: IEEE, 01.07.2024
ISSN: 2694-0604
DOI: 10.1109/EMBC53108.2024.10782033

Summary: Emotion recognition based on EEG is an important task in the field of affective computing. Due to individual differences, the emotion recognition performance of cross-subject models is significantly lower than that of subject-dependent models. To minimize the performance degradation caused by differences in the EEG distributions of different subjects, domain adaptation algorithms have been used to transfer knowledge from source to target domains and have achieved good performance in cross-subject emotion recognition. However, most domain adaptation methods do not take into account the possible correspondence between samples in the source and target domains. Therefore, we adopt an encoder-decoder architecture, the EEG converter, which exploits the time-alignment condition between the source and target domains during training. In the EEG converter, the encoder consists of a series of convolutional layers and max-pooling layers, and the decoder consists of a series of upsampling layers and convolutional layers. We use the EEG converter to transfer the differential entropy features of EEG signals from one subject to another on the SEED, SEED-IV, and SEED-V datasets. The results show that the feature transfer performed by the EEG converter significantly improves emotion recognition performance and outperforms existing domain adaptation algorithms.
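As a rough illustration of the architecture described in the summary, the following PyTorch sketch pairs a convolution and max-pooling encoder with an upsampling and convolution decoder, and trains it to map one subject's differential entropy (DE) features onto a time-aligned partner subject's features. The 310-dimensional DE input (62 channels x 5 frequency bands, as commonly used with SEED), the layer sizes, and the mean-squared-error objective are illustrative assumptions, not the paper's exact configuration.

import torch
import torch.nn as nn

class EEGConverter(nn.Module):
    """Sketch of an encoder-decoder that converts DE features between subjects."""

    def __init__(self):
        super().__init__()
        # Encoder: 1-D convolutions over the DE feature axis with max pooling.
        self.encoder = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
        )
        # Decoder: upsampling followed by convolutions back to a single channel.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="nearest"),
            nn.Conv1d(32, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, kernel_size=5, padding=2),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 310) DE features of the source subject.
        z = self.encoder(x.unsqueeze(1))   # (batch, 32, 155)
        return self.decoder(z).squeeze(1)  # (batch, 310)

def train_step(model, optimizer, src_de, tgt_de):
    # The time-alignment condition: source and target subjects watch the same
    # stimuli, so sample t of the source corresponds to sample t of the target,
    # which supervises the conversion with a simple reconstruction loss.
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(src_de), tgt_de)
    loss.backward()
    optimizer.step()
    return loss.item()

# Minimal usage with random stand-in features (batch of 32 time-aligned samples).
model = EEGConverter()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss = train_step(model, optimizer, torch.randn(32, 310), torch.randn(32, 310))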