Dynamic Distribution Alignment With Dual-Subspace Mapping for Cross-Subject Driver Mental State Detection

Bibliographic Details
Published in: IEEE Transactions on Cognitive and Developmental Systems, Vol. 14, No. 4, pp. 1705-1716
Main Authors: Cui, Jin; Jin, Xuanyu; Hu, Hua; Zhu, Li; Ozawa, Kenji; Pan, Gang; Kong, Wanzeng
Format: Journal Article
Language: English
Published: Piscataway: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.12.2022
ISSN: 2379-8920, 2379-8939
DOI: 10.1109/TCDS.2021.3137530


More Information
Summary: For electroencephalogram-based detection of driving mental states, it is important to use transfer learning to overcome individual and period differences. However, existing unsupervised domain adaptation methods face two challenges: 1) they ignore the geometric divergence between the source and target domains by relying on a single subspace mapping and 2) they usually employ a fixed weight to align the marginal and conditional probability distributions. In this article, we propose a dynamic distribution alignment with dual-subspace mapping (DDADSM) method for cross-subject driver mental state detection. First, DDADSM learns two optimally aligned subspaces for the source and target domains, which significantly reduces the geometric shift. Then, a dynamic probability distribution alignment method is introduced to obtain an adaptive weight between the marginal and conditional distributions, allowing the method to adapt to domains with wide variations. In experiments on the driving mental state detection task, DDADSM outperformed state-of-the-art models.
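
As a rough illustration of the dynamic weighting idea mentioned in the summary, the sketch below estimates a balance factor between the marginal (whole-domain) and conditional (per-class) divergences from data, in the spirit of dynamic distribution alignment. It is not the authors' implementation: the proxy A-distance estimator, the use of logistic regression as the domain classifier, and the names proxy_a_distance and estimate_mu are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score


def proxy_a_distance(Xs, Xt):
    # Proxy A-distance: train a classifier to separate source from target samples;
    # the harder the separation, the smaller the divergence. d_A = 2 * (1 - 2 * err).
    X = np.vstack([Xs, Xt])
    y = np.hstack([np.zeros(len(Xs)), np.ones(len(Xt))])
    err = 1.0 - cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=3).mean()
    return max(2.0 * (1.0 - 2.0 * err), 1e-6)


def estimate_mu(Xs, ys, Xt, yt_pseudo):
    # Adaptive balance factor mu in (0, 1) between the marginal divergence and the
    # conditional divergence computed per class using target pseudo-labels.
    d_marginal = proxy_a_distance(Xs, Xt)
    d_cond = []
    for c in np.unique(ys):
        Xs_c, Xt_c = Xs[ys == c], Xt[yt_pseudo == c]
        if len(Xs_c) >= 4 and len(Xt_c) >= 4:  # need enough samples for 3-fold CV
            d_cond.append(proxy_a_distance(Xs_c, Xt_c))
    d_conditional = np.mean(d_cond) if d_cond else d_marginal
    mu = d_marginal / (d_marginal + d_conditional)  # mu near 1: marginal shift dominates
    return float(mu)

In a full method of the kind described above, such a balance factor would weight the marginal and conditional alignment terms in the subspace-learning objective and would be re-estimated as the target pseudo-labels improve across iterations.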