Multimodal Semi-Supervised Domain Adaptation Using Cross-Modal Learning and Joint Distribution Alignment for Cross-Subject Emotion Recognition

Bibliographic Details
Published in IEEE Transactions on Instrumentation and Measurement, Vol. 74, pp. 1-12
Main Authors Jimenez-Guarneros, Magdiel; Fuentes-Pineda, Gibran; Grande-Barreto, Jonas
Format Journal Article
Language English
Published New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2025
ISSN 0018-9456
EISSN 1557-9662
DOI 10.1109/TIM.2025.3551924


Abstract Multimodal physiological data from electroencephalogram (EEG) and eye movement (EM) signals have been shown to be useful in effectively recognizing human emotional states. Unfortunately, individual differences reduce the applicability of existing multimodal classifiers to new users, as low performance is usually observed. Indeed, existing works mainly focus on multimodal domain adaptation from a labeled source domain and an unlabeled target domain to address the mentioned problem, transferring knowledge from known subjects to new ones. However, a limited set of labeled target data has not been effectively exploited to enhance the knowledge transfer between subjects. In this article, we propose a multimodal semi-supervised domain adaptation (SSDA) method, called cross-modal learning and joint distribution alignment (CMJDA), to address the limitations of existing works, following three strategies: 1) discriminative features are exploited per modality through independent neural networks; 2) correlated features and consistent predictions are produced between modalities; and 3) marginal and conditional distributions are encouraged to be similar between the labeled source data, limited labeled target data, and abundant unlabeled target data. We conducted comparison experiments on two public benchmarks for emotion recognition, SEED-IV and SEED-V, using leave-one-out cross-validation (LOOCV). Our proposal achieves an average accuracy of 92.50%-96.13% across the three available sessions on SEED-IV and SEED-V, using only three labeled target samples per class from the first recorded trial.
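To make the abstract's three strategies concrete, the sketch below shows one way they could be combined in a single training objective: independent per-modality networks for EEG and EM, a cross-modal prediction-consistency term on unlabeled target data, and an alignment term between source and target features. This is a minimal illustration, not the authors' implementation: the class ModalityNet, the feature dimensions (310 for EEG, 33 for EM), the number of classes, the loss weights, and the use of a linear-kernel MMD in place of the paper's full joint distribution alignment are all assumptions made for brevity.

```python
# Minimal sketch (PyTorch), not the authors' code: hypothetical names and
# dimensions throughout. It combines (1) per-modality classification losses,
# (2) a cross-modal consistency loss on unlabeled target data, and (3) a
# linear-kernel MMD as a stand-in for joint distribution alignment.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityNet(nn.Module):
    """Independent encoder + classifier for one modality (EEG or EM)."""
    def __init__(self, in_dim, feat_dim=64, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 128), nn.ReLU(),
            nn.Linear(128, feat_dim), nn.ReLU())
        self.classifier = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return z, self.classifier(z)

def mmd(a, b):
    """Linear-kernel MMD: squared distance between batch feature means."""
    return (a.mean(0) - b.mean(0)).pow(2).sum()

eeg_net, em_net = ModalityNet(in_dim=310), ModalityNet(in_dim=33)

# Toy batches: labeled source, a few labeled target, unlabeled target.
xs_eeg, xs_em, ys = torch.randn(32, 310), torch.randn(32, 33), torch.randint(0, 4, (32,))
xt_eeg, xt_em, yt = torch.randn(12, 310), torch.randn(12, 33), torch.randint(0, 4, (12,))
xu_eeg, xu_em = torch.randn(32, 310), torch.randn(32, 33)

zs_e, ps_e = eeg_net(xs_eeg)
zs_m, ps_m = em_net(xs_em)
zt_e, pt_e = eeg_net(xt_eeg)
zt_m, pt_m = em_net(xt_em)
zu_e, pu_e = eeg_net(xu_eeg)
zu_m, pu_m = em_net(xu_em)

# 1) Discriminative per-modality losses on all labeled data (source + few labeled target).
cls = (F.cross_entropy(ps_e, ys) + F.cross_entropy(ps_m, ys)
       + F.cross_entropy(pt_e, yt) + F.cross_entropy(pt_m, yt))

# 2) Cross-modal learning: symmetric KL between the two modalities' predictions
#    on unlabeled target data encourages consistent outputs.
consistency = (F.kl_div(F.log_softmax(pu_e, dim=1), F.softmax(pu_m, dim=1), reduction="batchmean")
               + F.kl_div(F.log_softmax(pu_m, dim=1), F.softmax(pu_e, dim=1), reduction="batchmean"))

# 3) Alignment between source and (labeled + unlabeled) target features, per modality.
#    The weights 0.1 and 0.5 below are placeholders, not tuned values.
align = mmd(zs_e, torch.cat([zt_e, zu_e])) + mmd(zs_m, torch.cat([zt_m, zu_m]))

loss = cls + 0.1 * consistency + 0.5 * align
loss.backward()
```

The paper's joint alignment additionally matches class-conditional distributions across the labeled source, limited labeled target, and unlabeled target data; the sketch collapses that to a single marginal term per modality.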
Author Jimenez-Guarneros, Magdiel
Fuentes-Pineda, Gibran
Grande-Barreto, Jonas
Author_xml – sequence: 1
  givenname: Magdiel
  orcidid: 0000-0001-9675-7494
  surname: Jimenez-Guarneros
  fullname: Jimenez-Guarneros, Magdiel
  email: mjmnzg@gmail.com
  organization: Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico
– sequence: 2
  givenname: Gibran
  orcidid: 0000-0002-1964-8208
  surname: Fuentes-Pineda
  fullname: Fuentes-Pineda, Gibran
  email: gibranfp@unam.mx
  organization: Department of Computer Science, Instituto de Investigaciones en Matemáticas Aplicadas y en Sistemas (IIMAS), Universidad Nacional Autónoma de México (UNAM), Mexico City, Mexico
– sequence: 3
  givenname: Jonas
  orcidid: 0000-0003-3789-1479
  surname: Grande-Barreto
  fullname: Grande-Barreto, Jonas
  email: jonas.barreto385@uppuebla.edu.mx
  organization: Department of Information Technology Engineering, Universidad Politécnica de Puebla, Puebla, Mexico
CODEN IEIMAO
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025
DOI 10.1109/TIM.2025.3551924
Discipline Engineering
Physics
EISSN 1557-9662
EndPage 12
ExternalDocumentID 10_1109_TIM_2025_3551924
10929649
Genre orig-research
GrantInformation_xml – fundername: Universidad Nacional Autónoma de México (UNAM) Posdoctoral Program (POSDOC)
  funderid: 10.13039/501100005739
ISSN 0018-9456
IsPeerReviewed true
IsScholarly true
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0001-9675-7494
0000-0002-1964-8208
0000-0003-3789-1479
PQID 3185347129
PQPubID 85462
PageCount 12
PublicationPlace New York
PublicationTitle IEEE transactions on instrumentation and measurement
PublicationTitleAbbrev TIM
PublicationYear 2025
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 1
SubjectTerms Adaptation
Alignment
Brain modeling
Data mining
Deep learning
electroencephalogram (EEG)
Electroencephalography
Emotion recognition
Emotional factors
Emotions
eye movement (EM)
Eye movements
Feature extraction
Knowledge transfer
Learning
Magnetic heads
multimodal semi-supervised domain adaptation (SSDA)
Neural networks
Proposals
Signal to noise ratio
Training
Title Multimodal Semi-Supervised Domain Adaptation Using Cross-Modal Learning and Joint Distribution Alignment for Cross-Subject Emotion Recognition
URI https://ieeexplore.ieee.org/document/10929649
https://www.proquest.com/docview/3185347129
Volume 74