Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-Based Emotion Recognition

Bibliographic Details
Published in IEEE Transactions on Affective Computing, Vol. 16, no. 1, pp. 290-305
Main Authors: Ye, Weishan; Zhang, Zhiguo; Teng, Fei; Zhang, Min; Wang, Jianhong; Ni, Dong; Li, Fali; Xu, Peng; Liang, Zhen
Format Journal Article
Language English
Published Piscataway: IEEE, 01.01.2025
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
ISSN 1949-3045
DOI 10.1109/TAFFC.2024.3433470

Abstract Electroencephalography (EEG) is an objective tool for emotion recognition with promising applications. However, the scarcity of labeled data remains a major challenge in this field, limiting the widespread use of EEG-based emotion recognition. In this paper, a semi-supervised Dual-stream Self-attentive Adversarial Graph Contrastive learning framework (termed DS-AGC) is proposed to tackle the challenge of limited labeled data in cross-subject EEG-based emotion recognition. The DS-AGC framework includes two parallel streams for extracting non-structural and structural EEG features. The non-structural stream incorporates a semi-supervised multi-domain adaptation method to alleviate the distribution discrepancy among the labeled source domain, the unlabeled source domain, and the unknown target domain. The structural stream develops a graph contrastive learning method to extract effective graph-based feature representations from multiple EEG channels in a semi-supervised manner. Further, a self-attentive fusion module is developed for feature fusion, sample selection, and emotion recognition; it highlights EEG features that are more relevant to emotions, and data samples in the labeled source domain that are closer to the target domain. Extensive experiments are conducted on four benchmark databases (SEED, SEED-IV, SEED-V, and FACED) using a semi-supervised cross-subject leave-one-subject-out cross-validation protocol. The results show that the proposed model outperforms existing methods under different incomplete-label conditions, with an average improvement of 2.17%, demonstrating its effectiveness in addressing the label scarcity problem in cross-subject EEG-based emotion recognition.
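As a concrete illustration of the dual-stream design summarized in the abstract, the following is a minimal PyTorch sketch, not the authors' released implementation: the class name DualStreamFusion, the 62-channel / 5-band differential-entropy input shape, and the simplified structural stream (a learnable channel adjacency standing in for the paper's graph contrastive learning component) are assumptions made only for this example; only the overall layout (non-structural MLP stream, structural graph stream, self-attentive fusion, classifier) follows the description above.

import torch
import torch.nn as nn

class DualStreamFusion(nn.Module):
    # Hypothetical sketch of a dual-stream encoder with self-attentive fusion.
    def __init__(self, n_channels=62, n_bands=5, hidden=64, n_classes=3):
        super().__init__()
        in_dim = n_channels * n_bands
        # Non-structural stream: plain MLP over flattened per-channel features.
        self.non_structural = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )
        # Structural stream: learnable channel adjacency + per-channel projection
        # (a stand-in for the graph-based feature extractor).
        self.adj = nn.Parameter(torch.eye(n_channels))
        self.graph_proj = nn.Linear(n_bands, hidden)
        self.pool = nn.AdaptiveAvgPool1d(1)
        # Self-attentive fusion over the two stream embeddings.
        self.fusion = nn.MultiheadAttention(hidden, num_heads=4, batch_first=True)
        self.classifier = nn.Linear(hidden, n_classes)

    def forward(self, x):                                      # x: (batch, channels, bands)
        z_ns = self.non_structural(x.flatten(1))               # (batch, hidden)
        z_g = self.graph_proj(torch.softmax(self.adj, -1) @ x) # (batch, channels, hidden)
        z_g = self.pool(z_g.transpose(1, 2)).squeeze(-1)       # (batch, hidden)
        tokens = torch.stack([z_ns, z_g], dim=1)               # (batch, 2, hidden)
        fused, _ = self.fusion(tokens, tokens, tokens)         # self-attention across streams
        return self.classifier(fused.mean(dim=1))              # (batch, n_classes)

if __name__ == "__main__":
    model = DualStreamFusion()
    logits = model(torch.randn(8, 62, 5))   # 8 samples, 62 channels, 5 frequency bands
    print(logits.shape)                     # torch.Size([8, 3])

A real training setup would add the semi-supervised domain-adaptation and graph contrastive losses described in the paper; this sketch only shows the forward pass.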
Author Liang, Zhen
Ni, Dong
Xu, Peng
Zhang, Zhiguo
Ye, Weishan
Wang, Jianhong
Li, Fali
Teng, Fei
Zhang, Min
Author_xml – sequence: 1
  givenname: Weishan
  orcidid: 0009-0004-9971-1482
  surname: Ye
  fullname: Ye, Weishan
  email: 2110246024@email.szu.edu.cn
  organization: School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
– sequence: 2
  givenname: Zhiguo
  orcidid: 0000-0001-7992-7965
  surname: Zhang
  fullname: Zhang, Zhiguo
  email: zhiguozhang@hit.edu.cn
  organization: Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
– sequence: 3
  givenname: Fei
  surname: Teng
  fullname: Teng, Fei
  email: 2021222013@email.szu.edu.cn
  organization: School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
– sequence: 4
  givenname: Min
  surname: Zhang
  fullname: Zhang, Min
  email: zhangmin2021@hit.edu.cn
  organization: Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
– sequence: 5
  givenname: Jianhong
  surname: Wang
  fullname: Wang, Jianhong
  email: wangjianhong0755@163.com
  organization: Shenzhen Mental Health Center, Shenzhen Kangning Hospital, Shenzhen, China
– sequence: 6
  givenname: Dong
  orcidid: 0000-0002-9146-6003
  surname: Ni
  fullname: Ni, Dong
  email: nidong@szu.edu.cn
  organization: School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
– sequence: 7
  givenname: Fali
  orcidid: 0000-0002-2450-4591
  surname: Li
  fullname: Li, Fali
  email: fali.li@uestc.edu.cn
  organization: Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 8
  givenname: Peng
  orcidid: 0000-0002-7932-0386
  surname: Xu
  fullname: Xu, Peng
  email: xupeng@uestc.edu.cn
  organization: Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
– sequence: 9
  givenname: Zhen
  orcidid: 0000-0002-1749-2975
  surname: Liang
  fullname: Liang, Zhen
  email: janezliang@szu.edu.cn
  organization: School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen, China
CODEN ITACBQ
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2025
DOI 10.1109/TAFFC.2024.3433470
DatabaseName IEEE Xplore (IEEE)
IEEE All-Society Periodicals Package (ASPP) 1998–Present
IEEE Electronic Library (IEL)
CrossRef
Computer and Information Systems Abstracts
Technology Research Database
ProQuest Computer Science Collection
Advanced Technologies Database with Aerospace
Computer and Information Systems Abstracts – Academic
Computer and Information Systems Abstracts Professional
DatabaseTitle CrossRef
Computer and Information Systems Abstracts
Technology Research Database
Computer and Information Systems Abstracts – Academic
Advanced Technologies Database with Aerospace
ProQuest Computer Science Collection
Computer and Information Systems Abstracts Professional
DatabaseTitleList Computer and Information Systems Abstracts

Database_xml – sequence: 1
  dbid: RIE
  name: IEEE Electronic Library (IEL)
url: https://ieeexplore.ieee.org/
  sourceTypes: Publisher
DeliveryMethod fulltext_linktorsrc
Discipline Computer Science
EISSN 1949-3045
EndPage 305
ExternalDocumentID 10_1109_TAFFC_2024_3433470
10609510
Genre orig-research
GrantInformation_xml – fundername: Shenzhen Science and Technology Research and Development Fund for Sustainable Development Project
  grantid: KCXFZ20201221173613036
– fundername: Shenzhen-Hong Kong Institute of Brain Science-Shenzhen Fundamental Research Institutions
  grantid: 2022SHIBS0003
– fundername: Shenzhen University-Lingnan University Joint Research Programme
– fundername: Medical-Engineering Interdisciplinary Research Foundation of Shenzhen University
  grantid: 2024YG008
– fundername: National Natural Science Foundation of China
  grantid: 62276169; 62071310; 82272114
  funderid: 10.13039/501100001809
ISSN 1949-3045
IsPeerReviewed true
IsScholarly true
Issue 1
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0002-7932-0386
0000-0001-7992-7965
0000-0002-1749-2975
0000-0002-9146-6003
0009-0004-9971-1482
0000-0002-2450-4591
PQID 3172319668
PQPubID 2040414
PageCount 16
ParticipantIDs crossref_primary_10_1109_TAFFC_2024_3433470
proquest_journals_3172319668
ieee_primary_10609510
PublicationCentury 2000
PublicationDate 2025-01-01
PublicationDateYYYYMMDD 2025-01-01
PublicationDate_xml – month: 01
  year: 2025
  text: 2025-01-01
  day: 01
PublicationDecade 2020
PublicationPlace Piscataway
PublicationPlace_xml – name: Piscataway
PublicationTitle IEEE transactions on affective computing
PublicationTitleAbbrev TAFFC
PublicationYear 2025
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Publisher_xml – name: IEEE
– name: The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
SourceID proquest
crossref
ieee
SourceType Aggregation Database
Index Database
Publisher
StartPage 290
SubjectTerms Brain modeling
Data models
domain adaption
EEG
Effectiveness
Electroencephalography
Emotion recognition
Emotions
Feature extraction
graph contrastive learning
Graphical representations
Labels
Learning
semi-supervised learning
Streams
Transfer learning
Title Semi-Supervised Dual-Stream Self-Attentive Adversarial Graph Contrastive Learning for Cross-Subject EEG-Based Emotion Recognition
URI https://ieeexplore.ieee.org/document/10609510
https://www.proquest.com/docview/3172319668
Volume 16