Knowledge Distillation-Based Domain-Invariant Representation Learning for Domain Generalization


Bibliographic Details
Published in IEEE Transactions on Multimedia, Vol. 26, pp. 245-255
Main Authors Niu, Ziwei, Yuan, Junkun, Ma, Xu, Xu, Yingying, Liu, Jing, Chen, Yen-Wei, Tong, Ruofeng, Lin, Lanfen
Format Journal Article
Language English, Japanese
Published IEEE 2024
Institute of Electrical and Electronics Engineers (IEEE)
Subjects
ISSN 1520-9210
1941-0077
DOI 10.1109/TMM.2023.3263549


Abstract Domain generalization (DG) aims to generalize the knowledge learned from multiple source domains to unseen target domains. Existing DG techniques can be subsumed under two broad categories, i.e., domain-invariant representation learning and domain manipulation. Nevertheless, it is extremely difficult to explicitly augment or generate the unseen target data. And when source domain variety increases, developing a domain-invariant model by simply aligning more domain-specific information becomes more challenging. In this article, we propose a simple yet effective method for domain generalization, named Knowledge Distillation based Domain-invariant Representation Learning (KDDRL), that learns domain-invariant representation while encouraging the model to maintain domain-specific features, which recently turned out to be effective for domain generalization. To this end, our method incorporates multiple auxiliary student models and one student leader model to perform a two-stage distillation. In the first-stage distillation, each domain-specific auxiliary student treats the ensemble of other auxiliary students' predictions as a target, which helps to excavate the domain-invariant representation. Also, we present an error removal module to prevent the transfer of faulty information by eliminating incorrect predictions compared to the true labels. In the second-stage distillation, the student leader model with domain-specific features combines the domain-invariant representation learned from the group of auxiliary students to make the final prediction. Extensive experiments and in-depth analysis on popular DG benchmark datasets demonstrate that our KDDRL significantly outperforms the current state-of-the-art methods.
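The abstract describes a concrete mechanism: in the first-stage distillation, each domain-specific auxiliary student is trained toward the ensemble of the *other* students' predictions, and an error removal module drops any peer whose prediction contradicts the true label. The sketch below illustrates that target construction in plain NumPy. It is a minimal illustration under stated assumptions, not the authors' implementation; the function names (`ensemble_target`, `kl_divergence`) and the choice of KL divergence as the distillation loss are illustrative.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over class logits.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def ensemble_target(student_probs, true_label, exclude):
    """First-stage distillation target for auxiliary student `exclude`:
    average the class-probability vectors of the *other* students,
    keeping only those whose argmax matches the true label
    (the error removal step). Returns None if every peer is wrong,
    i.e. there is no trustworthy target to distill from."""
    peers = [p for i, p in enumerate(student_probs)
             if i != exclude and int(np.argmax(p)) == true_label]
    if not peers:
        return None
    return np.mean(peers, axis=0)

def kl_divergence(target, pred, eps=1e-12):
    """KL(target || pred), a standard choice of distillation loss."""
    return float(np.sum(target * (np.log(target + eps) - np.log(pred + eps))))
```

For example, with three auxiliary students where students 1 and 2 predict the true class and student 0 does not, the target for student 0 is the mean of its two correct peers; if no peer predicts the true class, no distillation target is produced for that sample.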
Author Niu, Ziwei
Xu, Yingying
Chen, Yen-Wei
Liu, Jing
Ma, Xu
Yuan, Junkun
Lin, Lanfen
Tong, Ruofeng
Author_xml – sequence: 1
  givenname: Ziwei
  orcidid: 0000-0003-0171-5158
  surname: Niu
  fullname: Niu, Ziwei
  email: nzw@zju.edu.cn
  organization: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
– sequence: 2
  givenname: Junkun
  orcidid: 0000-0003-0012-7397
  surname: Yuan
  fullname: Yuan, Junkun
  email: yuanjk@zju.edu.cn
  organization: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
– sequence: 3
  givenname: Xu
  orcidid: 0000-0003-2864-4708
  surname: Ma
  fullname: Ma, Xu
  email: maxu@zju.edu.cn
  organization: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
– sequence: 4
  givenname: Yingying
  orcidid: 0000-0003-1217-0448
  surname: Xu
  fullname: Xu, Yingying
  email: cs_ying@zju.edu.cn
  organization: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
– sequence: 5
  givenname: Jing
  orcidid: 0000-0002-9031-6433
  surname: Liu
  fullname: Liu, Jing
  email: liujinglj@zhejianglab.edu.cn
  organization: Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China
– sequence: 6
  givenname: Yen-Wei
  orcidid: 0000-0002-5952-0188
  surname: Chen
  fullname: Chen, Yen-Wei
  email: chen@is.ritsumei.ac.jp
  organization: College of Information Science and Engineering, Ritsumeikan University, Kyoto, Japan
– sequence: 7
  givenname: Ruofeng
  orcidid: 0000-0002-8167-5354
  surname: Tong
  fullname: Tong, Ruofeng
  email: trf@zju.edu.cn
  organization: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
– sequence: 8
  givenname: Lanfen
  orcidid: 0000-0003-4098-588X
  surname: Lin
  fullname: Lin, Lanfen
  email: llf@zju.edu.cn
  organization: College of Computer Science and Technology, Zhejiang University, Hangzhou, China
BackLink https://cir.nii.ac.jp/crid/1870302167810357504$$DView record in CiNii
CODEN ITMUF8
ContentType Journal Article
Discipline Engineering
Computer Science
EISSN 1941-0077
EndPage 255
ExternalDocumentID 10_1109_TMM_2023_3263549
10093034
Genre orig-research
GrantInformation_xml – fundername: China Postdoctoral Science Foundation
  grantid: 2020TQ0293
  funderid: 10.13039/501100002858
– fundername: Major Scientific Research Project of Zhejiang Lab
  grantid: 2020ND8AD01
– fundername: Postdoctor Research from Zhejiang Province
  grantid: ZJ2021028
– fundername: Zhejiang Provincial Natural Science Foundation of China
  grantid: LZ22F020012
– fundername: Japanese Ministry for Education, Science, Culture and Sports
  grantid: 20KK0234; 21H03470; 20K21821
– fundername: Major Technological Innovation Project of Hangzhou
  grantid: 2022AIZD0147
ISSN 1520-9210
IsPeerReviewed true
IsScholarly true
Language English
Japanese
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
ORCID 0000-0003-2864-4708
0000-0002-9031-6433
0000-0003-4098-588X
0000-0002-8167-5354
0000-0002-5952-0188
0000-0003-0171-5158
0000-0003-0012-7397
0000-0003-1217-0448
PageCount 11
PublicationDate 2024
PublicationTitle IEEE Transactions on Multimedia
PublicationTitleAbbrev TMM
PublicationYear 2024
Publisher IEEE
Institute of Electrical and Electronics Engineers (IEEE)
StartPage 245
SubjectTerms Adaptation models
Computational modeling
Data models
Domain generalization
domain invariant representation
Feature extraction
knowledge distillation
Predictive models
Representation learning
Training
Title Knowledge Distillation-Based Domain-Invariant Representation Learning for Domain Generalization
URI https://ieeexplore.ieee.org/document/10093034
https://cir.nii.ac.jp/crid/1870302167810357504
Volume 26