Variational Distillation for Multi-View Learning
| Published in | IEEE transactions on pattern analysis and machine intelligence Vol. 46; no. 7; pp. 4551 - 4566 |
|---|---|
| Main Authors | Tian, Xudong; Zhang, Zhizhong; Wang, Cong; Zhang, Wensheng; Qu, Yanyun; Ma, Lizhuang; Wu, Zongze; Xie, Yuan; Tao, Dacheng |
| Format | Journal Article |
| Language | English |
| Published | United States: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.07.2024 |
| ISSN | 0162-8828 (print); 1939-3539, 2160-9292 (electronic) |
| DOI | 10.1109/TPAMI.2023.3343717 |
| Abstract | Information Bottleneck (IB) provides an information-theoretic principle for multi-view learning by revealing the various components contained in each viewpoint. This highlights the necessity of capturing their distinct roles to achieve view-invariant and predictive representations, but the problem remains under-explored due to the technical intractability of modeling and organizing innumerable mutual information (MI) terms. Recent studies show that sufficiency and consistency play such key roles in multi-view representation learning, and can be preserved via a variational distillation framework. However, when generalized to arbitrary viewpoints, this strategy fails because the mutual information terms of consistency become complicated. This paper presents Multi-View Variational Distillation (MV²D), tackling the above limitations for generalized multi-view learning. Uniquely, MV²D can recognize useful consistent information and prioritize diverse components by their generalization ability. This guides an analytical and scalable solution to achieving both sufficiency and consistency. Additionally, by rigorously reformulating the IB objective, MV²D tackles the difficulties in MI optimization and fully realizes the theoretical advantages of the information bottleneck principle. We extensively evaluate our model on diverse tasks to verify its effectiveness, where the considerable gains provide key insights into achieving generalized multi-view representations under a rigorous information-theoretic principle. |
| Authors | Xudong Tian (ORCID 0000-0002-3394-4986; School of Computer Science and Technology, East China Normal University, Shanghai, China); Zhizhong Zhang (0000-0001-6905-4478; East China Normal University, Shanghai, China); Cong Wang (0000-0002-4539-2525; Distributed and Parallel Software Laboratory, 2012 Labs, Huawei Technologies, Hangzhou, China); Wensheng Zhang (0000-0002-9120-9736; Institute of Automation, Chinese Academy of Sciences, Beijing, China); Yanyun Qu (0000-0002-8926-4162; School of Information Science and Technology, Xiamen University, Fujian, China); Lizhuang Ma (0000-0003-1653-4341; East China Normal University, Shanghai, China); Zongze Wu (0000-0002-0597-1426; College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China); Yuan Xie (0000-0001-6945-7437; East China Normal University, Shanghai, China); Dacheng Tao (0000-0001-7225-5449; JD Explorer Academy, China, and The University of Sydney, Camperdown, NSW, Australia) |
| CODEN | ITPIDJ |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| Discipline | Engineering; Computer Science |
| Funding | Natural Science Foundation of Shanghai (23ZR1420400); National Key Research and Development Program of China (2021ZD0111000); CAAI-Huawei MindSpore Open Fund; National Natural Science Foundation of China (62222602, U23A20343, 62176092, 62106075); Science and Technology Commission of Shanghai Municipality (21511100700); Natural Science Foundation of Chongqing (CSTB2023NSCQ-JQX0007, CSTB2023NSCQ-MSX0137); CCF-Tencent Rhino-Bird Young Faculty Open Research Fund (RAGR20230121) |
| IsPeerReviewed | true |
| IsScholarly | true |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| PMID | 38133979 |
| PageCount | 16 |
| PublicationTitleAbbrev | TPAMI |
| PublicationTitleAlternate | IEEE Trans Pattern Anal Mach Intell |
| SubjectTerms | Consistency; Distillation; information bottleneck; Information theory; knowledge distillation; Learning; Multi-view learning; Mutual information; Optimization; Pattern analysis; Predictive models; Principles; Representation learning; Representations; Task analysis; variational inference; Visualization |
| URI | https://ieeexplore.ieee.org/document/10372503 https://www.ncbi.nlm.nih.gov/pubmed/38133979 https://www.proquest.com/docview/3064713324 https://www.proquest.com/docview/2905525523 |
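As background for the abstract, the classical Information Bottleneck objective that the paper reformulates is conventionally written (following Tishby et al.; this sketch is not reproduced from the record itself) as:

```latex
% Classical Information Bottleneck Lagrangian: learn a stochastic encoding
% p(z|x) of input X that is maximally compressive (small I(X;Z)) while
% remaining predictive of the target Y (large I(Z;Y));
% \beta controls the compression-prediction trade-off.
\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y)
```

The abstract's "innumerable mutual information terms" arise because, with many views, each view contributes its own sufficiency and consistency MI terms of this form.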