Robust Distributed Learning Against Both Distributional Shifts and Byzantine Attacks

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems, Vol. 36, No. 5, pp. 7955–7969
Main Authors: Zhou, Guanqiang; Xu, Ping; Wang, Yue; Tian, Zhi
Format: Journal Article
Language: English
Published: United States: IEEE, 01.05.2025
Subjects: Byzantine attacks; Computational modeling; Computer aided instruction; Convergence; Distance learning; distributed learning; distributional shifts; NIST; norm-based screening (NBS); Robustness; Servers; Wasserstein distance
Online Access: Get full text
ISSN: 2162-237X
EISSN: 2162-2388
DOI: 10.1109/TNNLS.2024.3436149
PMID: 39178077


Abstract: In distributed learning systems, robustness threats may arise from two major sources. On the one hand, due to distributional shifts between training data and test data, the trained model could exhibit poor out-of-sample performance. On the other hand, a portion of the working nodes might be subject to Byzantine attacks, which could invalidate the learning result. In this article, we propose a new research direction that jointly considers distributional shifts and Byzantine attacks. We illuminate the major challenges in addressing these two issues simultaneously. Accordingly, we design a new algorithm that equips distributed learning with both distributional robustness and Byzantine robustness. Our algorithm is built on recent advances in distributionally robust optimization (DRO) as well as norm-based screening (NBS), a robust aggregation scheme against Byzantine attacks. We provide convergence proofs for the proposed algorithm in three cases, with the learning model being nonconvex, convex, and strongly convex, shedding light on its convergence behaviors and endurability against Byzantine attacks. In particular, we deduce that any algorithm employing NBS (including ours) cannot converge when the percentage of Byzantine nodes is $1/3$ or higher, instead of $1/2$, which is the common belief in the current literature. The experimental results verify our theoretical findings (on the breakpoint of NBS and others) and also demonstrate the effectiveness of our algorithm against both robustness issues, justifying our choice of NBS over other widely used robust aggregation schemes. To the best of our knowledge, this is the first work to address distributional shifts and Byzantine attacks simultaneously.
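For orientation, below is the standard Wasserstein-ball DRO objective that work of this kind typically builds on (the subject terms above list Wasserstein distance); the radius $\rho$ and loss $\ell$ are generic placeholders, and the paper's exact formulation may differ:

$$
\min_{\theta}\ \sup_{Q:\, W(Q,\widehat{P}_n)\le \rho}\ \mathbb{E}_{\xi\sim Q}\big[\ell(\theta;\xi)\big],
$$

where $\widehat{P}_n$ is the empirical distribution of the training samples and $W(\cdot,\cdot)$ denotes the Wasserstein distance. The inner supremum hedges against any test distribution within radius $\rho$ of the training data, which is how DRO addresses the distributional-shift threat described in the abstract.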
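To make the aggregation side concrete, here is a minimal Python sketch of norm-based screening as the abstract describes it, assuming the common form in which the server discards the received gradients with the largest norms before averaging; the function name, the screening count, and the toy data are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def norm_based_screening(grads, num_screened):
    """Aggregate worker gradients by norm-based screening (NBS).

    Sketch under the assumption that NBS sorts the received gradient
    vectors by Euclidean norm, drops the `num_screened` largest ones
    (the most suspicious outliers), and averages the survivors.
    The paper's exact screening rule may differ.
    """
    grads = np.asarray(grads)                  # shape: (num_workers, dim)
    norms = np.linalg.norm(grads, axis=1)      # one norm per worker
    keep = np.argsort(norms)[: len(grads) - num_screened]
    return grads[keep].mean(axis=0)

# Toy usage: 8 honest workers plus 3 Byzantine workers sending huge vectors.
rng = np.random.default_rng(0)
honest = rng.normal(loc=1.0, scale=0.1, size=(8, 5))
byzantine = rng.normal(loc=100.0, scale=1.0, size=(3, 5))
agg = norm_based_screening(np.vstack([honest, byzantine]), num_screened=3)
print(agg)  # close to the honest mean of roughly 1.0 per coordinate
```

Note that the 3 Byzantine workers out of 11 here stay just under the $1/3$ fraction at which, per the abstract, no NBS-based algorithm can converge.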
Authors:
– Zhou, Guanqiang (gzhou4@gmu.edu); ORCID: 0000-0001-9072-3349; Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
– Xu, Ping; ORCID: 0000-0003-4810-7133; Department of Electrical and Computer Engineering, University of Texas Rio Grande Valley, Edinburg, TX, USA
– Wang, Yue; ORCID: 0000-0002-3596-2280; Department of Computer Science, Georgia State University, Atlanta, GA, USA
– Tian, Zhi; ORCID: 0000-0002-2738-6826; Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
CODEN: ITNNAL
Funding: National Science Foundation (funder ID 10.13039/100000001); grants 1939553, 2003211, 2128596, 2231209, 2413622
URI: https://ieeexplore.ieee.org/document/10645680
https://www.ncbi.nlm.nih.gov/pubmed/39178077
https://www.proquest.com/docview/3096559022