LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning

Bibliographic Details
Published in IEEE Transactions on Neural Networks and Learning Systems, Vol. 32, No. 3, pp. 962-974
Main Authors Zhang, Jingjing; Simeone, Osvaldo
Format Journal Article
Language English
Published United States: IEEE, 01.03.2021
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
Subjects
Online Access Get full text
ISSN 2162-237X
2162-2388
DOI 10.1109/TNNLS.2020.2979762


Abstract Gradient-based distributed learning in parameter server (PS) computing architectures is subject to random delays due to straggling worker nodes and to possible communication bottlenecks between PS and workers. Solutions have been recently proposed to separately address these impairments based on the ideas of gradient coding (GC), worker grouping, and adaptive worker selection. This article provides a unified analysis of these techniques in terms of wall-clock time, communication, and computation complexity measures. Furthermore, in order to combine the benefits of GC and grouping in terms of robustness to stragglers with the communication and computation load gains of adaptive selection, novel strategies, named lazily aggregated GC (LAGC) and grouped-LAG (G-LAG), are introduced. Analysis and results show that G-LAG provides the best wall-clock time and communication performance while maintaining a low computational cost, for two representative distributions of the computing times of the worker nodes.
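The abstract above combines two complementary ideas: redundancy across workers (gradient coding and grouping) to tolerate stragglers, and lazy aggregation (LAG) to skip uploads from workers whose gradients have barely changed. The Python sketch below illustrates how these two mechanisms can sit together in one parameter-server loop. It is a minimal illustration only: the straggler model, the within-group replication scheme, the 5% reuse threshold, and all names are assumptions of this sketch, not the LAGC/G-LAG algorithms analyzed in the article.

```python
# Illustrative sketch only (NOT the paper's LAGC/G-LAG schemes): a synchronous
# parameter-server loop for least-squares regression where (i) every worker in a
# group replicates the group's data shard, so one surviving replica is enough
# (the grouping/fractional-repetition idea behind gradient coding), and (ii) a
# group's upload is skipped when its gradient has changed little since the last
# upload (the lazy-aggregation idea of LAG).
import numpy as np

rng = np.random.default_rng(0)
d, n_groups, workers_per_group, rounds, lr = 5, 3, 2, 50, 0.1
X = rng.normal(size=(120, d))
y = X @ rng.normal(size=d) + 0.01 * rng.normal(size=120)
# One data shard per group; every worker in a group holds the whole shard (replication).
group_shards = np.array_split(np.arange(len(y)), n_groups)

def group_gradient(theta, idx):
    """Gradient of the mean squared loss on one group's shard (what each replica computes)."""
    Xi, yi = X[idx], y[idx]
    return 2.0 * Xi.T @ (Xi @ theta - yi) / len(yi)

theta = np.zeros(d)
stale = [np.zeros(d) for _ in range(n_groups)]  # last uploaded gradient per group, kept at the PS

for t in range(rounds):
    agg = np.zeros(d)
    uploads = 0
    for g, idx in enumerate(group_shards):
        # Straggler tolerance via replication: the PS only needs the fastest replica,
        # so the group's fresh gradient is lost only if all of its replicas straggle.
        finished = rng.random(workers_per_group) > 0.3  # assumed i.i.d. straggling model
        if finished.any():
            g_new = group_gradient(theta, idx)
            # Lazy aggregation: upload only if the gradient moved enough since the last upload.
            if np.linalg.norm(g_new - stale[g]) > 0.05 * np.linalg.norm(g_new):
                stale[g] = g_new
                uploads += 1
        agg += stale[g]  # otherwise the PS reuses the stale copy (no communication)
    theta -= lr * agg / n_groups
    if t % 10 == 0:
        loss = np.mean((X @ theta - y) ** 2)
        print(f"round {t:2d}  loss={loss:.4f}  uploads={uploads}/{n_groups}")
```

In this toy run, groups that straggle or whose gradients are nearly unchanged contribute the stale gradients stored at the PS, so both waiting time and per-round communication drop, at the cost of the replicated computation inside each group; quantifying that trade-off in wall-clock time, communication, and computation is what the article does for the actual LAGC and G-LAG strategies.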
Author Zhang, Jingjing
Simeone, Osvaldo
Author_xml – sequence: 1
  givenname: Jingjing
  orcidid: 0000-0003-1498-4912
  surname: Zhang
  fullname: Zhang, Jingjing
  email: jingjing.1.zhang@kcl.ac.uk
  organization: Department of Informatics, King's College London, London, U.K
– sequence: 2
  givenname: Osvaldo
  orcidid: 0000-0001-9898-3209
  surname: Simeone
  fullname: Simeone, Osvaldo
  email: osvaldo.simeone@kcl.ac.uk
  organization: Department of Informatics, King's College London, London, U.K
CODEN ITNNAL
ContentType Journal Article
Copyright Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2021
DOI 10.1109/TNNLS.2020.2979762
Discipline Computer Science
EISSN 2162-2388
EndPage 974
Genre orig-research
Journal Article
GrantInformation_xml – fundername: European Research Council through the European Union’s Horizon 2020 Research and Innovation Program
  grantid: 725731
  funderid: 10.13039/501100000781
ISSN 2162-237X
2162-2388
Issue 3
Language English
License https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html
https://doi.org/10.15223/policy-029
https://doi.org/10.15223/policy-037
other-oa
ORCID 0000-0001-9898-3209
0000-0003-1498-4912
OpenAccessLink https://kclpure.kcl.ac.uk/ws/files/126958281/LAGC_Lazily_Aggregated_Gradient_ZHANG_Acc2May2020Epub3Apr2020_GREEN_AAM.pdf
PMID 32287013
PageCount 13
PublicationCentury 2000
PublicationDate 2021-03-01
PublicationDateYYYYMMDD 2021-03-01
PublicationDecade 2020
PublicationPlace United States
PublicationTitle IEEE Transactions on Neural Networks and Learning Systems
PublicationTitleAbbrev TNNLS
PublicationTitleAlternate IEEE Trans Neural Netw Learn Syst
PublicationYear 2021
Publisher IEEE
The Institute of Electrical and Electronics Engineers, Inc. (IEEE)
StartPage 962
SubjectTerms Adaptive selection
Coding
Communication
Computational complexity
Computer applications
Computer architecture
Computing costs
distributed learning
Encoding
gradient descent (GD)
grouping
Learning
Nodes
Redundancy
Robustness
Robustness (mathematics)
Servers
Title LAGC: Lazily Aggregated Gradient Coding for Straggler-Tolerant and Communication-Efficient Distributed Learning
URI https://ieeexplore.ieee.org/document/9056809
https://www.ncbi.nlm.nih.gov/pubmed/32287013
https://www.proquest.com/docview/2498673004
https://www.proquest.com/docview/2390153836
https://kclpure.kcl.ac.uk/ws/files/126958281/LAGC_Lazily_Aggregated_Gradient_ZHANG_Acc2May2020Epub3Apr2020_GREEN_AAM.pdf
UnpaywallVersion submittedVersion
Volume 32