Towards Resource-Efficient Edge AI: From Federated Learning to Semi-Supervised Model Personalization
| Published in | IEEE transactions on mobile computing Vol. 23; no. 5; pp. 6104 - 6115 |
|---|---|
| Main Authors | Zhang, Zhaofeng; Yue, Sheng; Zhang, Junshan |
| Format | Magazine Article |
| Language | English |
| Published | Los Alamitos: IEEE (The Institute of Electrical and Electronics Engineers, Inc.), 01.05.2024 |
| Subjects | Federated learning; edge intelligence; semi-supervised learning; device heterogeneity; Internet of Things; machine learning |
| Online Access | https://ieeexplore.ieee.org/document/10254288 |
| ISSN | 1536-1233 |
| EISSN | 1558-0660 |
| DOI | 10.1109/TMC.2023.3316189 |
| Abstract | A central question in edge intelligence is "how can an edge device learn its local model with limited data and constrained computing capacity?" In this study, we explore the approach where a global model initialization is first obtained by running federated learning (FL) across multiple edge devices, based on which a semi-supervised algorithm is devised for a single edge device to carry out quick adaptation with its local data. Specifically, to account for device heterogeneity and resource constraints, a global model is first trained via FL, where each device conducts multiple local updates only for its customized subnet. A subset of devices can be selected to upload updates for aggregation during each training round. Further, device scheduling is optimized to minimize the training loss of FL, subject to resource constraints, based on the carefully crafted reward function defined as the one-round progress of FL each device can provide. We examine the convergence behavior of FL for the general non-convex case. For semi-supervised model personalization, we use the FL-based model initialization as a teacher network to impute soft labels on unlabeled data, thereby addressing the insufficiency of labeled data. Experiments are conducted to evaluate the performance of the proposed algorithms. |
|---|---|
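The abstract describes two algorithmic ingredients: device scheduling driven by a reward defined as each device's one-round progress of FL, and a teacher network (the FL-based initialization) that imputes soft labels on unlabeled data for personalization. The numpy sketch below illustrates both ideas in miniature; the function names, the greedy reward-per-cost selection rule, and the temperature parameter are illustrative assumptions, not the paper's actual algorithms.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields softer labels."""
    z = logits / temperature
    z = z - z.max(axis=-1, keepdims=True)  # subtract row max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def impute_soft_labels(teacher_logits, temperature=2.0):
    """Teacher-network soft labels for unlabeled samples (illustrative)."""
    return softmax(teacher_logits, temperature)

def distillation_loss(student_logits, soft_labels, eps=1e-12):
    """Cross-entropy of student predictions against the teacher's soft labels."""
    p = softmax(student_logits)
    return float(-np.mean(np.sum(soft_labels * np.log(p + eps), axis=-1)))

def schedule_devices(rewards, costs, budget):
    """Greedy device selection: highest reward-per-cost first, within a budget.

    rewards[i] stands in for the paper's one-round FL progress of device i;
    this greedy rule is a simplification of the paper's scheduling optimization.
    """
    order = sorted(range(len(rewards)),
                   key=lambda i: rewards[i] / costs[i], reverse=True)
    selected, used = [], 0.0
    for i in order:
        if used + costs[i] <= budget:
            selected.append(i)
            used += costs[i]
    return sorted(selected)
```

For example, with per-round rewards [5, 3, 2], unit costs [2, 2, 2], and a resource budget of 4, the greedy scheduler selects devices 0 and 1; the soft labels produced at temperature 2 are flatter than the teacher's ordinary softmax output, which is the property distillation exploits.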
| Author | Zhang, Zhaofeng; Yue, Sheng; Zhang, Junshan |
| Author Details | Zhaofeng Zhang (zzhan199@asu.edu), School of Electrical, Computer and Energy Engineering, Arizona State University, Tempe, AZ, USA, ORCID 0000-0002-4285-670X; Sheng Yue (shaun.yue@hotmail.com), Department of Computer Science and Technology, Tsinghua University, Beijing, China, ORCID 0009-0001-3416-8181; Junshan Zhang (jazh@ucdavis.edu), College of Engineering, University of California, Davis, CA, USA, ORCID 0000-0002-3840-1753 |
| CODEN | ITMCCJ |
| Copyright | Copyright The Institute of Electrical and Electronics Engineers, Inc. (IEEE) 2024 |
| Discipline | Computer Science |
| Genre | orig-research |
| GrantInformation | CPSF grant 2023M731956; NSFC grant 62302260; National Science Foundation grants CNS-2203239, CNS-2203412, RINGS-2148253, CCSS-2203238 (funder ID 10.13039/501100008982) |
| IsPeerReviewed | true |
| IsScholarly | true |
| License | https://ieeexplore.ieee.org/Xplorehelp/downloads/license-information/IEEE.html https://doi.org/10.15223/policy-029 https://doi.org/10.15223/policy-037 |
| PublicationTitleAbbrev | TMC |
| SubjectTerms | Adaptation models; Algorithms; Computational modeling; Constraint modelling; Data models; Device heterogeneity; edge intelligence; Federated learning; Heterogeneity; Internet of Things; Machine learning; Performance evaluation; semi-supervised learning; Servers; Training |
| URI | https://ieeexplore.ieee.org/document/10254288; https://www.proquest.com/docview/3033620842 |