Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks
| Published in | arXiv.org |
|---|---|
| Main Authors | Behmandpoor, Pourya; Moonen, Marc; Patrinos, Panagiotis |
| Format | Paper; Journal Article |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 02.12.2024 |
| Subjects | Communication; Communication networks; Communications networks; Computer Science - Learning; Computer Science - Multiagent Systems; Convergence; Deep learning; Mathematics - Optimization and Control; Optimization; Resource allocation; Transmitters |
| Online Access | Get full text |
| ISSN | 2331-8422 |
| DOI | 10.48550/arxiv.2311.04604 |
| Abstract | Distributed learning and adaptation have received significant interest and found wide-ranging applications in machine learning and signal processing. While various approaches, such as shared-memory optimization, multi-task learning, and consensus-based learning (e.g., federated learning and learning over graphs), focus on optimizing either local costs or a global cost, there remains a need for further exploration of their interconnections. This paper specifically focuses on a scenario where agents collaborate towards a common task (i.e., optimizing a global cost equal to aggregated local costs) while effectively having distinct individual tasks (i.e., optimizing individual local parameters in a local cost). Each agent's actions can potentially impact other agents' performance through interactions. Notably, each agent has access to only its local zeroth-order oracle (i.e., cost function value) and shares scalar values, rather than gradient vectors, with other agents, leading to communication bandwidth efficiency and agent privacy. Agents employ zeroth-order optimization to update their parameters, and the asynchronous message-passing between them is subject to bounded but possibly random communication delays. This paper presents theoretical convergence analyses and establishes a convergence rate for nonconvex problems. Furthermore, it addresses the relevant use-case of deep learning-based resource allocation in communication networks and conducts numerical experiments in which agents, acting as transmitters, collaboratively train their individual policies to maximize a global reward, e.g., a sum of data rates. | 
    
|---|---|
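The abstract above describes agents that update their parameters using only local zeroth-order (function-value) oracles, exchange scalar values rather than gradient vectors, and communicate asynchronously over links with bounded but possibly random delays. The following is a minimal illustrative sketch of that idea, not the authors' algorithm: the quadratic costs, the coupling matrix `A`, and all names (`local_cost`, `MU`, `STEP`, `DELAY_MAX`) are assumptions made purely for the example. It shows forward-difference zeroth-order probes in which each agent perturbs only its own parameters, every agent replies with a scalar cost difference, and replies arrive with a random bounded delay.

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(0)

# Toy problem (assumed for illustration): N agents, each owning a parameter
# vector theta_i. Each local cost depends on all agents' parameters through a
# coupling matrix A; the global cost is the sum of the local costs.
N, DIM = 3, 4
A = 0.1 * rng.standard_normal((N, N))

def local_cost(i, thetas):
    """Zeroth-order oracle of agent i: returns only a scalar cost value."""
    coupling = sum(A[i, j] * thetas[j] for j in range(N))
    return 0.5 * float(np.linalg.norm(thetas[i] + coupling) ** 2)

MU, STEP, DELAY_MAX, ITERS = 1e-1, 5e-2, 2, 400
thetas = [rng.standard_normal(DIM) for _ in range(N)]

pending = defaultdict(list)   # (agent, probe_iter) -> [(deliver_at, scalar), ...]
directions = {}               # (agent, probe_iter) -> perturbation direction

for t in range(ITERS):
    for i in range(N):
        # Agent i probes the global cost along a random direction by
        # perturbing only its own parameters (local zeroth-order probe).
        u = rng.standard_normal(DIM)
        directions[(i, t)] = u
        trial = list(thetas)
        trial[i] = thetas[i] + MU * u

        # Every agent j evaluates its own local cost before/after the probe
        # and replies with the scalar difference, delayed by a bounded,
        # random number of iterations (asynchronous message-passing).
        for j in range(N):
            diff = local_cost(j, trial) - local_cost(j, thetas)
            deliver_at = t + int(rng.integers(0, DELAY_MAX + 1))
            pending[(i, t)].append((deliver_at, diff))

    # An agent updates once all scalar replies for one of its probes have
    # arrived; the feedback may be stale, but it is matched to the stored
    # probe direction, giving a forward-difference gradient estimate.
    for i in range(N):
        for key in [k for k in directions if k[0] == i]:
            msgs = pending[key]
            if msgs and all(due <= t for due, _ in msgs):
                global_diff = sum(d for _, d in msgs)   # ~ F(trial) - F(thetas)
                grad_est = (global_diff / MU) * directions.pop(key)
                thetas[i] = thetas[i] - STEP * grad_est
                del pending[key]

print("final global cost:", sum(local_cost(i, thetas) for i in range(N)))
```

Because each probe direction is stored until every scalar reply for it has been delivered, stale feedback is still paired with the direction that produced it; behaviour under exactly this kind of bounded communication delay, for nonconvex costs, is what the paper's convergence analysis addresses.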
| Author | Behmandpoor, Pourya; Moonen, Marc; Patrinos, Panagiotis |
    
| BackLink | https://doi.org/10.48550/arXiv.2311.04604$$DView paper in arXiv https://doi.org/10.1109/TSIPN.2024.3487421$$DView published paper (Access to full text may be restricted)  | 
    
| ContentType | Paper; Journal Article |
    
| Copyright | 2024. This work is published under http://creativecommons.org/licenses/by/4.0/ (the “License”). Notwithstanding the ProQuest Terms and Conditions, you may use this content in accordance with the terms of the License. http://creativecommons.org/licenses/by/4.0  | 
    
| DOI | 10.48550/arxiv.2311.04604 | 
    
| DatabaseName | ProQuest SciTech Collection ProQuest Technology Collection Materials Science & Engineering Collection ProQuest Central (Alumni) ProQuest Central UK/Ireland ProQuest Central Essentials ProQuest Central ProQuest Technology Collection ProQuest One Community College ProQuest Central SciTech Premium Collection ProQuest Engineering Collection Engineering Database ProQuest Central Premium ProQuest One Academic (New) ProQuest Publicly Available Content Database ProQuest One Academic Middle East (New) ProQuest One Academic Eastern Edition (DO NOT USE) ProQuest One Applied & Life Sciences ProQuest One Academic ProQuest One Academic UKI Edition ProQuest Central China Engineering Collection arXiv Computer Science arXiv Mathematics arXiv.org  | 
    
| DatabaseTitle | Publicly Available Content Database Engineering Database Technology Collection ProQuest One Academic Middle East (New) ProQuest Central Essentials ProQuest One Academic Eastern Edition ProQuest Central (Alumni Edition) SciTech Premium Collection ProQuest One Community College ProQuest Technology Collection ProQuest SciTech Collection ProQuest Central China ProQuest Central ProQuest One Applied & Life Sciences ProQuest Engineering Collection ProQuest One Academic UKI Edition ProQuest Central Korea Materials Science & Engineering Collection ProQuest Central (New) ProQuest One Academic ProQuest One Academic (New) Engineering Collection  | 
    
| DatabaseTitleList | Publicly Available Content Database | 
    
| DeliveryMethod | fulltext_linktorsrc | 
    
| Discipline | Physics | 
    
| EISSN | 2331-8422 | 
    
| ExternalDocumentID | 2311_04604 | 
    
| Genre | Working Paper/Pre-Print | 
    
| IsDoiOpenAccess | true | 
    
| IsOpenAccess | true | 
    
| IsPeerReviewed | false | 
    
| IsScholarly | false | 
    
| Language | English | 
    
| LinkModel | DirectLink | 
    
| Notes | SourceType: Working Papers; ObjectType: Working Paper/Pre-Print |
    
| OpenAccessLink | https://www.proquest.com/docview/2887708108?pq-origsite=%requestingapplication%&accountid=15518 | 
    
| PQID | 2887708108 | 
    
| PQPubID | 2050157 | 
    
| ParticipantIDs | arxiv_primary_2311_04604 proquest_journals_2887708108  | 
    
| PublicationCentury | 2000 | 
    
| PublicationDate | 20241202 | 
    
| PublicationDateYYYYMMDD | 2024-12-02 | 
    
| PublicationDecade | 2020 | 
    
| PublicationPlace | Ithaca | 
    
| PublicationTitle | arXiv.org | 
    
| PublicationYear | 2024 | 
    
| Publisher | Cornell University Library, arXiv.org | 
    
| SecondaryResourceType | preprint | 
    
| SourceID | arxiv proquest  | 
    
| SourceType | Open Access Repository Aggregation Database  | 
    
| SubjectTerms | Communication; Communication networks; Communications networks; Computer Science - Learning; Computer Science - Multiagent Systems; Convergence; Deep learning; Mathematics - Optimization and Control; Optimization; Resource allocation; Transmitters |
    
| Title | Asynchronous Message-Passing and Zeroth-Order Optimization Based Distributed Learning with a Use-Case in Resource Allocation in Communication Networks | 
    
| URI | https://www.proquest.com/docview/2887708108 https://arxiv.org/abs/2311.04604  | 
    
| hasFullText | 1 | 
    
| inHoldings | 1 | 
    
| isFullTextHit | |
| isPrint | |
| linkProvider | ProQuest | 
    