S-NEAR-DGD: A Flexible Distributed Stochastic Gradient Method for Inexact Communication
| Published in | arXiv.org |
|---|---|
| Main Authors | , |
| Format | Paper |
| Language | English |
| Published | Ithaca: Cornell University Library, arXiv.org, 30.01.2021 |
| Subjects | |
| Online Access | Get full text |
| ISSN | 2331-8422 |
| Summary | We present and analyze a stochastic distributed method (S-NEAR-DGD) that can tolerate inexact computation and inaccurate information exchange to alleviate the problems of costly gradient evaluations and bandwidth-limited communication in large-scale systems. Our method is based on a class of flexible, distributed first order algorithms that allow for the trade-off of computation and communication to best accommodate the application setting. We assume that all the information exchange between nodes is subject to random distortion and that only stochastic approximations of the true gradients are available. Our theoretical results prove that the proposed algorithm converges linearly in expectation to a neighborhood of the optimal solution for strongly convex objective functions with Lipschitz gradients. We characterize the dependence of this neighborhood on algorithm and network parameters, the quality of the communication channel and the precision of the stochastic gradient approximations used. Finally, we provide numerical results to evaluate the empirical performance of our method. |
|---|---|
| Bibliography | Working Paper / Pre-Print |