Distributed Learning over Networks under Subspace Constraints
| Published in | 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 194-198 |
|---|---|
| Main Authors | |
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 01.11.2019 |
| ISSN | 2576-2303 |
| DOI | 10.1109/IEEECONF44664.2019.9049074 |
Summary: This work presents and studies a distributed algorithm for solving optimization problems over networks, where agents have individual costs to minimize subject to subspace constraints requiring the minimizers across the network to lie in a low-dimensional subspace. The algorithm consists of two steps: i) a self-learning step, in which each agent minimizes its own cost via a stochastic-gradient update; and ii) a social-learning step, in which each agent combines the updated estimates from its neighbors using the entries of a combination matrix that converges, in the limit, to the projection onto the low-dimensional subspace. We obtain analytical formulas that reveal how the step-size, the statistical properties of the data, the gradient noise, and the subspace constraints influence the network mean-square-error (MSE) performance. The results also show that, in the small step-size regime, the iterates generated by the distributed algorithm achieve the steady-state MSE performance of the centralized solution. Simulations illustrate the theoretical findings.
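
To make the two-step structure concrete, below is a minimal, self-contained Python/NumPy sketch of an adapt-then-combine iteration of the kind described in the summary. It is not the authors' implementation: the quadratic (mean-square-error) per-agent costs, the network sizes, the step-size `mu`, and the particular combination matrix `A = P_U + eps*(I - P_U)` (a toy choice whose powers converge to the projector `P_U` onto the constraint subspace) are all illustrative assumptions. A genuinely distributed implementation would additionally require `A` to be sparse in accordance with the network topology.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy problem setup (all sizes and data are illustrative assumptions) ---
N, M, P = 10, 5, 2          # agents, per-agent dimension, subspace dimension
mu = 0.01                   # step-size

# Orthonormal basis U (N*M x P) of the low-dimensional constraint subspace
U, _ = np.linalg.qr(rng.standard_normal((N * M, P)))
P_U = U @ U.T               # projector onto span(U)

# Toy combination matrix whose powers converge to P_U:
# A = P_U + eps*(I - P_U)  =>  A^i = P_U + eps^i (I - P_U) -> P_U
eps = 0.5
A = P_U + eps * (np.eye(N * M) - P_U)

# Per-agent streaming data d_k = u_k^T w_k^o + noise, with the network
# minimizer w^o lying in span(U), so the subspace constraint is satisfiable.
w_star = U @ rng.standard_normal(P)          # ground truth in the subspace

w = np.zeros(N * M)                          # stacked network iterate
for i in range(5000):
    psi = np.empty_like(w)
    for k in range(N):
        wk = w[k * M:(k + 1) * M]
        u_k = rng.standard_normal(M)         # streaming regressor
        d_k = u_k @ w_star[k * M:(k + 1) * M] + 0.1 * rng.standard_normal()
        # i) self-learning step: stochastic gradient of E|d_k - u_k^T w|^2
        grad_hat = -2 * u_k * (d_k - u_k @ wk)
        psi[k * M:(k + 1) * M] = wk - mu * grad_hat
    # ii) social-learning step: combine neighbors' intermediate estimates
    w = A @ psi

msd = np.mean((w - w_star) ** 2)             # network mean-square deviation
print(f"steady-state MSD ~ {msd:.2e}")
```

Consistent with the small step-size analysis summarized above, rerunning this sketch with a smaller `mu` lowers the steady-state mean-square deviation (at the cost of slower convergence), since the iterates hover closer to the subspace-constrained minimizer.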