Quantization for Decentralized Learning Under Subspace Constraints

Bibliographic Details
Published in: IEEE Transactions on Signal Processing, Vol. 71, pp. 2320-2335
Main Authors: Nassif, Roula; Vlaski, Stefan; Carpentiero, Marco; Matta, Vincenzo; Antonini, Marc; Sayed, Ali H.
Format: Journal Article
Language: English
Published: New York: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 2023
ISSN: 1053-587X, 1941-0476
DOI: 10.1109/TSP.2023.3287333

Summary: In this article, we consider decentralized optimization problems where agents have individual cost functions to minimize subject to subspace constraints that require the minimizers across the network to lie in low-dimensional subspaces. This constrained formulation includes consensus or single-task optimization as special cases, and allows for more general task-relatedness models such as multitask smoothness and coupled optimization. In order to cope with communication constraints, we propose and study an adaptive decentralized strategy where the agents employ differential randomized quantizers to compress their estimates before communicating with their neighbors. The analysis shows that, under some general conditions on the quantization noise, and for sufficiently small step-sizes $\mu$, the strategy is stable both in terms of mean-square error and average bit rate: by reducing $\mu$, it is possible to keep the estimation errors small (on the order of $\mu$) without the bit rate growing indefinitely as $\mu \rightarrow 0$ when variable-rate quantizers are used. Simulations illustrate the theoretical findings and the effectiveness of the proposed approach, revealing that decentralized learning is achievable at the expense of only a few bits.
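To make the mechanism in the summary concrete, below is a minimal Python sketch of one iteration of a quantized decentralized step, loosely patterned on the adapt-compress-combine structure the abstract describes. It is not the authors' exact recursion: the subtractive-dithered quantizer is only one example satisfying general quantization-noise conditions, and the combination matrix A, the projector P, and the function names (dithered_quantizer, quantized_decentralized_step) are illustrative assumptions; the precise algorithm, noise conditions, and subspace model are in the full text.

```python
import numpy as np

rng = np.random.default_rng(0)


def dithered_quantizer(x, step):
    """Subtractive-dithered uniform quantizer: the quantization error is
    zero-mean and bounded, one concrete instance of the kind of general
    noise conditions the paper assumes."""
    dither = rng.uniform(-step / 2, step / 2, size=x.shape)
    return step * np.round((x + dither) / step) - dither


def quantized_decentralized_step(w, w_hat, grads, A, P, mu, q_step):
    """One illustrative iteration for N agents holding M-dim estimates.

    w      -- (N, M) current local estimates
    w_hat  -- (N, M) quantized reference states known to all neighbors
    grads  -- list of N gradient callables, one per local cost
    A      -- (N, N) combination (mixing) matrix of the network graph
    P      -- (M, M) projector onto the constraint subspace
    mu     -- step-size
    q_step -- quantizer resolution
    """
    N = w.shape[0]
    # 1) Adaptation: each agent descends along its own cost gradient.
    psi = np.stack([w[k] - mu * grads[k](w[k]) for k in range(N)])
    # 2) Differential quantization: only the innovation relative to the
    #    last shared reference is quantized and transmitted, so the
    #    message magnitude shrinks as the network converges.
    w_hat = w_hat + dithered_quantizer(psi - w_hat, q_step)
    # 3) Combination + projection: mix the reconstructed neighbor states
    #    and project back onto the constraint subspace.
    w_new = (A @ w_hat) @ P.T
    return w_new, w_hat


# Toy run (consensus special case): 4 agents with quadratic costs
# 0.5 * ||x - target_k||^2; uniform averaging drives the estimates
# toward the mean of the targets despite the quantized links.
N, M = 4, 2
targets = rng.normal(size=(N, M))
grads = [lambda x, t=targets[k]: x - t for k in range(N)]
A = np.full((N, N), 1.0 / N)   # fully connected, uniform weights
P = np.eye(M)                  # consensus: no extra subspace restriction
w = np.zeros((N, M))
w_hat = np.zeros((N, M))
for _ in range(200):
    w, w_hat = quantized_decentralized_step(
        w, w_hat, grads, A, P, mu=0.1, q_step=0.05)
```

In this toy run the residual error stays on the order of the step-size and quantizer resolution, which mirrors the abstract's claim that shrinking $\mu$ keeps the estimation errors small without the bit rate of the differential quantizer growing without bound.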