Distributed subgradient methods and quantization effects

Bibliographic Details
Published in: 2008 47th IEEE Conference on Decision and Control, pp. 4177–4184
Main Authors: Nedic, A., Olshevsky, A., Ozdaglar, A., Tsitsiklis, J.N.
Format: Conference Proceeding
Language: English
Published: IEEE, 01.01.2008
ISBN: 9781424431236, 1424431239
ISSN: 0191-2216
DOI: 10.1109/CDC.2008.4738860

Summary: We consider a convex unconstrained optimization problem that arises in a network of agents whose goal is to cooperatively optimize the sum of the individual agent objective functions through local computations and communications. For this problem, we use averaging algorithms to develop distributed subgradient methods that can operate over a time-varying topology. Our focus is on the convergence rate of these methods and the degradation in performance when only quantized information is available. Based on our recent results on the convergence time of distributed averaging algorithms, we derive improved upper bounds on the convergence rate of the unquantized subgradient method. We then propose a distributed subgradient method under the additional constraint that agents can only store and communicate quantized information, and we provide bounds on its convergence rate that highlight the dependence on the number of quantization levels.
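The iteration described in the summary can be sketched in a few lines: each agent averages its neighbors' states, takes a subgradient step on its local objective, and stores only a quantized value. The Python sketch below illustrates that general shape; the quadratic local objectives, fixed uniform weight matrix, constant step size, and floor quantizer are all illustrative assumptions, not the paper's exact construction or analysis.

```python
import numpy as np

# Minimal sketch of a quantized distributed subgradient iteration:
#   x_i(k+1) = Q( sum_j w_ij * x_j(k) - alpha * g_i(k) )
# All modeling choices here (quadratic local objectives, a fixed uniform
# weight matrix, constant step size, floor quantizer) are assumptions
# made for illustration only.

n = 5                                   # number of agents
rng = np.random.default_rng(0)
c = rng.uniform(-1.0, 1.0, size=n)      # f_i(x) = 0.5*(x - c_i)^2, so the
                                        # optimum of sum_i f_i is c.mean()
alpha = 0.05                            # constant step size
delta = 2.0 ** -8                       # quantizer resolution (level spacing)

def quantize(x):
    """Round down to the nearest level of a uniform quantization grid."""
    return np.floor(x / delta) * delta

# Doubly stochastic averaging weights; a time-varying topology would draw
# a new stochastic matrix at every iteration instead of reusing this one.
W = np.full((n, n), 1.0 / n)

x = quantize(rng.uniform(-1.0, 1.0, size=n))   # agents store quantized states
for _ in range(2000):
    g = x - c                                   # subgradient of each local f_i
    x = quantize(W @ x - alpha * g)             # average, step, re-quantize

print("agent estimates:", x)
print("true optimum   :", c.mean())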
```

Under these assumptions the agents settle within a neighborhood of the optimum whose size grows with the step size alpha and the quantizer spacing delta, which is qualitatively the dependence on the number of quantization levels that the paper's bounds make precise.