Online convex optimization using coordinate descent algorithms

Bibliographic Details
Published in: Automatica (Oxford), Vol. 165, p. 111681
Main Authors: Lin, Yankai, Shames, Iman, Nešić, Dragan
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.07.2024
ISSN: 0005-1098
1873-2836
DOI: 10.1016/j.automatica.2024.111681

More Information
Summary: This paper considers the problem of online optimization where the objective function is time-varying. In particular, we extend coordinate descent-type algorithms to the online case, where the objective function varies after a finite number of iterations of the algorithm. Instead of solving the problem exactly at each time step, we apply only a finite number of iterations at each time step. Commonly used notions of regret measure the performance of the online algorithm. Moreover, coordinate descent algorithms with different updating rules are considered, including both deterministic and stochastic rules developed in the literature on classical offline optimization. A thorough regret analysis is given for each case. Finally, numerical simulations are provided to illustrate the theoretical results.
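The setting described in the abstract can be illustrated with a minimal sketch (not taken from the paper): a cyclic coordinate descent rule is applied for a fixed number of iterations per time step against a sequence of time-varying quadratic objectives f_t(x) = 0.5·||x − c_t||², with the loss incurred before each round of updates. The objective, step size, and update counts here are illustrative assumptions.

```python
import numpy as np

def f(x, c):
    # Time-varying quadratic objective: f_t(x) = 0.5 * ||x - c_t||^2
    return 0.5 * np.sum((x - c) ** 2)

def online_cd(centers, x0, step=1.0, iters_per_step=2):
    """At each time step t, incur the loss f_t(x_t), then apply a fixed
    number of cyclic coordinate descent iterations to f_t before the
    objective changes (illustrative sketch, not the paper's algorithm)."""
    x = x0.astype(float).copy()
    losses = []
    for c in centers:
        losses.append(f(x, c))                # loss incurred at current iterate
        for k in range(iters_per_step):
            i = k % x.size                    # deterministic cyclic coordinate rule
            x[i] -= step * (x[i] - c[i])      # coordinate gradient step; exact when step=1
    return x, losses

# Three time steps; the objective's minimizer c_t moves between steps.
centers = [np.array([1.0, -1.0]), np.array([2.0, 0.0]), np.array([2.0, 0.0])]
x_final, losses = online_cd(centers, x0=np.zeros(2), step=1.0, iters_per_step=2)

# Since f_t(c_t) = 0, the dynamic regret here is simply the sum of incurred losses.
dynamic_regret = sum(losses)
```

With step = 1 and one full cyclic sweep per time step, each coordinate update is an exact coordinate minimization, so the iterate reaches the current minimizer before the objective changes; with fewer iterations per step, a tracking error (and hence regret) accumulates, which is the trade-off the paper's regret analysis quantifies.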