Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey

Bibliographic Details
Published in: Optimization for Machine Learning, p. 85
Main Author: Dimitri P. Bertsekas
Format: Book Chapter
Language: English
Published: United States, The MIT Press, 30.09.2011
ISBN: 026201646X, 9780262016469
DOI: 10.7551/mitpress/8996.003.0006

Summary: We consider optimization problems with a cost function consisting of a large number of component functions, such as minimize $\sum_{i=1}^{m} f_i(x)$ subject to $x \in X$, (4.1) where $f_i : \mathbb{R}^n \mapsto \mathbb{R}$, $i = 1, \ldots, m$, are real-valued functions, and $X$ is a closed convex set.¹ We focus on the case where the number of components $m$ is very large, and there is an incentive to use incremental methods that operate on a single component $f_i$ at each iteration, rather than on the entire cost function. If each incremental iteration tends to make reasonable progress in some “average” sense, then, depending …
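
The summary describes the basic incremental idea: at each iteration, update $x$ using a single component $f_i$ of the sum and project back onto $X$, rather than using the full cost. The following is a minimal illustrative sketch of a cyclic incremental gradient iteration with projection, not code from the chapter; the names `incremental_gradient`, `component_grads`, and `project`, and the toy least-squares usage, are assumptions chosen for illustration.

```python
import numpy as np

def incremental_gradient(component_grads, x0, project=lambda x: x,
                         step=1e-2, num_epochs=100):
    """Cyclic incremental gradient sketch for minimizing sum_i f_i(x) over X.

    component_grads: list of callables, each returning the gradient of one f_i.
    project: projection onto the closed convex set X (identity if X = R^n).
    """
    x = x0
    for _ in range(num_epochs):
        for grad_i in component_grads:          # cycle through f_1, ..., f_m
            x = project(x - step * grad_i(x))   # step on a single component only
    return x

# Hypothetical toy usage: minimize sum_i 0.5 * (a_i^T x - b_i)^2 over x >= 0.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 3)), rng.normal(size=50)
grads = [lambda x, a=a_i, bi=b_i: a * (a @ x - bi) for a_i, b_i in zip(A, b)]
x_hat = incremental_gradient(grads, np.zeros(3),
                             project=lambda x: np.maximum(x, 0.0))
```

With a diminishing or suitably small constant step size, each pass over the components plays the role of one (approximate) full-gradient step, which is where the per-iteration cost advantage for large $m$ comes from.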