On convergence of a q-random coordinate constrained algorithm for non-convex problems

Bibliographic Details
Published in: Journal of Global Optimization, Vol. 90, no. 4, pp. 843-868
Main Authors: Ghaffari-Hadigheh, A., Sinjorgo, L., Sotirov, R.
Format: Journal Article
Language: English
Published: New York: Springer US, 01.12.2024 (Springer Nature B.V.)
ISSN: 0925-5001
eISSN: 1573-2916
DOI: 10.1007/s10898-024-01429-6

Summary: We propose a random coordinate descent algorithm for optimizing a non-convex objective function subject to one linear constraint and simple bounds on the variables. Although it is common to update only two random coordinates per iteration of a coordinate descent algorithm, our algorithm allows updating an arbitrary number of coordinates. We provide a proof of convergence of the algorithm, and the convergence rate improves when more coordinates are updated per iteration. Numerical experiments on large-scale instances of different optimization problems show the benefit of updating many coordinates simultaneously.
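A minimal sketch of the idea described in the abstract, in Python with NumPy. It assumes the linear constraint is sum(x) = b and uses a projected negative-gradient direction with backtracking as the coordinate update; these choices, and all function names, are illustrative assumptions, not the authors' exact update rule or step-size scheme.

```python
import numpy as np

def q_rcd_step(f, grad, x, idx, lo, hi, step=1.0, shrink=0.5, max_ls=30):
    """Update the q coordinates in idx while keeping sum(x) fixed and
    respecting the box bounds lo <= x <= hi. Illustrative sketch only."""
    g = grad(x)[idx]
    d = -(g - g.mean())              # project -g onto {d : sum(d) = 0}
    if np.allclose(d, 0.0):
        return x
    # Largest step that keeps the chosen coordinates inside the box.
    t_max = step
    for di, xi, li, ui in zip(d, x[idx], lo[idx], hi[idx]):
        if di > 0:
            t_max = min(t_max, (ui - xi) / di)
        elif di < 0:
            t_max = min(t_max, (li - xi) / di)
    t = max(t_max, 0.0)
    # Backtracking: accept the first step that decreases f.
    fx = f(x)
    for _ in range(max_ls):
        y = x.copy()
        y[idx] += t * d
        if f(y) < fx:
            return y
        t *= shrink
    return x

def q_rcd(f, grad, x0, lo, hi, q=2, iters=500, seed=0):
    """Random coordinate descent: each iteration draws q coordinates
    uniformly at random and applies the feasibility-preserving step."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float).copy()
    n = x.size
    for _ in range(iters):
        idx = rng.choice(n, size=q, replace=False)
        x = q_rcd_step(f, grad, x, idx, lo, hi)
    return x
```

Because the direction d sums to zero over the chosen coordinates, every accepted step preserves the linear constraint exactly (up to floating-point error), which is why at least q = 2 coordinates must move together; larger q simply enlarges the subspace searched per iteration.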