A faster generalized ADMM-based algorithm using a sequential updating scheme with relaxed step sizes for multiple-block linearly constrained separable convex programming


Bibliographic Details
Published in: Journal of Computational and Applied Mathematics, Vol. 393, p. 113503
Main Authors: Shen, Yuan; Zuo, Yannian; Zhang, Xiayang
Format: Journal Article
Language: English
Published: Elsevier B.V., 01.09.2021
ISSN: 0377-0427, 1879-1778
DOI: 10.1016/j.cam.2021.113503


More Information
Summary: Multi-block linearly constrained separable convex optimization arises frequently in applications including image/signal processing, statistical learning, and data mining; the objective function is the sum of multiple individual convex functions, and the constraints are linear. A classical approach to solving such problems is the alternating direction method of multipliers (ADMM), which decomposes the problem into a series of small-scale subproblems so that the per-iteration cost can be low. ADMM, however, was originally designed for the two-block model, and its convergence cannot be guaranteed for a general multi-block model without additional assumptions. Dai et al. (2017) proposed the SUSLM algorithm (Sequential Updating Scheme of the Lagrange Multiplier) for separable convex programming problems: the Lagrange multipliers are updated several times within each iteration, and, in order to establish convergence, a correction step is imposed at the end of each iteration. In this paper, we improve the SUSLM algorithm by introducing two controlled parameters into the updating expressions for the decision variables and the Lagrange multipliers, thereby relaxing the condition on the step sizes. We show experimentally that the improved algorithm converges faster than SUSLM. Moreover, comparisons on robust principal component analysis (RPCA) show better performance than other ADMM-based algorithms.
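
To make the sequential updating scheme concrete, the sketch below runs an ADMM-style iteration on a three-block toy problem with quadratic objectives, min sum_i 0.5*||x_i - c_i||^2 subject to A1 x1 + A2 x2 + A3 x3 = b, in which the multiplier is refreshed after every block update and a simple averaging correction step closes each iteration. This is only an illustration under assumed parameter values (beta, tau, alpha) and an assumed form of the correction step; the exact SUSLM update rules, the two controlled parameters, and the relaxed step-size conditions are specified in the paper.

import numpy as np

# Toy 3-block problem: min sum_i 0.5*||x_i - c_i||^2  s.t.  sum_i A_i x_i = b.
rng = np.random.default_rng(0)
m, n = 20, 10
A = [rng.standard_normal((m, n)) for _ in range(3)]  # constraint blocks A_i
c = [rng.standard_normal(n) for _ in range(3)]       # centers of the f_i
b = rng.standard_normal(m)

beta = 1.0   # augmented-Lagrangian penalty (assumed value)
tau = 0.5    # multiplier step size (assumed; the paper relaxes its admissible range)
alpha = 0.9  # correction-step weight (assumed form of the correction step)

x = [np.zeros(n) for _ in range(3)]
lam = np.zeros(m)

def residual(x):
    # Constraint violation A1 x1 + A2 x2 + A3 x3 - b.
    return sum(A[i] @ x[i] for i in range(3)) - b

for k in range(500):
    x_old = [xi.copy() for xi in x]
    for i in range(3):
        # Block subproblem: minimize f_i plus the augmented term with the other
        # blocks fixed; the quadratic f_i gives the closed-form solve
        # (I + beta*A_i^T A_i) x_i = c_i - A_i^T lam + beta*A_i^T (b - r_i).
        r_i = sum(A[j] @ x[j] for j in range(3) if j != i)
        rhs = c[i] - A[i].T @ lam + beta * A[i].T @ (b - r_i)
        x[i] = np.linalg.solve(np.eye(n) + beta * A[i].T @ A[i], rhs)
        # Sequential updating scheme: refresh the multiplier after EVERY block,
        # not once per sweep as in classical ADMM.
        lam = lam + tau * beta * residual(x)
    # Correction step (illustrative averaging): move a fraction alpha from the
    # previous iterate toward the point produced by the sweep above.
    x = [(1 - alpha) * xo + alpha * xn for xo, xn in zip(x_old, x)]

print("constraint violation:", np.linalg.norm(residual(x)))

With these toy parameters the iteration typically drives the constraint violation toward zero on random instances, but convergence for general step sizes is exactly what the paper's relaxed conditions characterize.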