Smoothing gradient descent algorithm for the composite sparse optimization

Bibliographic Details
Published in: AIMS Mathematics, Vol. 9, No. 12, pp. 33401-33422
Main Authors: Yang, Wei; Pan, Lili; Wan, Jinhui
Format: Journal Article
Language: English
Published: AIMS Press, 01.01.2024
ISSN: 2473-6988
DOI: 10.3934/math.20241594

Summary: Composite sparsity generalizes standard sparsity by considering sparsity on a linear transformation of the variables. In this paper, we study the composite sparse optimization problem of minimizing the sum of a nondifferentiable loss function and the $\ell_0$ penalty of a matrix times the coefficient vector. First, we consider an exact continuous relaxation problem with a capped-$\ell_1$ penalty that has the same optimal solutions as the primal problem. Specifically, we introduce the lifted stationary points of the relaxation problem and then establish the equivalence of the original and relaxed problems. Second, we propose a smoothing gradient descent (SGD) algorithm for the continuous relaxation problem, which solves the subproblem inexactly since the objective function is inseparable. We show that any accumulation point of the sequence generated by the SGD algorithm is a lifted stationary point. Finally, we present several computational examples to illustrate the efficiency of the algorithm.
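For orientation, the problem described in the summary can be written out as follows. This is a sketch assuming the standard capped-$\ell_1$ form used in this literature; the paper's exact notation and constants may differ. The composite sparse problem is

$$ \min_{x \in \mathbb{R}^n} \; f(x) + \lambda \|Ax\|_0, $$

where $f$ is the nondifferentiable loss, $A \in \mathbb{R}^{m \times n}$ encodes the linear transformation, $\lambda > 0$, and $\|z\|_0$ counts the nonzero entries of $z$. The capped-$\ell_1$ relaxation replaces each indicator of a nonzero entry with the continuous surrogate

$$ \phi_\nu(t) = \min\{1, |t|/\nu\}, \qquad \nu > 0, $$

giving the relaxed problem $ \min_{x} \; f(x) + \lambda \sum_{i=1}^{m} \phi_\nu(a_i^\top x) $, where $a_i^\top$ is the $i$-th row of $A$. Since $\phi_\nu(t)$ equals the $\ell_0$ count exactly whenever $|t| \geq \nu$, an exact-relaxation result (identical optimal solutions for suitably small $\nu$) is the natural outcome this construction aims at.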
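The SGD algorithm itself is specified in the paper; the following is only a minimal, self-contained Python sketch of the general smoothing-plus-gradient-descent idea. Every concrete choice here is an illustrative assumption rather than the authors' method: the $\ell_1$ loss $f(x) = \|Bx - b\|_1$, the Huber-type smoothing, the subgradient used for the capped-$\ell_1$ term, the first-difference matrix $A$, and the step-size rule. In particular, the paper's inexact subproblem solve is replaced by plain gradient steps.

import numpy as np

def smoothed_abs(r, mu):
    # Huber-type smoothing of |r|: quadratic on [-mu, mu], linear outside.
    a = np.abs(r)
    return np.where(a <= mu, r**2 / (2 * mu) + mu / 2, a)

def smoothed_abs_grad(r, mu):
    # Gradient of the smoothed absolute value, bounded in [-1, 1].
    return np.clip(r / mu, -1.0, 1.0)

def capped_l1(t, nu):
    # Assumed standard capped-l1 surrogate for the l0 count: min(1, |t|/nu).
    return np.minimum(1.0, np.abs(t) / nu)

def capped_l1_subgrad(t, nu):
    # A subgradient: sign(t)/nu inside the cap, 0 once the cap is active.
    g = np.sign(t) / nu
    g[np.abs(t) >= nu] = 0.0
    return g

def smoothing_gradient_descent(B, b, A, lam=0.1, nu=0.5,
                               mu=1.0, sigma=0.8,
                               base_step=5e-3, inner=500, outer=25):
    # Illustrative scheme: gradient steps on the smoothed relaxed objective
    # f_mu(x) + lam * sum(capped_l1(A @ x)), then shrink mu geometrically.
    # (The paper's SGD algorithm solves an inexact subproblem instead.)
    x = np.zeros(B.shape[1])
    for _ in range(outer):
        for _ in range(inner):
            r = B @ x - b
            g = B.T @ smoothed_abs_grad(r, mu)                 # smoothed loss gradient
            g = g + lam * A.T @ capped_l1_subgrad(A @ x, nu)   # penalty subgradient
            x = x - (base_step * mu) * g                       # step shrinks with mu
        mu *= sigma
    return x

# Tiny synthetic test: x_true is piecewise constant, so A @ x_true is sparse
# when A is the first-difference matrix (a hypothetical choice for this demo).
rng = np.random.default_rng(0)
n = 20
A = np.eye(n - 1, n) - np.eye(n - 1, n, k=1)   # (Ax)_i = x_i - x_{i+1}
x_true = np.repeat([1.0, -2.0], n // 2)
B = rng.standard_normal((40, n))
b = B @ x_true                                  # noiseless l1-regression data
x_hat = smoothing_gradient_descent(B, b, A)
obj = np.sum(np.abs(B @ x_hat - b)) + 0.1 * np.sum(capped_l1(A @ x_hat, 0.5))
print(np.round(x_hat, 2))
print("relaxed objective:", round(float(obj), 4))

Scaling the step with the smoothing parameter mu is one common way to drive the smoothing to zero without destabilizing the iteration, since the gradient of the smoothed loss becomes increasingly ill-conditioned as mu shrinks.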