Feature selection in machine learning: an exact penalty approach using a Difference of Convex function Algorithm

Bibliographic Details
Published in: Machine Learning, Vol. 101, No. 1–3, pp. 163–186
Main Authors: Le Thi, Hoai An; Le, Hoai Minh; Pham Dinh, Tao
Format: Journal Article
Language: English
Published: New York: Springer US, 01.10.2015
ISSN: 0885-6125; 1573-0565
DOI: 10.1007/s10994-014-5455-y


More Information
Summary: We develop an exact penalty approach for feature selection in machine learning via the zero-norm (ℓ0) regularization problem. Using a new result on exact penalty techniques, we equivalently reformulate the original problem as a Difference of Convex (DC) functions program. This approach lets us treat all the existing convex and nonconvex approximations of the zero-norm in a unified view within the DC programming and DCA framework. An efficient DCA scheme is investigated for the resulting DC program. The algorithm is implemented for feature selection in SVM; it requires solving one linear program at each iteration and enjoys interesting convergence properties. We perform an empirical comparison with some nonconvex approximation approaches and show, using several datasets from the UCI repository and the NIPS 2003 feature-selection challenge, that the proposed algorithm is efficient in both feature selection and classification.
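The summary describes a DCA scheme whose every iteration solves one linear program. The sketch below illustrates that idea under assumptions not taken from the record: it uses the capped-ℓ1 function as one concrete concave surrogate of the zero-norm (the paper covers a family of such approximations), the hinge loss for the SVM, and SciPy's `linprog` as the LP solver. The function name `dca_l0_svm` and all parameter values are hypothetical; this is a minimal illustration of the DC decomposition g − h with g convex (hinge + weighted ℓ1) and h linearized at each step, not the authors' implementation.

```python
import numpy as np
from scipy.optimize import linprog


def dca_l0_svm(X, y, lam=0.1, C=1.0, theta=0.1, max_iter=20, tol=1e-6):
    """DCA sketch for sparse linear SVM with a capped-l1 surrogate of the zero-norm.

    The penalty p(t) = lam * min(|t|, theta) decomposes as g - h with
    g(w) = lam * ||w||_1 (convex) and h(w) = lam * sum_j max(|w_j| - theta, 0)
    (convex). Each DCA iteration replaces h by its linearization at the
    current point and solves the resulting convex subproblem, which is a
    single linear program (hinge loss + weighted l1 penalty).
    """
    m, n = X.shape
    w, b = np.zeros(n), 0.0
    for _ in range(max_iter):
        # Subgradient of h at the current w: lam * sign(w_j) where |w_j| > theta.
        v = lam * np.sign(w) * (np.abs(w) > theta)

        # LP variables, stacked as: [w (n), u (n), b (1), xi (m)],
        # where u_j >= |w_j| and xi_i are hinge-loss slacks.
        c = np.concatenate([-v, lam * np.ones(n), [0.0], C * np.ones(m)])

        # Margin constraints: -y_i (x_i . w + b) - xi_i <= -1.
        A_margin = np.hstack([-y[:, None] * X, np.zeros((m, n)),
                              -y[:, None], -np.eye(m)])

        # Absolute-value constraints: w - u <= 0 and -w - u <= 0.
        A_abs1 = np.hstack([np.eye(n), -np.eye(n),
                            np.zeros((n, 1)), np.zeros((n, m))])
        A_abs2 = np.hstack([-np.eye(n), -np.eye(n),
                            np.zeros((n, 1)), np.zeros((n, m))])

        A_ub = np.vstack([A_margin, A_abs1, A_abs2])
        b_ub = np.concatenate([-np.ones(m), np.zeros(2 * n)])
        bounds = ([(None, None)] * n + [(0, None)] * n
                  + [(None, None)] + [(0, None)] * m)

        res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
        w_new, b_new = res.x[:n], res.x[2 * n]
        done = np.linalg.norm(w_new - w) < tol
        w, b = w_new, b_new
        if done:
            break
    return w, b
```

At the first iteration w = 0, so the subgradient term vanishes and the subproblem is a plain ℓ1-penalized SVM; subsequent iterations cancel the ℓ1 weight on coordinates already larger than theta, which is what drives the surrogate toward the zero-norm.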