Sparse and robust elastic net support vector machine with bounded concave loss for large-scale problems

Bibliographic Details
Published in: Engineering Applications of Artificial Intelligence, Vol. 162, p. 112352
Main Authors: Wang, Huajun; Li, Wenqian
Format: Journal Article
Language: English
Published: Elsevier Ltd, 20.12.2025
ISSN: 0952-1976, 1873-6769
DOI: 10.1016/j.engappai.2025.112352

More Information
Summary: The elastic net support vector machine is widely used for classification tasks, but its high computational cost on large-scale problems is a significant drawback. To address this, we first introduce a non-convex elastic net support vector machine model built on a newly designed bounded concave loss function, which attains both sparsity and robustness. Based on the notion of a proximal stationary point, we establish an optimality theory tailored to this new model. Leveraging that theory, we develop an efficient algorithm that divides the dataset into two categories, a working set and a non-working set: in each learning cycle, the parameters associated with the non-working set are held fixed while those of the working set are updated. Because each iteration therefore operates on a smaller subproblem, the algorithm improves runtime efficiency and lowers computational complexity. Numerical experiments demonstrate superior computational speed, fewer support vectors, and higher classification accuracy compared with eleven other leading solvers.
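The working-set idea summarized above can be sketched in a few lines. The following is an illustrative reconstruction under stated assumptions, not the authors' algorithm: it substitutes the standard hinge loss for the paper's bounded concave loss, uses plain subgradient steps with soft-thresholding for the elastic net penalty, and takes the margin violators at each iteration as the working set, so only those samples contribute to the update.

```python
import numpy as np

def working_set_enet_svm(X, y, lam1=0.01, lam2=0.01, lr=0.1, n_iters=50):
    """Illustrative working-set solver for a linear elastic net SVM.

    Each iteration builds a working set (samples violating the margin)
    and computes the loss subgradient over that set only; all other
    samples are ignored, mimicking the fixed non-working-set parameters
    described in the abstract. Hinge loss stands in for the paper's
    bounded concave loss.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(n_iters):
        margins = y * (X @ w + b)
        work = margins < 1.0          # working set: current margin violators
        if not work.any():            # no violators -> done
            break
        # Subgradient of the hinge loss, restricted to the working set.
        grad_w = -(y[work][:, None] * X[work]).mean(axis=0)
        grad_b = -y[work].mean()
        # Elastic net: gradient step for the L2 term,
        # then soft-thresholding for the L1 term.
        w -= lr * (grad_w + lam2 * w)
        w = np.sign(w) * np.maximum(np.abs(w) - lr * lam1, 0.0)
        b -= lr * grad_b
    return w, b

# Toy usage on two well-separated Gaussian clusters.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.hstack([-np.ones(50), np.ones(50)])
w, b = working_set_enet_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

The soft-thresholding step is what makes the L1 part of the elastic net drive small coefficients exactly to zero, which is the source of the sparsity the abstract refers to.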