De-noising boosting methods for variable selection and estimation subject to error-prone variables

Bibliographic Details
Published in: Statistics and Computing, Vol. 33, no. 2
Main Author: Chen, Li-Pang
Format: Journal Article
Language: English
Published: New York: Springer US, 01.04.2023 (Springer Nature B.V.)
ISSN: 0960-3174, 1573-1375
DOI: 10.1007/s11222-023-10209-3

More Information
Summary: Boosting is one of the most powerful statistical learning methods; it combines multiple weak learners into a strong learner. The main idea of boosting is to apply the algorithm sequentially so as to enhance its performance. Recently, boosting methods have been implemented to handle variable selection. However, little work has addressed complex data features such as measurement error in covariates. In this paper, we adapt the boosting method to perform variable selection, especially in the presence of measurement error. We develop two different approximate correction approaches that handle different types of responses while eliminating measurement error effects. In addition, the proposed algorithms are easy to implement and yield precise estimators. In numerical studies under various settings, the proposed method outperforms competing approaches.
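To illustrate the general idea of boosting-based variable selection described in the summary (a minimal sketch of standard componentwise L2-boosting, not the paper's measurement-error-corrected estimators): at each step the current residuals are regressed on each single covariate, the best-fitting covariate receives a small shrunken update, and covariates that are never selected keep a zero coefficient, which yields variable selection as a by-product.

```python
import numpy as np

def componentwise_l2_boost(X, y, n_steps=200, nu=0.1):
    """Componentwise L2-boosting: at each step, regress the current
    residuals on the single best covariate and take a small step of
    size nu. Unselected covariates retain a zero coefficient."""
    n, p = X.shape
    intercept = y.mean()
    resid = y - intercept
    beta = np.zeros(p)
    for _ in range(n_steps):
        # Simple least-squares coefficient of residuals on each column.
        denom = (X ** 2).sum(axis=0)
        coefs = X.T @ resid / denom
        # Pick the covariate whose one-variable fit leaves the
        # smallest residual sum of squares.
        sse = ((resid[:, None] - X * coefs) ** 2).sum(axis=0)
        j = int(np.argmin(sse))
        beta[j] += nu * coefs[j]           # shrunken coefficient update
        resid -= nu * coefs[j] * X[:, j]   # update working residuals
    return intercept, beta

# Toy data: only the first two of ten covariates matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))
y = 2.0 * X[:, 0] - 1.5 * X[:, 1] + 0.1 * rng.standard_normal(200)
b0, beta = componentwise_l2_boost(X, y)
selected = np.flatnonzero(np.abs(beta) > 1e-8)
```

On this toy example the relevant covariates 0 and 1 dominate the fitted coefficients; the paper's contribution is to correct such boosting updates when the covariates are observed with error, which this sketch does not attempt.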