The feature selection bias problem in relation to high-dimensional gene data

Bibliographic Details
Published in: Artificial Intelligence in Medicine, Vol. 66, pp. 63-71
Main Authors: Krawczuk, Jerzy; Łukaszuk, Tomasz
Format: Journal Article
Language: English
Published: Netherlands, Elsevier B.V., 01.01.2016
ISSN: 0933-3657
eISSN: 1873-2860
DOI: 10.1016/j.artmed.2015.11.001

More Information
Summary:

Highlights:
• We analyze seven gene datasets to show the effect of feature selection bias on the accuracy measure.
• We examine its importance in an empirical study of four feature selection methods.
• We use double cross-validation to evaluate feature selection performance.
• In addition, we examine the stability of the feature selection methods.
• We recommend cross-validation for feature selection in order to reduce the selection bias.

Feature selection is a technique widely used in data mining. The aim is to select the best subset of features relevant to the problem being considered. In this paper, we consider feature selection for the classification of gene datasets. Gene data are usually composed of just a few dozen objects described by thousands of features. For this kind of data it is easy to find a model that fits the learning data, but far harder to find one that performs equally well on new data. This overfitting issue is well known for classification and regression, but it applies to feature selection as well. We address this problem and investigate its importance in an empirical study of four feature selection methods applied to seven high-dimensional gene datasets. We chose datasets that are well studied in the literature: colon cancer, leukemia and breast cancer. All the datasets are characterized by a large number of features and the presence of exactly two decision classes. The feature selection methods used are ReliefF, minimum redundancy maximum relevance (mRMR), support vector machine-recursive feature elimination (SVM-RFE) and relaxed linear separability (RLS). Our main result reveals a positive feature selection bias in all 28 experiments (7 datasets × 4 feature selection methods). The bias, calculated as the difference between the validation and test accuracies, ranges from 2.6% to as much as 41.67%. The validation (biased) accuracy was computed on the same data on which feature selection was performed; the test accuracy was computed on data not used for feature selection, via so-called external cross-validation. This work provides evidence that using the same dataset for feature selection and learning is not appropriate. We recommend using cross-validation for feature selection in order to reduce the selection bias.
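To make the two evaluation protocols concrete, here is a minimal sketch (not taken from the article): it uses synthetic data, with scikit-learn's SelectKBest and a linear SVM as hypothetical stand-ins for the paper's four selection methods, and contrasts the biased protocol, where features are selected on the full dataset before cross-validation, with external cross-validation, where selection is repeated inside each training fold.

# Minimal sketch of the selection-bias effect described in the abstract.
# Assumptions (not from the paper): synthetic data, univariate selection
# (SelectKBest) and a linear SVM stand in for the paper's four methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Few objects, thousands of features: the typical gene-data shape.
X, y = make_classification(n_samples=60, n_features=5000,
                           n_informative=20, random_state=0)

# Biased protocol: select features on ALL data, then cross-validate.
X_sel = SelectKBest(f_classif, k=50).fit_transform(X, y)
biased = cross_val_score(SVC(kernel="linear"), X_sel, y, cv=5).mean()

# External (double) cross-validation: selection happens inside each
# training fold, so test folds never influence the chosen features.
pipe = Pipeline([("select", SelectKBest(f_classif, k=50)),
                 ("clf", SVC(kernel="linear"))])
unbiased = cross_val_score(pipe, X, y, cv=5).mean()

print(f"validation (biased) accuracy: {biased:.3f}")
print(f"test (external CV) accuracy:  {unbiased:.3f}")
print(f"selection bias estimate:      {biased - unbiased:.3f}")

With far more features than samples, the first estimate typically exceeds the second; that gap corresponds to the positive selection bias the study quantifies.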