Multi-model inference using mixed effects from a linear regression based genetic algorithm
Published in | BMC Bioinformatics, Vol. 15, No. 1, p. 88 |
---|---|
Main Authors | , , |
Format | Journal Article |
Language | English |
Published | London: BioMed Central, 27.03.2014 (BioMed Central Ltd / Springer Nature B.V.) |
ISSN | 1471-2105 |
DOI | 10.1186/1471-2105-15-88 |
Summary:

Background
Different high-dimensional regression methodologies exist for selecting variables to predict a continuous outcome. To improve variable selection when clustered observations are present in the training data, an extension towards mixed-effects modeling (MM) is needed, but such an extension may not always be straightforward to implement.
In this article, we developed such an MM extension (GA-MM-MMI) for automated variable selection by a linear regression-based genetic algorithm (GA) using multi-model inference (MMI). We exemplify our approach by training a linear regression model to predict resistance to the integrase inhibitor Raltegravir (RAL) on a genotype-phenotype database, with many integrase mutations as candidate covariates. The genotype-phenotype pairs in this database were derived from a limited number of subjects, with multiple data points from the same subject and an intra-class correlation of 0.92.
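As a minimal sketch of the kind of mixed-effects model used for such clustered data (not the authors' code), the Python snippet below fits a random-intercept-per-subject model with statsmodels on synthetic genotype-phenotype data and estimates the intra-class correlation. The column names (subject, phenotype, mut_*) and the data-generating settings are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Synthetic stand-in for the clustered genotype-phenotype training data:
# several phenotype measurements per subject, binary mutation covariates.
n_subjects, reps, n_mut = 50, 4, 5
subject = np.repeat(np.arange(n_subjects), reps)
X = rng.integers(0, 2, size=(n_subjects * reps, n_mut))
subject_effect = rng.normal(0.0, 2.0, n_subjects)[subject]   # within-subject clustering
phenotype = X @ rng.normal(1.0, 0.5, n_mut) + subject_effect + rng.normal(0.0, 0.5, subject.size)

data = pd.DataFrame(X, columns=[f"mut_{i}" for i in range(n_mut)])
data["subject"] = subject
data["phenotype"] = phenotype

# Random intercept per subject captures the correlation between repeated observations.
formula = "phenotype ~ " + " + ".join(f"mut_{i}" for i in range(n_mut))
fit = smf.mixedlm(formula, data, groups=data["subject"]).fit()

# Intra-class correlation = between-subject variance / total variance.
var_between = fit.cov_re.iloc[0, 0]
icc = var_between / (var_between + fit.scale)
print(f"intra-class correlation: {icc:.2f}")
```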
Results
In generating the RAL model, we took computational efficiency into account by optimizing the GA parameters one by one and by using tournament selection. To derive the main GA parameters we used 3 times 5-fold cross-validation. The number of integrase mutations used as covariates in the mixed-effects models was 25 (chrom.size). A GA solution was found when R²_MM > 0.95 (goal.fitness). We tested three different MMI approaches to combine the results of 100 GA solutions into one GA-MM-MMI model. When evaluating the GA-MM-MMI performance on two unseen data sets, a more parsimonious and interpretable model was found (GA-MM-MMI TOP18: a mixed-effects model containing the 18 most prevalent mutations in the GA solutions, refitted on the training data), with better predictive accuracy (R²) than GA-ordinary least squares (GA-OLS) and the Least Absolute Shrinkage and Selection Operator (LASSO).
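The sketch below is an assumption-laden illustration, not the paper's implementation: it shows a tournament-selection operator and a TOP-K multi-model-inference step that keeps the mutations occurring most often across the collected GA solutions. The chromosome encoding (a set of mutation labels), the tournament pool size, and the stand-in fitness function are all hypothetical; the final refit of the TOP-18 covariates as a mixed-effects model would reuse a fit like the one sketched above.

```python
import random
from collections import Counter

def tournament_select(population, fitness, pool_size=3, rng=random):
    """Pick one parent: sample pool_size chromosomes and keep the fittest."""
    contenders = rng.sample(population, pool_size)
    return max(contenders, key=fitness)

def mmi_top_k(solutions, k=18):
    """Keep the k mutations that occur most often across the GA solutions."""
    counts = Counter(m for sol in solutions for m in sol)
    return [mut for mut, _ in counts.most_common(k)]

# Toy usage: 100 "GA solutions", each a 25-mutation chromosome (chrom.size).
rng = random.Random(1)
all_mutations = [f"M{i}" for i in range(60)]
solutions = [frozenset(rng.sample(all_mutations, 25)) for _ in range(100)]

top18 = mmi_top_k(solutions, k=18)   # covariate set for the refitted TOP18 model
parent = tournament_select(solutions, fitness=len, pool_size=3, rng=rng)  # len() is only a demo fitness
print(top18)
```

Aggregating by prevalence across many near-equivalent GA solutions is what yields the more parsimonious combined model: mutations that appear in only a few solutions are dropped before the final refit.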
Conclusions
We have demonstrated improved performance when using GA-MM-MMI for the selection of mutations on a genotype-phenotype data set. As setting the GA parameters is largely automated, the method should be applicable to similar data sets with clustered observations.