Speeding-Up Model-Selection in GraphNet via Early-Stopping and Univariate Feature-Screening
Published in | 2015 International Workshop on Pattern Recognition in NeuroImaging, pp. 17–20 |
---|---|
Format | Conference Proceeding |
Language | English |
Published | IEEE, 01.06.2015 |
DOI | 10.1109/PRNI.2015.19 |
Summary: | The GraphNet (aka S-Lasso), like other "sparsity + structure" priors such as TV-L1, is not easily applicable to brain data because of technical problems concerning the selection of the regularization parameters. Moreover, such models lead in their own right to challenging high-dimensional optimization problems. In this manuscript, we present heuristics for speeding up the overall optimization process: (a) early stopping, whereby one halts the optimization when the test score (performance on left-out data) of the internal cross-validation for model selection stops improving, and (b) univariate feature screening, whereby irrelevant (non-predictive) voxels are detected and eliminated before the optimization problem is entered, thus reducing the size of the problem. Empirical results with GraphNet on real MRI (Magnetic Resonance Imaging) datasets indicate that these heuristics are a win-win strategy: they add speed without sacrificing the quality of the predictions. We expect the proposed heuristics to work on other models such as TV-L1. |
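The two heuristics summarized above can be sketched in a few lines of Python. This is an illustrative reconstruction, not the paper's actual solver: the screening statistic here is a simple absolute Pearson correlation per voxel (the paper uses a univariate ranking; the exact statistic is an assumption), and plain gradient descent on squared loss stands in for the GraphNet optimizer, with early stopping driven by the score on held-out data.

```python
import numpy as np

def screen_voxels(X, y, keep_fraction=0.1):
    """Univariate feature screening: rank each column (voxel) of X by its
    absolute correlation with the target y and keep the top fraction,
    shrinking the optimization problem before it is solved."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    denom = np.sqrt((Xc ** 2).sum(axis=0) * (yc ** 2).sum())
    scores = np.abs(Xc.T @ yc) / np.maximum(denom, 1e-12)
    k = max(1, int(keep_fraction * X.shape[1]))
    return np.sort(np.argsort(scores)[-k:])  # indices of retained voxels

def fit_with_early_stopping(X_tr, y_tr, X_val, y_val,
                            lr=0.01, max_iter=1000, patience=5):
    """Gradient descent on squared loss (a stand-in for the GraphNet
    solver), halted when the left-out score stops improving for
    `patience` consecutive checks."""
    w = np.zeros(X_tr.shape[1])
    best_score, best_w, stalls = -np.inf, w.copy(), 0
    for _ in range(max_iter):
        grad = X_tr.T @ (X_tr @ w - y_tr) / len(y_tr)
        w -= lr * grad
        score = -np.mean((X_val @ w - y_val) ** 2)  # negative MSE on held-out data
        if score > best_score + 1e-6:
            best_score, best_w, stalls = score, w.copy(), 0
        else:
            stalls += 1
            if stalls >= patience:
                break  # early stop: validation score has plateaued
    return best_w
```

Screening would be applied once per cross-validation fold (on training data only, to avoid leakage), and the early-stopped fit would then run inside the model-selection loop for each candidate regularization parameter.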