Toward a unified framework for interpreting machine-learning models in neuroimaging

Bibliographic Details
Published in Nature Protocols Vol. 15; no. 4; pp. 1399–1435
Main Authors Kohoutová, Lada, Heo, Juyeon, Cha, Sungmin, Lee, Sungwoo, Moon, Taesup, Wager, Tor D., Woo, Choong-Wan
Format Journal Article
Language English
Published London: Nature Publishing Group UK, 01.04.2020
ISSN 1754-2189
1750-2799
DOI 10.1038/s41596-019-0289-5


More Information
Summary: Machine learning is a powerful tool for creating computational models relating brain function to behavior, and its use is becoming widespread in neuroscience. However, these models are complex and often hard to interpret, making it difficult to evaluate their neuroscientific validity and contribution to understanding the brain. For neuroimaging-based machine-learning models to be interpretable, they should (i) be comprehensible to humans, (ii) provide useful information about what mental or behavioral constructs are represented in particular brain pathways or regions, and (iii) demonstrate that they are based on relevant neurobiological signal, not artifacts or confounds. In this protocol, we introduce a unified framework that consists of model-, feature- and biology-level assessments to provide complementary results that support the understanding of how and why a model works. Although the framework can be applied to different types of models and data, this protocol provides practical tools and examples of selected analysis methods for a functional MRI dataset and multivariate pattern-based predictive models. A user of the protocol should be familiar with basic programming in MATLAB or Python. This protocol will help build more interpretable neuroimaging-based machine-learning models, contributing to the cumulative understanding of brain mechanisms and brain health. Although the analyses provided here constitute a limited set of tests and take a few hours to days to complete, depending on the size of data and available computational resources, we envision the process of annotating and interpreting models as an open-ended process, involving collaborative efforts across multiple studies and laboratories. Neuroimaging-based machine-learning models should be interpretable to neuroscientists and users in applied settings. This protocol describes how to assess the interpretability of models based on fMRI.
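The summary above refers to multivariate pattern-based predictive models of fMRI data assessed at the model and feature levels. As a minimal illustrative sketch only, and not the published protocol's CanlabCore/MATLAB tooling, the Python example below assumes a trials-by-voxels feature matrix and a continuous behavioral outcome, fits a ridge-regression pattern model with scikit-learn, checks cross-validated prediction performance (a model-level assessment), and inspects the largest voxel weights (a simple feature-level assessment). The simulated data, the ridge penalty, and all variable names are assumptions introduced for illustration.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)

# Simulated stand-in for a real fMRI dataset (assumption for illustration):
# X is a trials x voxels feature matrix, y is a continuous behavioral outcome.
n_trials, n_voxels = 100, 2000
X = rng.standard_normal((n_trials, n_voxels))
true_pattern = rng.standard_normal(n_voxels) * (rng.random(n_voxels) < 0.01)
y = X @ true_pattern + rng.standard_normal(n_trials)

# Whole-brain multivariate pattern model (ridge regression as one possible choice).
model = Ridge(alpha=10.0)

# Model-level assessment: cross-validated prediction-outcome correlation.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
y_pred = cross_val_predict(model, X, y, cv=cv)
print("Cross-validated prediction-outcome r:", np.corrcoef(y, y_pred)[0, 1])

# Feature-level assessment: inspect the fitted voxel weights, e.g., the
# voxels that contribute most strongly to the prediction.
model.fit(X, y)
top_voxels = np.argsort(np.abs(model.coef_))[-10:]
print("Ten largest-weight voxel indices:", top_voxels)

In the full protocol, such model-level checks would be complemented by feature-level significance testing and biology-level validation against known anatomy and physiology; this sketch only shows the general shape of the workflow.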
Author contributions
L.K., T.D.W. and C.-W.W. conceptualized and developed the protocol and implemented its part for linear models. J.H., S.C., S.L., T.M. and C.-W.W. implemented the part for nonlinear models. T.D.W., C.-W.W. and L.K. contributed to the development of CanlabCore tools. All authors reviewed and revised the manuscript.