A multi-voxel-activity-based feature selection method for human cognitive states classification by functional magnetic resonance imaging data
| Published in | Cluster computing Vol. 18; no. 1; pp. 199 - 208 |
|---|---|
| Main Authors | , , , , |
| Format | Journal Article |
| Language | English |
| Published | Boston: Springer US, 01.03.2015 (Springer Nature B.V) |
| ISSN | 1386-7857; 1573-7543 |
| DOI | 10.1007/s10586-014-0369-9 |
| Summary: | Nowadays, various kinds of signals and data are collected to investigate the human brain's activity for disease detection. In particular, functional magnetic resonance imaging (fMRI) provides a powerful tool for investigating brain function. Learning the activity patterns related to specific cognitive states from fMRI data is one of the most critical challenges for neuroscientists. The high dimensionality and noise of fMRI data make it difficult to mine with conventional approaches. In this paper, we propose a new feature selection method for classifying human cognitive states from fMRI data. The Fisher discriminant ratio (FDR) between classes and the zero condition is used to measure the activity of voxels. We then choose the most active voxels from the most active regions of interest (ROIs) as the most informative features for a Gaussian naïve Bayes (GNB) classifier. The proposed method boosts the whole system by excluding non-task-related components, thereby reducing processing time and increasing accuracy. The StarPlus and Visual Object Recognition datasets are used to evaluate the performance of the proposed method. The experimental results show that it outperforms comparable systems, with an accuracy of ∼96.45 % on the StarPlus dataset and 88.4 % on the Visual Object Recognition dataset. |
|---|---|
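The pipeline described in the summary (score each voxel's discriminability, keep the top-ranked voxels, classify with Gaussian naïve Bayes) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' implementation: the function names (`fisher_ratio`, `select_top_voxels`), the tiny `GaussianNB` class, and all parameter choices are ours, and the FDR shown is the common two-class form (mean difference squared over summed variances) rather than the paper's exact class-versus-zero-condition formulation.

```python
import numpy as np

def fisher_ratio(X, y):
    """Per-voxel Fisher discriminant ratio between two classes:
    (mean_1 - mean_2)^2 / (var_1 + var_2)."""
    X1, X2 = X[y == 0], X[y == 1]
    num = (X1.mean(axis=0) - X2.mean(axis=0)) ** 2
    den = X1.var(axis=0) + X2.var(axis=0) + 1e-12  # avoid divide-by-zero
    return num / den

def select_top_voxels(X, y, k):
    """Indices of the k most discriminative voxels by FDR score."""
    return np.argsort(fisher_ratio(X, y))[::-1][:k]

class GaussianNB:
    """Tiny Gaussian naive Bayes: independent Gaussians per feature."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu = np.array([X[y == c].mean(axis=0) for c in self.classes])
        self.var = np.array([X[y == c].var(axis=0) + 1e-9 for c in self.classes])
        self.logprior = np.log([np.mean(y == c) for c in self.classes])
        return self

    def predict(self, X):
        # Log-likelihood of each sample under each class, plus log prior.
        ll = -0.5 * (np.log(2 * np.pi * self.var[:, None, :])
                     + (X[None] - self.mu[:, None, :]) ** 2
                       / self.var[:, None, :]).sum(-1)
        return self.classes[np.argmax(ll + self.logprior[:, None], axis=0)]

# Synthetic demo: 200 "trials" x 500 "voxels"; the first 20 voxels carry signal.
rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)
X = rng.normal(size=(200, 500))
X[:, :20] += y[:, None] * 2.0  # shift informative voxels for class 1

idx = select_top_voxels(X, y, 20)
acc = (GaussianNB().fit(X[:, idx], y).predict(X[:, idx]) == y).mean()
```

On this separable synthetic data the FDR ranking recovers the informative voxels and the classifier reaches near-perfect training accuracy; real fMRI data would of course require proper train/test splits and ROI masking first.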