Explainable artificial intelligence model to predict brain states from fNIRS signals

Bibliographic Details
Published in: Frontiers in Human Neuroscience, Vol. 16, p. 1029784
Main Authors: Shibu, Caleb Jones; Sreedharan, Sujesh; Arun, KM; Kesavadas, Chandrasekharan; Sitaram, Ranganatha
Format: Journal Article
Language: English
Published: Switzerland: Frontiers Research Foundation / Frontiers Media S.A., 19.01.2023
ISSN: 1662-5161
DOI: 10.3389/fnhum.2022.1029784

More Information
Summary: Objective: Most Deep Learning (DL) methods for the classification of functional Near-Infrared Spectroscopy (fNIRS) signals do so without explaining which features contribute to the classification of a task or imagery. An explainable artificial intelligence (xAI) system that can decompose the Deep Learning model's output onto the input variables for fNIRS signals is described here. Approach: We propose an xAI-fNIRS system that consists of a classification module and an explanation module. The classification module consists of two separately trained sliding window-based classifiers, namely, (i) a 1-D Convolutional Neural Network (CNN); and (ii) a Long Short-Term Memory (LSTM) network. The explanation module uses SHAP (SHapley Additive exPlanations) to explain the CNN model's output in terms of the model's input. Main results: We observed that the classification module was able to classify two types of datasets: (a) motor task (MT), acquired from three subjects; and (b) motor imagery (MI), acquired from 29 subjects, with an accuracy of over 96% for both the CNN and LSTM models. The explanation module was able to identify the channels contributing the most to the classification of MI or MT, and therefore to identify the channel locations and whether they correspond to oxy- or deoxy-hemoglobin levels at those locations. Significance: The xAI-fNIRS system can distinguish between the brain states related to overt and covert motor imagery from fNIRS signals with high classification accuracy and is able to explain the signal features that discriminate between the brain states of interest.
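
Illustration: The abstract specifies the pipeline only at a high level. Below is a minimal sketch of how such a system might be assembled, assuming illustrative channel counts, window lengths, and layer sizes that do not come from the paper; SHAP's DeepExplainer is one of several SHAP estimators that could serve as the explanation module. The LSTM branch is omitted, since the explanation is applied to the CNN.

# Sketch of the xAI-fNIRS pipeline: sliding window-based 1-D CNN
# classifier plus a SHAP explanation module. All sizes are assumptions.
import numpy as np
import torch
import torch.nn as nn
import shap

N_CHANNELS = 40   # assumed: oxy- and deoxy-Hb traces stacked as input rows
WIN_LEN = 100     # assumed: samples per sliding window
N_CLASSES = 2     # e.g., task vs. rest

class FNIRSCNN(nn.Module):
    """Illustrative 1-D CNN over (channels, time) fNIRS windows."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Flatten(),
            nn.Linear(64 * (WIN_LEN // 4), N_CLASSES),
        )

    def forward(self, x):  # x: (batch, channels, time)
        return self.net(x)

def sliding_windows(signal, win=WIN_LEN, step=10):
    """Cut one (channels, time) recording into overlapping windows."""
    starts = range(0, signal.shape[1] - win + 1, step)
    return np.stack([signal[:, s:s + win] for s in starts])

# Toy data standing in for a preprocessed fNIRS recording.
rng = np.random.default_rng(0)
recording = rng.standard_normal((N_CHANNELS, 1000)).astype(np.float32)
X = torch.from_numpy(sliding_windows(recording))

model = FNIRSCNN().eval()  # in practice: trained on labeled windows

# Explanation module: decompose the CNN output onto its inputs with SHAP.
background = X[:16]                        # reference windows
explainer = shap.DeepExplainer(model, background)
sv = explainer.shap_values(X[:4])
# Older shap versions return a list (one array per class); newer ones an
# array with a trailing class axis. Take the attributions for class 1.
vals = sv[1] if isinstance(sv, list) else sv[..., 1]

# Rank channels by total absolute attribution over time, mirroring the
# paper's channel-level explanation of which locations (and whether the
# oxy- or deoxy-Hb trace there) drive the classification.
per_channel = np.abs(vals).sum(axis=-1).mean(axis=0)
print("Most influential channels (assumed indexing):",
      np.argsort(per_channel)[::-1][:5])

Aggregating absolute SHAP values over the time axis of each input row is one simple way to obtain the channel-level ranking the abstract describes: because oxy- and deoxy-hemoglobin traces enter as separate rows, the ranking indicates both the location and the chromophore driving the classification.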
Edited by: Shenghong He, University of Oxford, United Kingdom
Specialty section: This article was submitted to Brain-Computer Interfaces, a section of the journal Frontiers in Human Neuroscience
Reviewed by: Jinung An, Daegu Gyeongbuk Institute of Science and Technology (DGIST), South Korea; Xiaofeng Xie, Hainan University, China