Explainable Artificial Intelligence for Human-Machine Interaction in Brain Tumor Localization

Bibliographic Details
Published in: Journal of Personalized Medicine, Vol. 11, No. 11, p. 1213
Main Authors: Esmaeili, Morteza; Vettukattil, Riyas; Banitalebi, Hasan; Krogh, Nina R.; Geitung, Jonn Terje
Format: Journal Article
Language: English
Published: Basel: MDPI AG, 16 November 2021
ISSN: 2075-4426
DOI: 10.3390/jpm11111213

Summary: Primary brain malignancies in adults are fatal worldwide. Computer vision, especially recent developments in artificial intelligence (AI), has created opportunities to automatically characterize and diagnose tumor lesions in the brain. AI approaches have achieved unprecedented accuracy in various image-analysis tasks, including distinguishing tumor-containing brains from healthy ones. AI models, however, operate as black boxes, concealing the rationale behind their predictions; exposing that rationale is an essential step toward translating AI imaging tools into clinical routine. Explainable AI approaches aim to visualize the high-level features learned by trained models or to integrate interpretability into the training process itself. This study evaluates the performance of selected deep-learning algorithms at localizing tumor lesions and distinguishing lesions from healthy regions across magnetic resonance imaging contrasts. Despite a significant correlation between classification and lesion-localization accuracy (R = 0.46, p = 0.005), the well-known AI algorithms examined in this study classified some tumor-containing brains based on non-relevant features. The results suggest that explainable AI approaches can build intuition about model interpretability and may play an important role in the performance evaluation of deep-learning models. Developing explainable AI approaches will be an essential tool for improving human–machine interaction and assisting in the selection of optimal training methods.
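
As an illustration of the kind of post hoc explainability the summary describes, the following is a minimal Grad-CAM-style sketch in PyTorch. It is not the authors' code: the stand-in ResNet-18 classifier, the choice of hooked layer, and the random placeholder input are all assumptions. It produces a saliency map highlighting the image regions that drive the classifier's prediction, which could then be thresholded and compared against a ground-truth lesion mask (e.g., via Dice overlap) to score localization.

import torch
import torch.nn.functional as F
from torchvision import models

# Hypothetical stand-in classifier; the study's actual models may differ.
model = models.resnet18(weights=None)
model.eval()

activations, gradients = {}, {}

def save_activation(module, inputs, output):
    activations["feat"] = output.detach()

def save_gradient(module, grad_input, grad_output):
    gradients["feat"] = grad_output[0].detach()

# Hook the last convolutional stage (the choice of layer is an assumption).
model.layer4.register_forward_hook(save_activation)
model.layer4.register_full_backward_hook(save_gradient)

x = torch.randn(1, 3, 224, 224)  # placeholder for a preprocessed MRI slice
score = model(x).max()           # logit of the predicted class
model.zero_grad()
score.backward()

# Grad-CAM: weight each feature map by its spatially averaged gradient,
# sum over channels, and keep only positive evidence for the prediction.
weights = gradients["feat"].mean(dim=(2, 3), keepdim=True)
cam = F.relu((weights * activations["feat"]).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)  # scale to [0, 1]
# `cam` can be overlaid on the slice, or thresholded and compared with a
# ground-truth lesion mask to quantify how well the "evidence" the model
# uses coincides with the actual tumor region.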