ExplAIn: Explanatory artificial intelligence for diabetic retinopathy diagnosis

Bibliographic Details
Published in: Medical Image Analysis, Vol. 72, p. 102118
Main Authors: Quellec, Gwenolé; Al Hajj, Hassan; Lamard, Mathieu; Conze, Pierre-Henri; Massin, Pascale; Cochener, Béatrice
Format: Journal Article
Language: English
Published: Amsterdam: Elsevier B.V., 01.08.2021
ISSN: 1361-8415, 1361-8423, 1361-8431
DOI: 10.1016/j.media.2021.102118

Summary:
•An explanatory artificial intelligence framework is presented for image classification.
•It classifies each pixel and the image as a whole with image-level supervision only.
•A novel self-supervised approach is presented for foreground/background separation.
•The image classification process can be explained visually and possibly in text form.
•For diabetic retinopathy grading, explainability does not reduce classification performance.

In recent years, Artificial Intelligence (AI) has proven its relevance for medical decision support. However, the "black-box" nature of successful AI algorithms still holds back their widespread deployment. In this paper, we describe an eXplanatory Artificial Intelligence (XAI) that reaches the same level of performance as black-box AI for the task of classifying Diabetic Retinopathy (DR) severity using Color Fundus Photography (CFP). This algorithm, called ExplAIn, learns to segment and categorize lesions in images; the final image-level classification derives directly from these multivariate lesion segmentations. The novelty of this explanatory framework is that it is trained end to end, with image-level supervision only, just like black-box AI algorithms: the concepts of lesions and lesion categories emerge by themselves. For improved lesion localization, foreground/background separation is trained through self-supervision, in such a way that occluding foreground pixels transforms the input image into a healthy-looking image. The advantage of such an architecture is that automatic diagnoses can be explained simply by an image and/or a few sentences. ExplAIn is evaluated at the image level and at the pixel level on various CFP image datasets. We expect this new framework, which jointly offers high classification performance and explainability, to facilitate AI deployment.
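The abstract describes two key ideas: deriving the image-level grade from per-pixel lesion segmentations, and a self-supervised occlusion step in which masking predicted foreground pixels should leave a healthy-looking image. The toy sketch below illustrates both ideas with NumPy only; the aggregation rule (per-category max-pooling with a 0.5 threshold), the function names, and the constant fill value are hypothetical simplifications, not the authors' implementation.

```python
import numpy as np

def image_level_grade(pixel_probs):
    """Aggregate per-pixel lesion-category probabilities into an
    image-level grade. pixel_probs has shape (H, W, C), where
    channel 0 is the healthy/background category.

    Simplified rule: take the strongest pixel response per category
    (max-pooling); if no lesion category exceeds 0.5, predict
    grade 0 (healthy), otherwise the strongest lesion category."""
    per_category = pixel_probs.max(axis=(0, 1))  # (C,) strongest evidence
    lesion_scores = per_category[1:]             # drop healthy channel
    if lesion_scores.max() <= 0.5:
        return 0
    return int(lesion_scores.argmax() + 1)

def occlude_foreground(image, foreground_mask, fill_value=0.0):
    """Self-supervision idea from the abstract: replacing the
    predicted foreground pixels should yield a healthy-looking
    image (here, a crude constant fill stands in for a learned
    background model)."""
    occluded = image.copy()
    occluded[foreground_mask] = fill_value
    return occluded
```

In the actual framework the aggregation and the occluded-image "healthiness" are learned end to end from image-level labels; this sketch only shows the data flow those two ideas imply.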