Visual Analytics for Explainable Deep Learning

Bibliographic Details
Published in: IEEE Computer Graphics and Applications, Vol. 38, No. 4, pp. 84-92
Main Authors: Choo, Jaegul; Liu, Shixia
Format: Magazine Article
Language: English
Published: United States: The Institute of Electrical and Electronics Engineers, Inc. (IEEE), 01.07.2018
ISSN: 0272-1716
EISSN: 1558-1756
DOI: 10.1109/MCG.2018.042731661

Summary: Recently, deep learning has advanced the state of the art in artificial intelligence to a new level, and humans rely on artificial intelligence techniques more than ever. However, even with such unprecedented advancements, the lack of explanation for the decisions made by deep learning models and the absence of control over their internal processes are major drawbacks in critical decision-making domains such as precision medicine and law enforcement. In response, efforts are being made to make deep learning interpretable and controllable by humans. This article reviews the visual analytics, information visualization, and machine learning perspectives relevant to this aim and discusses potential challenges and future research directions.