Shapley value: from cooperative game to explainable artificial intelligence

Bibliographic Details
Published in: Autonomous Intelligent Systems, Vol. 4, No. 1, pp. 2-12
Main Authors: Li, Meng; Sun, Hengyang; Huang, Yanjun; Chen, Hong
Format: Journal Article
Language: English
Published: Springer Nature Singapore, Singapore, 01.12.2024
ISSN: 2730-616X
DOI: 10.1007/s43684-023-00060-8

Summary: With the tremendous success of machine learning (ML), concerns about the black-box nature of ML models have grown. The issue of interpretability affects trust in ML systems and raises ethical concerns such as algorithmic bias. In recent years, feature attribution methods based on the Shapley value have become the mainstream explainable artificial intelligence approach for explaining ML models. This paper provides a comprehensive overview of Shapley value-based attribution methods. We begin by outlining the foundational theory of the Shapley value, rooted in cooperative game theory, and discussing its desirable properties. To enhance comprehension and aid in identifying relevant algorithms, we propose a comprehensive classification framework for existing Shapley value-based feature attribution methods along three dimensions: Shapley value type, feature replacement method, and approximation method. Furthermore, we emphasize the practical application of the Shapley value at different stages of ML model development, encompassing the pre-modeling, modeling, and post-modeling phases. Finally, this work summarizes the limitations associated with the Shapley value and discusses potential directions for future research.
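
For reference (not quoted from the article itself), the classical Shapley value from cooperative game theory that the summary alludes to assigns to each player i in a game (N, v), with player set N and value function v, the attribution

\[
\phi_i(v) = \sum_{S \subseteq N \setminus \{i\}} \frac{|S|!\,\bigl(|N| - |S| - 1\bigr)!}{|N|!} \, \bigl( v(S \cup \{i\}) - v(S) \bigr)
\]

In the feature attribution setting surveyed by the paper, the players are the input features and v(S) is the model's output when only the features in S are present; the desirable properties mentioned above (efficiency, symmetry, dummy, and additivity) uniquely characterize this formula.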