Recent Emerging Techniques in Explainable Artificial Intelligence to Enhance the Interpretability and Understanding of AI Models for Humans

Bibliographic Details
Published in: Neural Processing Letters, Vol. 57, No. 1, p. 16
Main Authors: Mathew, Daniel Enemona; Ebem, Deborah Uzoamaka; Ikegwu, Anayo Chukwu; Ukeoma, Pamela Eberechukwu; Dibiaezue, Ngozi Fidelia
Format: Journal Article
Language: English
Published: Dordrecht: Springer Nature B.V., 07.02.2025
ISSN: 1370-4621 (print); 1573-773X (electronic)
DOI: 10.1007/s11063-025-11732-2

Summary: Recent advancements in Explainable Artificial Intelligence (XAI) aim to bridge the gap between complex artificial intelligence (AI) models and human understanding, fostering trust and usability in AI systems. However, challenges persist in comprehensively interpreting these models, hindering their widespread adoption. This study addresses these challenges by exploring recently emerging techniques in XAI. The primary problem addressed is the lack of transparency and interpretability in AI models, which undermines user trust and inhibits their integration into critical, institution-wide decision-making processes. Through an in-depth review, this study pursues two objectives: enhancing the interpretability of AI models and improving human understanding of their decision-making processes. Various methodological approaches, including post-hoc explanations, model transparency methods, and interactive visualization techniques, are investigated to elucidate AI model behaviours. We further present techniques for making AI models more interpretable and understandable to humans, together with their strengths and weaknesses, to demonstrate promising advancements in model interpretability and to facilitate better human comprehension of complex AI systems. In addition, we survey applications of XAI in local use cases. Challenges, proposed solutions, and open research directions are highlighted. The implications of this research are profound, as enhanced interpretability fosters trust in AI systems across diverse applications, from healthcare to finance. By empowering users to understand and scrutinize AI decisions, these techniques pave the way for more responsible and accountable AI deployment.
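The abstract names post-hoc explanation as one family of XAI techniques but, as an abstract, includes no code. The sketch below illustrates one widely used post-hoc method, permutation feature importance, applied to a black-box classifier. The choice of scikit-learn, the toy breast-cancer dataset, and all parameter values are illustrative assumptions, not methods prescribed by the article.

# A minimal sketch of a post-hoc explanation technique (permutation feature
# importance); library and dataset choices are illustrative, not from the article.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train an opaque model whose internals we never inspect directly.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Post-hoc explanation: shuffle each feature on held-out data and measure
# the drop in score; larger drops mark features the model relies on most.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean), key=lambda t: t[1], reverse=True)
for name, mean in ranked[:5]:
    print(f"{name}: {mean:.4f}")

Because it only perturbs inputs and re-scores the model, permutation importance is model-agnostic: the same procedure explains any classifier without access to its internals, which is the appeal of the post-hoc techniques the abstract describes.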