Explainable AI in Credit Card Fraud Detection: SHAP and LIME for Machine Learning Models

Bibliographic Details
Published in: International Conference on Signal Processing and Communication (Online), pp. 387 - 392
Main Authors: Keerthana, Chirumamilla Satya; Nalluri, Siri Chandana; Muskaan, Simrah; Sadagopan, Poorvie
Format: Conference Proceeding
Language: English
Published: IEEE, 20.02.2025
ISSN: 2643-444X
DOI: 10.1109/ICSC64553.2025.10968935

Summary: With the rapid growth of e-commerce and online banking, credit card scams have become a significant challenge. Traditional approaches to detecting scams have been outperformed by machine learning techniques. However, the reasoning behind the classification of a transaction as fraudulent or legitimate is often opaque. To address this issue, we have implemented two explainable AI (XAI) methods, local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), across eight machine learning models: logistic regression, decision tree, random forest, support vector machine, extreme gradient boosting, naive Bayes classifier, k-nearest neighbors, and a basic neural network. The results show how individual features of the dataset contribute to a specific prediction. The detailed code for the project is provided here.