Explainable AI in Credit Card Fraud Detection: SHAP and LIME for Machine Learning Models
| Published in | International Conference on Signal Processing and Communication (Online), pp. 387–392 |
|---|---|
| Format | Conference Proceeding |
| Language | English |
| Published | IEEE, 20.02.2025 |
| ISSN | 2643-444X |
| DOI | 10.1109/ICSC64553.2025.10968935 |
Summary: With the rapid growth of e-commerce and online banking, credit card scams have become a significant challenge. Traditional approaches to detecting scams have been outperformed by machine learning techniques. However, the reasoning behind classifying a transaction as fraudulent or legitimate remains poorly understood. To address this issue, we have implemented two explainable AI (XAI) methods, local interpretable model-agnostic explanations (LIME) and Shapley additive explanations (SHAP), across eight machine learning models: logistic regression, decision tree, random forest, support vector machine, extreme gradient boosting, naive Bayes classifier, k-nearest neighbors, and a basic neural network. The results show how individual features of the data set contribute to a specific prediction. The detailed code for the project is provided here.
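The summary describes running SHAP and LIME on top of standard classifiers to attribute each prediction to individual features. The sketch below is an illustrative approximation of that workflow, not the authors' released code: it substitutes a synthetic imbalanced dataset, a single random forest, and placeholder feature names for the paper's real data and eight models.

```python
# Illustrative sketch only: dataset, model settings, and feature names are
# placeholders standing in for the paper's actual fraud data and eight models.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic, highly imbalanced "transactions" (placeholder for a real fraud dataset).
X, y = make_classification(n_samples=5000, n_features=10,
                           weights=[0.97, 0.03], random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# SHAP: global and per-transaction feature attributions (TreeExplainer for tree models).
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)  # per-feature contributions per prediction

# LIME: local surrogate explanation for one transaction.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=["legitimate", "fraud"],
                                      mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())  # (feature condition, weight) pairs for this prediction
```

The same two explainer calls can be repeated for each of the other classifiers (with `shap.KernelExplainer` or the generic `shap.Explainer` for non-tree models), which is how per-model feature contributions can be compared.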