Practical Explainable AI Using Python - Artificial Intelligence Model Explanations Using Python-Based Libraries, Extensions, and Frameworks

Learn the ins and outs of the decisions, biases, and reliability of AI algorithms, and how to make sense of their predictions. This book explores so-called black-box models to boost the adaptability, interpretability, and explainability of the decisions made by AI algorithms, using frameworks such as...


Bibliographic Details
Main Author: Mishra, Pradeepta
Format: eBook
Language: English
Published: Berkeley, CA : Apress, an imprint of Springer Nature, 2022
Edition: 1
ISBN: 9781484271575; 1484271572; 9781484271582; 1484271580
DOI: 10.1007/978-1-4842-7158-2


Table of Contents:
  • Title Page -- Introduction -- Table of Contents -- 1. Model Explainability and Interpretability -- 2. AI Ethics, Biasness, and Reliability -- 3. Explainability for Linear Models -- 4. Explainability for Non-Linear Models -- 5. Explainability for Ensemble Models -- 6. Explainability for Time Series Models -- 7. Explainability for NLP -- 8. AI Model Fairness Using a What-If Scenario -- 9. Explainability for Deep Learning Models -- 10. Counterfactual Explanations for XAI Models -- 11. Contrastive Explanations for Machine Learning -- 12. Model-Agnostic Explanations by Identifying Prediction Invariance -- 13. Model Explainability for Rule-Based Expert Systems -- 14. Model Explainability for Computer Vision -- Index
  • Intro -- Table of Contents -- About the Author -- About the Technical Reviewers -- Acknowledgments -- Introduction -- Chapter 1: Model Explainability and Interpretability -- Establishing the Framework -- Artificial Intelligence -- Need for XAI -- Explainability vs. Interpretability -- Explainability Types -- Tools for Model Explainability -- SHAP -- LIME -- ELI5 -- Skater -- Skope_rules -- Methods of XAI for ML -- XAI Compatible Models -- XAI Meets Responsible AI -- Evaluation of XAI -- Conclusion -- Chapter 2: AI Ethics, Biasness, and Reliability -- AI Ethics Primer -- Biasness in AI -- Data Bias -- Algorithmic Bias -- Bias Mitigation Process -- Interpretation Bias -- Training Bias -- Reliability in AI -- Conclusion -- Chapter 3: Explainability for Linear Models -- Linear Models -- Linear Regression -- VIF and the Problems It Can Generate -- Final Model -- Model Explainability -- Trust in ML Model: SHAP -- Local Explanation and Individual Predictions in a ML Model -- Global Explanation and Overall Predictions in ML Model -- LIME Explanation and ML Model -- Skater Explanation and ML Model -- ELI5 Explanation and ML Model -- Logistic Regression -- Interpretation -- LIME Inference -- Conclusion -- Chapter 4: Explainability for Non-Linear Models -- Non-Linear Models -- Decision Tree Explanation -- Data Preparation for the Decision Tree Model -- Creating the Model -- Decision Tree - SHAP -- Partial Dependency Plot -- PDP Using Scikit-Learn -- Non-Linear Model Explanation - LIME -- Non-Linear Explanation - Skope-Rules -- Conclusion -- Chapter 5: Explainability for Ensemble Models -- Ensemble Models -- Types of Ensemble Models -- Why Ensemble Models? -- Using SHAP for Ensemble Models -- Using the Interpret Explaining Boosting Model -- Ensemble Classification Model: SHAP -- Using SHAP to Explain Categorical Boosting Models -- Using SHAP Multiclass Categorical Boosting Model -- Using SHAP for Light GBM Model Explanation -- Conclusion -- Chapter 6: Explainability for Time Series Models -- Time Series Models -- Knowing Which Model Is Good -- Strategy for Forecasting -- Confidence Interval of Predictions -- What Happens to Trust? -- Time Series: LIME -- Conclusion -- Chapter 7: Explainability for NLP -- Natural Language Processing Tasks -- Explainability for Text Classification -- Dataset for Text Classification -- Explaining Using ELI5 -- Calculating the Feature Weights for Local Explanation -- Local Explanation Example 1 -- Local Explanation Example 2 -- Local Explanation Example 3 -- Explanation After Stop Word Removal -- N-gram-Based Text Classification -- Multi-Class Label Text Classification Explainability -- Local Explanation Example 1 -- Local Explanation Example 2 -- Local Explanation Example 1 -- Conclusion -- Chapter 8: AI Model Fairness Using a What-If Scenario -- What Is the WIT? -- Installing the WIT -- Evaluation Metric -- Conclusion -- Chapter 9: Explainability for Deep Learning Models -- Explaining DL Models -- Using SHAP with DL -- Using Deep SHAP -- Using Alibi -- SHAP Explainer for Deep Learning -- Another Example of Image Classification -- Using SHAP -- Deep Explainer for Tabular Data -- Conclusion -- Chapter 10: Counterfactual Explanations for XAI Models -- What Are CFEs? -- Implementation of CFEs -- CFEs Using Alibi -- Counterfactual for Regression Tasks -- Conclusion -- Chapter 11: Contrastive Explanations for Machine Learning -- What Is CE for ML? -- CEM Using Alibi -- Comparison of an Original Image vs. an Autoencoder-Generated Image -- CEM for Tabular Data Explanations -- Conclusion -- Chapter 12: Model-Agnostic Explanations by Identifying Prediction Invariance -- What Is Model Agnostic? -- What Is an Anchor? -- Anchor Explanations Using Alibi -- Anchor Text for Text Classification -- Anchor Image for Image Classification -- Conclusion -- Chapter 13: Model Explainability for Rule-Based Expert Systems -- What Is an Expert System? -- Backward and Forward Chaining -- Rule Extraction Using Scikit-Learn -- Need for a Rule-Based System -- Challenges of an Expert System -- Conclusion -- Chapter 14: Model Explainability for Computer Vision -- Why Explainability for Image Data? -- Anchor Image Using Alibi -- Integrated Gradients Method -- Conclusion -- Index