Interpreting Machine Learning Models - Learn Model Interpretability and Explainability Methods
Understand model interpretability methods and apply the most suitable one for your machine learning project. This book details the concepts of machine learning interpretability along with different types of explainability algorithms. You'll begin by reviewing the theoretical aspects of machine learning interpretability.
| Main Author | Nandi, Anirban |
|---|---|
| Format | eBook |
| Language | English |
| Published | Berkeley, CA : Apress, an imprint of Springer Nature, 2022 |
| Edition | 1 |
| Subjects | Machine learning; Data mining; Python (Computer program language) |
| Online Access | Get full text |
| ISBN | 9781484278017 1484278011 9781484278024 148427802X |
| DOI | 10.1007/978-1-4842-7802-4 |
| Abstract | Understand model interpretability methods and apply the most suitable one for your machine learning project. This book details the concepts of machine learning interpretability along with different types of explainability algorithms. You'll begin by reviewing the theoretical aspects of machine learning interpretability. In the first few sections you'll learn what interpretability is, what the common properties of interpretability methods are, the general taxonomy for classifying methods into different sections, and how the methods should be assessed in terms of human factors and technical requirements. Using a holistic approach featuring detailed examples, this book also includes quotes from actual business leaders and technical experts to showcase how real-life users perceive interpretability and its related methods, goals, stages, and properties. Progressing through the book, you'll dive deep into the technical details of the interpretability domain. Starting off with the general frameworks of different types of methods, you'll use a data set to see how each method generates output with actual code and implementations. These methods are divided into different types based on their explanation frameworks, with some common categories being feature importance based methods, rule-based methods, saliency map methods, counterfactuals, and concept attribution. The book concludes by showing how data affects interpretability and some of the pitfalls prevalent when using explainability methods. What You'll Learn: * Understand machine learning model interpretability * Explore the different properties and selection requirements of various interpretability methods * Review the different types of interpretability methods used in real life by technical experts * Interpret the output of various methods and understand the underlying problems. Who This Book Is For: Machine learning practitioners, data scientists, and statisticians interested in making machine learning models interpretable and explainable; academic students pursuing courses in data science and business analytics. |
|---|---|
| Author | Nandi, Anirban; Pal, Aditya Kumar |
| ContentType | eBook |
| Copyright | © 2022 Anirban Nandi and Aditya Kumar Pal |
| DEWEY | 006.31 |
| DOI | 10.1007/978-1-4842-7802-4 |
| DatabaseName | Perlego |
| Discipline | Computer Science |
| EISBN | 1523151005 9781523151004 9781484278024 148427802X |
| Edition | 1 |
| Genre | Electronic books |
| ISBN | 9781484278017 1484278011 9781484278024 148427802X |
| IsPeerReviewed | false |
| IsScholarly | false |
| LCCallNum | QA76.9.D343 .N36 2021 |
| LCCallNum_Ident | Q325.5-.7 |
| Language | English |
| OCLC | 1291314155 |
| PQID | EBC6838910 |
| PageCount | 1 online resource |
| PublicationDate | 2022 |
| PublicationPlace | Berkeley, CA |
| PublicationYear | 2022 |
| Publisher | Apress, an imprint of Springer Nature |
| SourceID | skillsoft askewsholts springer proquest perlego knovel |
| SourceType | Aggregation Database Publisher |
| SubjectTerms | Computer Science Data mining General References Machine Learning Professional and Applied Computing Python Python (Computer program language) Software Engineering |
| SubjectTermsDisplay | Data mining. Electronic books. |
| TableOfContents | Title Page
Introduction
Table of Contents
1. The Evolution of Machine Learning
2. Introduction to Model Interpretability
3. Machine Learning Interpretability Taxonomy
4. Common Properties of Explanations Generated by Interpretability Methods
5. Human Factors in Model Interpretability
6. Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches
7. Interpretable ML and Explainable ML Differences
8. The Framework of Model Explanations
9. Feature Importance Methods: Details and Usage Examples
10. Detailing Rule-Based Methods
11. Detailing Counterfactual Methods
12. Detailing Image Interpretability Methods
13. Explaining Text Classification Models
14. The Role of Data in Interpretability
15. The Eight Pitfalls of Explainability Methods
Conclusion
References
Index
Detailed contents: Intro -- Table of Contents -- About the Authors -- About the Technical Reviewers -- Acknowledgments -- Introduction
Chapter 1: The Evolution of Machine Learning -- Defining Machine Learning -- The Evolution of Machine Learning -- Learning a Machine Learning Algorithm -- Piece It Together -- Focus on Specific Algorithm Descriptions -- Design an Algorithm Description Template -- Start Small and Build It Up -- Investigating Machine Learning Algorithm Behavior -- Step 1. Select an Algorithm -- Step 2. Identify a Question -- Step 3. Design the Experiment -- Step 4. Execute the Experiment and Report Results -- Step 5. Repeat -- What Does Machine Learning Model Accuracy Mean? -- Why Model Accuracy Is Not Enough -- Summary
Chapter 2: Introduction to Model Interpretability -- Humans Are Explanation Hungry -- Explanations in Machine Learning -- What Are Black-Box Models? -- What Is Interpretability? -- The Motivation Behind Interpretability -- To Make Better Decisions -- To Eliminate Bias -- To Justify Processes -- To Reproduce Operations -- Displacement Strategy -- To Determine Practical Accuracy -- To Maintain Privacy -- To Understand Security Risks -- The Research Behind Interpretability -- Summary
Chapter 3: Machine Learning Interpretability Taxonomy -- Scope-related Types of Post hoc Model Interpretability -- Global Model Interpretability on a Holistic Level -- Local Model Interpretability -- A Group of Predictions -- Model-related Types of Post hoc Model Interpretability -- Result-related Types of Post hoc Model Interpretability -- Categorizing Common Classes of Explainability Methods -- Summary
Chapter 4: Common Properties of Explanations Generated by Interpretability Methods -- Explanation Defined -- Properties of Explanation Methods -- Template of Expression -- Transparency -- Mobility -- Algorithmic Feasibility -- Properties of Individual Explanations -- Correctness -- Loyalty -- Dependability -- Resoluteness -- Lucidness -- Reliability -- Significance -- Originality -- Representativeness -- Human-Friendly Explanations -- Contrastiveness -- Selectivity -- Social -- Focus on the Abnormal -- Truthful -- Consistent with Prior Beliefs -- General and Probable -- Summary
Chapter 5: Human Factors in Model Interpretability -- Interpretability Roles -- Technical Expertise Builders -- Domain Knowledge Reviewers -- Stakeholders or End Users -- Interpretability Stages -- Ideation and Conceptualization Stage -- Building and Validation Stage -- Deployment, Maintenance, and Use Stage -- Interpretability Goals -- Interpretability for Model Validation and Improvement -- Interpretability for Decision-Making and Knowledge Discovery -- Interpretability to Gain Confidence and Obtain Trust -- Human-Friendly Themes Characterizing Interpretability Work -- Interpretability Is Cooperative -- Interpretability Is Process -- Interpretability Is a Mental Model Comparison -- Interpretability Is Context-Dependent -- Design Opportunities for Interpretability Challenges -- Identifying, Representing, and Integrating Human Expectations -- Communicating and Summarizing Model Behavior -- Scalable and Integrable Interpretability Tools -- Post-Deployment Support -- Summary
Chapter 6: Explainability Facts: A Framework for Systematic Assessment of Explainable Approaches -- Explainability Facts List Dimensions -- Functional Requirements -- F1: Problem Supervision Level -- F2: Problem Type -- F3: Explanation Target -- F4: Explanation Breadth/Scope -- F5: Computational Complexity -- F6: Applicable Model Class -- F7: Relation to the Predictive System -- F8: Compatible Feature Types -- F9: Caveats and Assumptions -- Operational Requirements -- O1: Explanation Family -- O2: Explanatory Medium -- O3: System Interaction -- O4: Explanation Domain -- O5: Data and Model Transparency -- O6: Explanation Audience -- O7: Function of the Explanation -- O8: Causality vs. Actionability -- O9: Trust vs. Performance -- Usability Requirements -- U1: Soundness -- U2: Completeness -- U3: Contextfullness -- U4: Interactiveness -- U5: Actionability -- U6: Novelty -- U7: Complexity -- U8: Personalization -- Safety Requirements -- S1: Information Leakage -- S2: Explanation Misuse -- S3: Explanation Invariance -- Validation Requirements -- Summary
Chapter 7: Interpretable ML and Explainable ML Differences -- Interpretable ML and Explainable ML Basics -- Analyzing the Decision Tree -- Digging Deeper -- Key Issues with Explainable ML -- Trade-offs Between Accuracy and Interpretability -- Beware of the Unfaithful -- Not Enough Detail -- Key Issues with Interpretable ML -- Profits vs. Losses -- Efforts to Construct -- Hidden Patterns -- Explanatory and Predictive Modeling -- Explaining or Predicting: The Key Differences Between Two Choices -- Validation, Model Evaluation, and Model Selection -- Validation -- Model Selection -- Model Use and Reporting Explanatory Models -- Summary
Chapter 8: The Framework of Model Explanations -- Data Sets at a Glance -- Types of Frameworks for Tabular Data -- Feature Importance (FI) -- Predictive Power of Feature Subsets -- Additive Importance Measures -- Removal-based Explanations for Feature Importance -- Feature Removal -- Explaining Different Model Behaviors -- Summarizing Feature Influence -- Rule-based Explanations -- Prototypes -- Counterfactuals -- Explanations for Image Data -- Saliency Maps -- Concept Attribution -- Text Data -- Sentence Highlighting -- Attention-based Methods -- Summary
Chapter 9: Feature Importance Methods: Details and Usage Examples -- Data Set Name -- Abstract -- Sources -- Data Set Information -- Attribute Information -- Random Forest Feature Importance -- Accuracy-based Importance -- Gini-based Importance -- Permutation Feature Importance -- Advantages -- Disadvantages -- Code -- SHAP -- Property 1 (Local Accuracy) -- Property 2 (Missingness) -- Property 3 (Consistency) -- SAGE -- How SHAP and SAGE Are Related -- LIME -- FACET -- Model Inspection -- Model Simulation -- Enhanced Machine Learning Workflow -- Code -- Synergy -- Redundancy -- Partial Dependence Plots (PDP) -- Code -- Individual Conditional Expectation -- DALEX -- Introduction to Instance-level Exploration -- Breakdown Plots for Additive Attributions -- Breakdown Plots for Interactions -- Ceteris Paribus Profiles -- Local Diagnostics Plots -- Implementation Example of DALEX on the Titanic Data Set -- Create a Pipeline Model -- Predict-level Explanations -- predict -- predict_parts -- predict_profile -- Model-level Explanations -- model_performance -- model_parts -- model_profile -- Summary
Chapter 10: Detailing Rule-Based Methods -- MAGIE (Model-Agnostic Global Interpretable Explanations) -- MAGIE Algorithm Approach -- Preprocessing the Input Data -- Generating Instance Level Conditions -- Learning Rules from Conditions -- Postprocessing Rules -- Sorting Rules by Mutual Information -- GLocaLX -- Local to Global Explanation Problem -- Local to Global Hierarchy of Explanation Theories -- Finding Similar Theories -- Code -- Output -- Skope-Rules -- Methodology -- Implementation -- Anchors -- Finding Anchors -- Advantages -- Disadvantages -- Getting an Anchor -- Summary
Chapter 11: Detailing Counterfactual Methods -- Counterfactual Explanations -- Use Case 1: Banking Software -- Use Case 2: Continuous Outcome -- Counterfactual Explanations at a Glance -- Generating Counterfactual Explanations -- Counterfactual Guided by Prototypes -- DiCE -- MOC (Multi-Objective Counterfactuals) -- Comparison Between the Algorithms -- DiCE -- Diversity and Feasibility Constraints -- Proximity -- Sparsity -- Optimization -- Advantages -- Disadvantages -- Summary
Chapter 12: Detailing Image Interpretability Methods -- Image Interpretation Using LIME -- Step 1. Generate Random Perturbations for Input Image -- Step 2. Predict Class for Perturbations -- Step 3. Compute Weights (Importance) For the Perturbations -- Step 4. Fit an Explainable Linear Model Using the Perturbations, Predictions, and Weights -- Image Interpretation Using Pixel Attribution (Saliency Maps) -- Image Interpretation Using Class Activation Maps -- Step 1. Modify the Model -- Step 2. Retrain the Model with CAMLogger Callback -- Step 3. Use CAMLogger to See the Class Activation Map -- Step 4. Draw Conclusions from the CAM -- Image Interpretation Using Gradient-Weighted Class Activation Maps -- Summary
Chapter 13: Explaining Text Classification Models -- Data Preprocessing, Feature Engineering, and Logistic Regression Model on the Data -- Interpreting Text Predictions with LIME -- Interpreting Text Predictions with SHAP -- Explaining Text Models with Sentence Highlighting -- Summary
Chapter 14: The Role of Data in Interpretability -- Summary
Chapter 15: The Eight Pitfalls of Explainability Methods -- Assuming One-Fits-All Interpretability -- Bad Model Generalization -- Unnecessary Use of Complex Models -- Ignoring Feature Dependence -- Interpretation with Extrapolation -- Confusing Linear Correlation with General Dependence -- Misunderstanding Conditional Interpretation -- Misleading Interpretations Due to Feature Interactions -- Misleading Feature Effects Due to Aggregation -- Failing to Separate Main from Interaction Effects -- Ignoring Model and Approximation Uncertainty -- Failure to Scale to High-Dimensional Settings -- Human-Intelligibility of High-Dimensional IML Output |
| Title | Interpreting Machine Learning Models - Learn Model Interpretability and Explainability Methods |
| URI | https://app.knovel.com/hotlink/toc/id:kpIMLMLMI3/interpreting-machine/interpreting-machine?kpromoter=Summon https://www.perlego.com/book/4514149/interpreting-machine-learning-models-learn-model-interpretability-and-explainability-methods-pdf https://ebookcentral.proquest.com/lib/[SITE_ID]/detail.action?docID=6838910 http://link.springer.com/10.1007/978-1-4842-7802-4 https://www.vlebooks.com/vleweb/product/openreader?id=none&isbn=9781484278024 |