Explainable Artificial Intelligence

From Simple Predictors to Complex Generative Models

Spring 2023, Harvard University


Overview:

As machine learning models are increasingly employed to aid critical decision making in high-stakes domains such as healthcare, finance, and law, it becomes important to ensure that relevant stakeholders can understand the behavior of these models. Such an understanding helps determine if, when, and how much to rely on the outputs these models generate. This graduate-level course aims to familiarize students with recent advances in the emerging field of eXplainable Artificial Intelligence (XAI). In this course, we will review seminal position papers in the field, understand the notion of explainability from the perspective of different end users (e.g., doctors, ML researchers/engineers), discuss in detail different classes of interpretable models and post hoc explanations (e.g., rule-based and prototype-based models, feature attributions, counterfactual explanations, mechanistic interpretability), and explore the connections between explainability and fairness, robustness, and privacy. The course will also cover the latest research on understanding large language models (e.g., GPT-3) and diffusion models (e.g., DALL-E 2), and highlight the unique opportunities and challenges that arise when interpreting the behavior of such large generative models.

Prerequisites:

Students are expected to be fluent in basic linear algebra, probability, algorithms, and machine learning. Students are also expected to have programming and software engineering skills to work with data sets using Python, numpy, and sklearn.
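
As a rough self-check of the assumed fluency (not part of the coursework), the sketch below loads a dataset, fits a classifier, and evaluates it with sklearn. The specific dataset and model are placeholders chosen for illustration, not course requirements.

    # Rough self-check of the assumed Python/numpy/sklearn fluency:
    # load a dataset, fit a classifier, and evaluate it.
    # The dataset and model here are placeholders, not course requirements.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    print("test accuracy:", accuracy_score(y_test, model.predict(X_test)))
    print("largest-magnitude coefficients:", np.argsort(np.abs(model.coef_[0]))[-5:])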

Feedback:

Please use this form to provide any feedback and suggestions about the course.


Course Staff



Hima Lakkaraju
Webpage | Twitter

Ike Lage

Jiaqi Ma
Webpage | Twitter

Suraj Srinivas
Webpage | Twitter


Schedule



Week Topic Readings

Week 1

Introduction

Doshi-Velez and Kim, 2017, Towards a Rigorous Science of Interpretable Machine Learning

Weller, 2019, Transparency: Motivation and Challenges
Lipton, 2017, The Mythos of Model Interpretability

[ Slides 1 | Slides 2 ]


Week 2

Human Factors in Explainability

Hong et al., 2020, Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Kaur et al., 2020, Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning

Lage et al., 2019, Human Evaluation of Models Built for Interpretability
Poursabzi-Sangdeh et al., 2021, Manipulating and Measuring Model Interpretability

[ Slides 1 | Slides 2 ]


Week 3

Inherently Interpretable Models

Letham and Rudin, 2015, Interpretable Classifiers Using Rules and Bayesian Analysis
Lakkaraju et al., 2016, Interpretable Decision Sets

Caruana et al., 2015, Intelligible Models for Healthcare
Li et al., 2017, Deep Learning for Case-Based Reasoning Through Prototypes

[ Slides 1 | Slides 2 ]

Additional Readings:
Ustun and Rudin, 2019, Learning Optimized Risk Scores
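
Illustrative sketch (not from the readings): a shallow decision tree is one simple instance of an inherently interpretable model, since its learned rules can be printed and inspected directly. The dataset and depth below are arbitrary choices; the assigned papers cover richer model classes (rule lists, decision sets, GAMs, prototype networks).

    # A shallow decision tree as a simple inherently interpretable model:
    # the learned decision rules can be printed and read directly.
    from sklearn.datasets import load_breast_cancer
    from sklearn.tree import DecisionTreeClassifier, export_text

    data = load_breast_cancer()
    tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)
    print(export_text(tree, feature_names=list(data.feature_names)))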


Week 4

Post hoc Explanations: Feature Attributions

Ribeiro et al., 2016, Why Should I Trust You? Explaining the Predictions of Any Classifier
Lundberg and Lee, 2017, A Unified Approach to Interpreting Model Predictions

Smilkov et al., 2017, SmoothGrad: Removing noise by adding noise
Sundararajan et al., 2017, Axiomatic Attribution for Deep Networks

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Shrikumar et al., 2019, Learning Important Features through Propagating Activation Differences
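
Illustrative sketch: the core idea behind local surrogate explanations such as LIME (Ribeiro et al., 2016) can be written in a few lines of numpy/sklearn. This is a deliberately simplified from-scratch version, not the lime package, and the perturbation scale and kernel width are arbitrary choices.

    # Simplified LIME-style local surrogate (not the lime library): perturb an
    # input, weight the perturbations by proximity, and fit a weighted linear
    # model whose coefficients act as local feature attributions.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import Ridge

    X, y = load_breast_cancer(return_X_y=True)
    black_box = RandomForestClassifier(random_state=0).fit(X, y)

    x = X[0]                                   # instance to explain
    rng = np.random.default_rng(0)
    scale = X.std(axis=0)
    Z = x + rng.normal(0.0, scale, size=(1000, X.shape[1]))   # local perturbations
    probs = black_box.predict_proba(Z)[:, 1]                  # black-box outputs

    # Proximity kernel: perturbations closer to x get larger weight.
    dists = np.linalg.norm((Z - x) / scale, axis=1)
    weights = np.exp(-(dists ** 2) / (2 * 5.0 ** 2))

    surrogate = Ridge(alpha=1.0).fit(Z, probs, sample_weight=weights)
    attributions = surrogate.coef_                            # local importances
    print("top features:", np.argsort(np.abs(attributions))[-5:])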


Week 5

Pitfalls, Challenges, and Evaluation of Feature Attributions

Slack and Hilgard et al., 2020, Fooling LIME and SHAP
Dombrowski et al., 2019, Explanations can be manipulated and geometry is to blame

Adebayo et al., 2018, Sanity Checks for Saliency Maps
Agarwal et al., 2023, OpenXAI: Towards a Transparent Evaluation of Model Explanations

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Krishna and Han et al., 2023, The Disagreement Problem in Explainable Machine Learning
Rudin, 2019, Stop Explaining Black Box Models
Chen and Subhash et al., 2022, What Makes a Good Explanation? A Harmonized View of Properties of Explanations
Chen et al., 2022, Use-Case-Grounded Simulations for Explanation Evaluation
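
Illustrative sketch of the model-parameter-randomization idea from Adebayo et al. (2018): an explanation that barely changes when the model's learned weights are replaced with random ones cannot be reflecting the model. The "gradient x input" attribution for a linear model used below is a toy stand-in for the saliency methods studied in the paper.

    # Toy parameter-randomization sanity check: compare attributions from a
    # trained model against the same attribution computed with random weights.
    # A faithful explanation should change substantially.
    import numpy as np
    from scipy.stats import spearmanr
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    X, y = load_breast_cancer(return_X_y=True)
    model = LogisticRegression(max_iter=5000).fit(X, y)

    x = X[0]
    attr_trained = model.coef_[0] * x        # gradient-of-logit x input (linear model)

    rng = np.random.default_rng(0)
    random_coef = rng.normal(size=model.coef_[0].shape)
    attr_random = random_coef * x            # same "explanation" with randomized weights

    rho, _ = spearmanr(np.abs(attr_trained), np.abs(attr_random))
    print("rank correlation after weight randomization:", rho)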


Week 6

Counterfactual Explanations (or) Algorithmic Recourse

Wachter et al., 2018, Counterfactual Explanations Without Opening the Black Box
Karimi et al., 2020, Algorithmic Recourse: From Counterfactual Explanations to Interventions

Upadhyay et al., 2021, Towards Robust and Reliable Algorithmic Recourse
Pawelczyk et al., 2022, Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Pawelczyk et al., 2020, Learning Model-Agnostic Counterfactual Explanations for Tabular Data
Rawal et al., 2020, Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
Ustun et al., 2019, Actionable Recourse in Linear Classification
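
Illustrative sketch of a Wachter-style counterfactual search, assuming a logistic regression model so that gradients are available in closed form. For numerical stability the sketch targets the model's logit rather than the probability, and the target, penalty weight, step size, and iteration count are arbitrary choices; practical recourse methods add the actionability and robustness considerations covered in the readings.

    # Gradient-descent search for a counterfactual x_cf that flips a logistic
    # regression's prediction while staying close to the original input x0:
    # minimize (logit(x_cf) - target_logit)^2 + lam * ||x_cf - x0||^2.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    X, y = load_breast_cancer(return_X_y=True)
    X = StandardScaler().fit_transform(X)
    model = LogisticRegression(max_iter=5000).fit(X, y)
    w, b = model.coef_[0], model.intercept_[0]

    x0 = X[model.predict(X) == 0][0]         # an instance currently predicted as class 0
    x_cf = x0.copy()
    target_logit = np.log(0.9 / 0.1)         # aim for ~90% predicted probability of class 1
    lam, lr = 0.1, 0.005                     # proximity weight and step size (arbitrary)

    for _ in range(4000):
        logit = w @ x_cf + b
        grad = 2 * (logit - target_logit) * w + 2 * lam * (x_cf - x0)
        x_cf = x_cf - lr * grad

    print("original class probability:", model.predict_proba(x0.reshape(1, -1))[0, 1])
    print("counterfactual class probability:", model.predict_proba(x_cf.reshape(1, -1))[0, 1])
    print("features changed most:", np.argsort(np.abs(x_cf - x0))[-5:])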


Week 7

Attention and Concept Based Explanations

Mullenbach et al., 2018, Explainable Prediction of Medical Codes from Clinical Text
Jain and Wallace, 2019, Attention is not Explanation

Bau and Zhou et al., 2017, Network Dissection: Quantifying Interpretability of Deep Visual Representations
Kim et al., 2018, Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
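
Illustrative sketch of the concept activation vector (CAV) idea from Kim et al. (2018): fit a linear classifier separating "concept" activations from random activations, take the normal to its decision boundary as the CAV, and score how often the gradient of a class logit points along it. Random vectors stand in here for the layer activations and gradients that a real network would provide.

    # TCAV-style sketch on synthetic data: the CAV is the normal to a linear
    # boundary separating concept activations from random ones; the TCAV score
    # is the fraction of examples whose class-logit gradient has a positive
    # directional derivative along the CAV.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    d = 64                                               # width of a hypothetical layer
    concept_acts = rng.normal(1.0, 1.0, size=(200, d))   # activations of concept examples
    random_acts = rng.normal(0.0, 1.0, size=(200, d))    # activations of random examples

    A = np.vstack([concept_acts, random_acts])
    labels = np.array([1] * 200 + [0] * 200)
    cav = LogisticRegression(max_iter=1000).fit(A, labels).coef_[0]
    cav /= np.linalg.norm(cav)

    # Gradients of the class logit w.r.t. layer activations, one per test example
    # (placeholders here; in practice these come from backprop through the network).
    grads = rng.normal(size=(100, d))
    tcav_score = float(np.mean(grads @ cav > 0))
    print("TCAV score:", tcav_score)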


Week 8

Data Attribution and Interactive Explanations

Koh and Liang, 2017, Understanding Black Box Predictions via Influence Functions
Ghorbani et al., 2019, What is your data worth? Equitable Valuation of Data

Ghai et al., 2020, Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers
Slack et al., 2022, TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
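
Illustrative sketch: the quantity that influence functions (Koh and Liang, 2017) and data valuation methods approximate can be computed directly, if expensively, by leave-one-out retraining. The subsampling below only keeps retraining cheap; it is not part of either method.

    # Brute-force leave-one-out data attribution: retrain without each training
    # point and record the change in test loss. Influence functions and data
    # valuation methods approximate this quantity far more efficiently.
    import numpy as np
    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import log_loss
    from sklearn.model_selection import train_test_split

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    X_train, y_train = X_train[:100], y_train[:100]      # keep retraining cheap

    def test_loss(Xtr, ytr):
        model = LogisticRegression(max_iter=5000).fit(Xtr, ytr)
        return log_loss(y_test, model.predict_proba(X_test))

    base = test_loss(X_train, y_train)
    scores = np.array([
        test_loss(np.delete(X_train, i, axis=0), np.delete(y_train, i)) - base
        for i in range(len(X_train))
    ])
    print("most influential training points:", np.argsort(-np.abs(scores))[:5])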


Week 9

Theory of Explainability and Interpreting Generative Models

Covert et al., 2021, Explaining by Removing: A Unified Framework for Model Explanation
Han et al., 2022, Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations

Shen et al., 2020, Interpreting the Latent Space of GANs for Semantic Face Editing
Härkönen et al., 2020, GANSpace: Discovering Interpretable GAN Controls

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Li et al., 2021, A Learning Theoretic Perspective on Local Explainability


Week 10

Connections with Robustness, Privacy, Fairness, and Unlearning

Shah et al., 2021, Do Input Gradients Highlight Discriminative Features?
Pawelczyk et al., 2023, On the Privacy Risks of Algorithmic Recourse

Begley et al., 2020, Explainability for Fair Machine Learning
Krishna et al., 2023, Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Dai et al., 2022, Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations


Week 11

Mechanistic Interpretability and Compiled Transformers

Olah et al., 2020, Zoom In: An Introduction to Circuits
Olah, 2022, Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases

Lindner et al., 2023, Tracr: Compiled Transformers as a Laboratory for Interpretability

[ Slides 1 | Slides 2 ]


Week 12

Understanding and Reasoning in Large Language Models

Wei et al., 2022, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Lampinen et al., 2022, Can language models learn from explanations in context?

Rajani et al., 2019, Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Yin et al., 2022, Interpreting Language Models with Contrastive Explanations

[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Additional Readings:
Bills et al., 2023, Language Models can Explain Neurons in Language Models


Week 13

Understanding and Reasoning in Other Large Models

McGrath et al., 2022, Acquisition of Chess Knowledge in AlphaZero

Tang et al., 2022, What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Cho et al., 2022, DALL-EVAL: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Models

[ Slides 1 | Slides 2 | Slides 3 ]







    © Hima Lakkaraju 2023