Week 1 | Introduction
Doshi-Velez and Kim, 2017, Towards a Rigorous Science of Interpretable Machine Learning
Weller, 2019, Transparency: Motivation and Challenges
Lipton, 2017, The Mythos of Model Interpretability
[ Slides 1 | Slides 2 ]

Week 2 | Human Factors in Explainability
Hong et al., 2020, Human Factors in Model Interpretability: Industry Practices, Challenges, and Needs
Kaur et al., 2020, Interpreting Interpretability: Understanding Data Scientists’ Use of Interpretability Tools for Machine Learning
Lage et al., 2019, Human Evaluation of Models Built for Interpretability
Poursabzi-Sangdeh et al., 2021, Measuring and Manipulating Model Interpretability
[ Slides 1 | Slides 2 ]

Week 3 | Inherently Interpretable Models
Letham and Rudin, 2015, Interpretable Classifiers Using Rules and Bayesian Analysis
Lakkaraju et al., 2016, Interpretable Decision Sets
Caruana et al., 2015, Intelligible Models for Healthcare
Li et al., 2017, Deep Learning for Case-Based Reasoning Through Prototypes
[ Slides 1 | Slides 2 ]
Additional Readings:
Ustun and Rudin, 2019, Learning Optimized Risk Scores

Week 4 | Post hoc Explanations: Feature Attributions
Ribeiro et al., 2016, Why Should I Trust You? Explaining the Predictions of Any Classifier
Lundberg and Lee, 2017, A Unified Approach to Interpreting Model Predictions
Smilkov et al., 2017, SmoothGrad: Removing noise by adding noise
Sundararajan et al., 2017, Axiomatic Attribution for Deep Networks
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Shrikumar et al., 2019, Learning Important Features through Propagating Activation Differences

Week 5 | Pitfalls, Challenges, and Evaluation of Feature Attributions
Slack and Hilgard et al., 2020, Fooling LIME and SHAP: Adversarial Attacks on Post hoc Explanation Methods
Dombrowski et al., 2019, Explanations can be manipulated and geometry is to blame
Adebayo et al., 2018, Sanity Checks for Saliency Maps
Agarwal et al., 2023, OpenXAI: Towards a Transparent Evaluation of Model Explanations
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Krishna and Han et al., 2023, The Disagreement Problem in Explainable Machine Learning
Rudin, 2019, Stop Explaining Black Box Models
Chen and Subhash et al., 2022, What Makes a Good Explanation? A Harmonized View of Properties of Explanations
Chen et al., 2022, Use-Case-Grounded Simulations for Explanation Evaluation

Week 6 | Counterfactual Explanations (or) Algorithmic Recourse
Wachter et al., 2018, Counterfactual Explanations Without Opening the Black Box
Karimi et al., 2020, Algorithmic Recourse: From Counterfactual Explanations to Interventions
Upadhyay et al., 2021, Towards Robust and Reliable Algorithmic Recourse
Pawelczyk et al., 2022, Probabilistically Robust Recourse: Navigating the Trade-offs between Costs and Robustness in Algorithmic Recourse
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Pawelczyk et al., 2020, Learning Model-Agnostic Counterfactual Explanations for Tabular Data
Rawal et al., 2020, Beyond Individualized Recourse: Interpretable and Interactive Summaries of Actionable Recourses
Ustun et al., 2019, Actionable Recourse in Linear Classification

Week 7 | Attention and Concept-Based Explanations
Mullenbach et al., 2018, Explainable Prediction of Medical Codes from Clinical Text
Jain and Wallace, 2019, Attention is not Explanation
Bau and Zhou et al., 2017, Network Dissection: Quantifying Interpretability of Deep Visual Representations
Kim et al., 2018, Interpretability Beyond Feature Attribution: Quantitative Testing with Concept Activation Vectors
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Week 8 | Data Attribution and Interactive Explanations
Koh and Liang, 2017, Understanding Black Box Predictions via Influence Functions
Ghorbani et al., 2019, What is your data worth? Equitable Valuation of Data
Ghai et al., 2020, Explainable Active Learning (XAL): Toward AI Explanations as Interfaces for Machine Teachers
Slack et al., 2022, TalkToModel: Explaining Machine Learning Models with Interactive Natural Language Conversations
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]

Week 9 | Theory of Explainability and Interpreting Generative Models
Covert et al., 2021, Explaining by Removing: A Unified Framework for Model Explanation
Han et al., 2022, Which Explanation Should I Choose? A Function Approximation Perspective to Characterizing Post hoc Explanations
Shen et al., 2020, Interpreting the Latent Space of GANs for Semantic Face Editing
Härkönen et al., 2020, GANSpace: Discovering Interpretable GAN Controls
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Li et al., 2021, A Learning Theoretic Perspective on Local Explainability

Week 10 | Connections with Robustness, Privacy, Fairness, and Unlearning
Shah et al., 2021, Do Input Gradients Highlight Discriminative Features?
Pawelczyk et al., 2023, On the Privacy Risks of Algorithmic Recourse
Begley et al., 2020, Explainability for Fair Machine Learning
Krishna et al., 2023, Towards Bridging the Gaps between the Right to Explanation and the Right to be Forgotten
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Dai et al., 2022, Fairness via Explanation Quality: Evaluating Disparities in the Quality of Post hoc Explanations

Week 11 | Mechanistic Interpretability and Compiled Transformers
Olah, 2020, An Introduction to Circuits
Olah, 2022, Mechanistic Interpretability, Variables, and the Importance of Interpretable Bases
Lindner et al., 2023, Tracr: Compiled Transformers as a Laboratory for Interpretability
[ Slides 1 | Slides 2 ]

Week 12 | Understanding and Reasoning in Large Language Models
Wei et al., 2022, Chain-of-Thought Prompting Elicits Reasoning in Large Language Models
Lampinen et al., 2022, Can language models learn from explanations in context?
Rajani et al., 2019, Explain Yourself! Leveraging Language Models for Commonsense Reasoning
Yin et al., 2022, Interpreting Language Models with Contrastive Explanations
[ Slides 1 | Slides 2 | Slides 3 | Slides 4 ]
Additional Readings:
Bills et al., 2023, Language Models can Explain Neurons in Language Models

Week 13 | Understanding and Reasoning in Other Large Models
McGrath et al., 2022, Acquisition of Chess Knowledge in AlphaZero
Tang et al., 2022, What the DAAM: Interpreting Stable Diffusion Using Cross Attention
Cho et al., 2022, DALL-EVAL: Probing the Reasoning Skills and Social Biases of Text-to-Image Generative Models
[ Slides 1 | Slides 2 | Slides 3 ]