Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.
Explainable AI
Surrogate Explainers
Critique
Paper
Description
Rudin (2019) argues that, instead of trying to explain black-box models after the fact, we should focus on designing inherently interpretable models. In her view, the trade-off between (intrinsic) interpretability and predictive performance is not as clear-cut as is often claimed.
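A minimal sketch of the kind of comparison behind that claim (not from the paper; it assumes scikit-learn and uses the built-in breast cancer dataset purely for illustration): an interpretable model such as logistic regression is compared against a black-box gradient boosting model, and on many tabular problems the accuracy gap is small.

```python
# Sketch only: compare an interpretable model with a black-box model
# on a tabular dataset. Results depend on the data; this is illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)

# Interpretable: logistic regression on standardized features.
interpretable = make_pipeline(StandardScaler(), LogisticRegression(max_iter=5000))

# Black box: gradient-boosted trees.
black_box = GradientBoostingClassifier(random_state=0)

for name, model in [("logistic regression", interpretable),
                    ("gradient boosting", black_box)]:
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```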
References
Rudin, Cynthia. 2019. “Stop Explaining Black Box Machine Learning Models for High Stakes Decisions and Use Interpretable Models Instead.” Nature Machine Intelligence 1 (5): 206–15.