We are immensely grateful to the group of TU Delft students who contributed major improvements to this package as part of a university project in 2023: Rauno Arike, Simon Kasdorp, Lauri Kesküll, Mariusz Kicior, and Vincent Pikand. We also thank the broader Julia community for being welcoming and open and for supporting research contributions like this one. Some of the members of TU Delft were partially funded by ICAI AI for Fintech Research, an ING-TU Delft collaboration.