
Talk Title
Robustness Issues in Counterfactual Explanations for Deep Learning Models
Abstract
Counterfactual explanations (CEs) are advocated as being ideally suited to providing algorithmic recourse for subjects affected by the predictions of machine learning models. While CEs can be beneficial to affected individuals, recent work has exposed severe issues related to the robustness of state-of-the-art methods for obtaining CEs. Since a lack of robustness may compromise the validity of CEs, techniques to mitigate this risk are needed. In this talk we will begin by introducing the problem of (lack of) robustness and discuss its implications for fairness. We will then present some recent solutions we developed to compute CEs with robustness guarantees.
Speaker Bio – Dr Francesco Leofante
Francesco is an Imperial College Research Fellow affiliated with the Centre for Explainable Artificial Intelligence at Imperial College London. His research focuses on safe and explainable AI, with special emphasis on counterfactual explanations and their robustness. Since 2022, he has led the project “ConTrust: Robust Contrastive Explanations for Deep Neural Networks”, a four-year effort devoted to the formal study of robustness issues arising in XAI. More details about Francesco and his research can be found at fraleo.github.
Time: 14.00 – 15.00
Date: Tuesday 9 April
Location: Hybrid Event | I-X Conference Room, Level 5
Translation and Innovation Hub (I-HUB)
Imperial White City Campus
84 Wood Lane
W12 0BZ
Link to join online via Teams.
For any questions, please contact Andreas Joergensen (a.joergensen@imperial.ac.uk).