I am a researcher affiliated with the Centre for Explainable AI at Imperial College. My research focuses on safe and explainable AI, with special emphasis on contrastive explanations and their robustness. My work is currently supported by an Imperial College Research Fellowship.
Before that, I was a research associate in the Verification of Autonomous Systems group at Imperial College. I obtained my PhD in Computer Science from RWTH Aachen University and UNIGE, with a thesis on AI planning.
You might want to check out my CV to get a better picture of what I’ve done professionally so far.
Leofante F, Potyka N, 2024, Promoting Counterfactual Robustness through Diversity, The Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI-24)
et al., 2024, Recourse under Model Multiplicity via Argumentative Ensembling, The 23rd International Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2024), ACM
et al., 2023, Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation, The 15th Asian Conference on Machine Learning (ACML 2023)
Leofante F, Lomuscio A, 2023, Robust explanations for human-neural multi-agent systems with formal verification, The 20th European Conference on Multi-Agent Systems (EUMAS 2023), Springer, Pages:244-262, ISSN:1611-3349
et al., 2023, Verification of semantic key point detection for aircraft pose estimation, The 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), IJCAI Organization, Pages:757-762, ISSN:2334-1033