I am a researcher affiliated with the Centre for Explainable AI at Imperial College. My research largely focuses on safe and explainable AI, with special emphasis on contrastive explanations and their robustness. My work is currently supported by an Imperial College Research Fellowship.
Before that, I was a research associate in the Verification of Autonomous Systems group at Imperial College. I obtained my PhD in Computer Science from RWTH Aachen University and UNIGE with a thesis on AI planning.
For a fuller picture of my professional background, you might want to check out my CV.
Leofante F, Lomuscio A, 2023, Towards robust contrastive explanations for human-neural multi-agent systems, International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), ACM
et al., 2023, Formalising the robustness of counterfactual explanations for neural networks, The 37th AAAI Conference on Artificial Intelligence (AAAI 2023), Association for the Advancement of Artificial Intelligence
et al., 2022, Robot swarms as hybrid systems: modelling and verification, Open Publishing Association, Pages:61-77, ISSN:2075-2180
Henriksen P, Leofante F, Lomuscio A, 2022, Repairing misclassifications in neural networks using limited data, SAC '22, Pages:1031-1038
et al., 2021, Formal analysis of neural network-based systems in the aircraft domain, International Symposium on Formal Methods, Springer International Publishing, Pages:730-740, ISSN:0302-9743