Imperial College London

Professor Francesca Toni

Faculty of Engineering, Department of Computing

Professor in Computational Logic
 
 
 

Contact

 

+44 (0)20 7594 8228 · f.toni · Website

 
 

Location

 

Room 430, Huxley Building, South Kensington Campus



 

Publications


435 results found

Nguyen H-T, Goebel R, Toni F, Stathis K, Satoh K et al., 2023, How well do SOTA legal reasoning models support abductive reasoning?, Logic Programming and Legal Reasoning Workshop@ICLP2023

We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But to do so, we believe, requires some form of abductive hypothesis formation. In other words, while LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
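The abductive task evaluated here can be made concrete with a small sketch: given a background theory and an observation, find a minimal set of hypotheses whose addition entails the observation. The propositional encoding and all names below are illustrative, not taken from the paper or its dataset.

```python
from itertools import combinations

def entails(rules, facts, goal):
    """Forward-chain over definite rules (body, head) until a fixpoint."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in known and all(b in known for b in body):
                known.add(head)
                changed = True
    return goal in known

def abduce(rules, facts, candidates, observation):
    """Return a smallest set of abducibles H such that facts + H entail
    the observation, mirroring the hypothesis-formation task above."""
    for k in range(len(candidates) + 1):
        for H in combinations(sorted(candidates), k):
            if entails(rules, facts | set(H), observation):
                return set(H)
    return None

# Toy legal-style theory: a contract is valid given offer, acceptance
# and consideration; 'consideration' is the missing fact to hypothesise.
rules = [(("offer", "acceptance", "consideration"), "valid_contract")]
facts = {"offer", "acceptance"}
candidates = {"consideration", "duress"}
print(abduce(rules, facts, candidates, "valid_contract"))  # {'consideration'}
```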

Conference paper

Nguyen H-T, Toni F, Stathis K, Satoh K et al., 2023, Neurosymbolic approaches for legal reasoning, Logic Programming and Legal Reasoning Workshop@ICLP2023

Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of challenges and potential advancements by neuro-symbolic approaches in legal applications.

Conference paper

Lertvittayakumjorn P, Toni F, 2023, Argumentative explanations for pattern-based text classifiers, Argument and Computation, Vol: 14, Pages: 163-234, ISSN: 1946-2174

Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of models (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to extract explanations from this model does not consider relations among the features, making the explanations hardly plausible to humans. Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features. Specifically, we use computational argumentation as follows: we see features (patterns) in PLR as arguments in a form of quantified bipolar argumentation frameworks (QBAFs) and extract attacks and supports between arguments based on specificity of the arguments; we understand logistic regression as a gradual semantics for these QBAFs, used to determine the arguments’ dialectic strength; and we study standard properties of gradual semantics for QBAFs in the context of our argumentative re-interpretation of PLR, sanctioning its suitability for explanatory purposes. We then show how to extract intuitive explanations (for outputs computed by PLR) from the constructed QBAFs. Finally, we conduct an empirical evaluation and two experiments in the context of human-AI collaboration to demonstrate the advantages of our resulting AXPLR method.
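To make the QBAF reading concrete, here is a minimal sketch of the general idea: patterns act as arguments whose base scores play the role of logistic-regression coefficients, and a gradual valuation combines attackers and supporters into a dialectical strength. This illustrates the flavour of AXPLR, not its actual semantics; all names, values and relations below are invented.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Patterns firing on one input act as arguments; base scores stand in
# for logistic-regression coefficients (values are illustrative).
base = {"contains 'refund'": 1.2, "contains 'no refund'": -2.0}
# AXPLR derives attacks/supports from pattern specificity; here the more
# specific pattern attacking the more general one is simply hard-coded.
attackers = {"contains 'refund'": ["contains 'no refund'"]}
supporters = {}

def strength(arg, depth=3):
    """A simple gradual valuation: base score raised by supporters,
    lowered by attackers, squashed into (0, 1)."""
    if depth == 0:
        return sigmoid(base[arg])
    s = base[arg]
    s += sum(strength(b, depth - 1) for b in supporters.get(arg, []))
    s -= sum(strength(b, depth - 1) for b in attackers.get(arg, []))
    return sigmoid(s)

for arg in base:
    print(arg, round(strength(arg), 3))
```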

Journal article

Vasileiou SL, Kumar A, Yeoh W, Son TC, Toni F et al., 2023, DR-HAI: argumentation-based dialectical reconciliation in human-AI interactions, IJCAI 2023

In this paper, we introduce DR-HAI – a novel argumentation-based framework designed to extend model reconciliation approaches, commonly used in explainable AI planning, for enhanced human-AI interaction. By adopting a multi-shot reconciliation paradigm and not assuming a priori knowledge of the human user’s model, DR-HAI enables interactive reconciliation to address knowledge discrepancies between an explainer and an explainee. We formally describe the operational semantics of DR-HAI, and provide theoretical guarantees related to termination and success.

Conference paper

Rago A, Russo F, Albini E, Toni F, Baroni P et al., 2023, Explaining classifiers’ outputs with causal models and argumentation, Journal of Applied Logics, Vol: 10, Pages: 421-449, ISSN: 2631-9810

We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models’ outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models. We then evaluate our method empirically when the causal models represent (Bayesian and neural network) machine learning models for classification. The results show advantages over a popular approach from the literature, both in highlighting specific relationships between feature and classification variables and in generating counterfactual explanations with respect to a commonly used metric.
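A simplified reading of the bi-variate reinforcement idea can be sketched directly: a feature is extracted as a supporter (attacker) of the classification if flipping it moves the model's output the same way in every context. The toy model and its coefficients are invented, and the test below simplifies the paper's formal conditions.

```python
# Toy structural model over binary variables: output is the probability
# of the positive class (coefficients are illustrative).
def model(x1, x2):
    return 0.3 + 0.4 * x1 - 0.2 * x2 + 0.1 * x1 * x2

def relation(i, model):
    """Label feature i as 'support'/'attack' when flipping it 0 -> 1 moves
    the output the same way for every value of the other feature."""
    deltas = []
    for other in (0, 1):
        lo = [other, other]; hi = [other, other]
        lo[i] = 0; hi[i] = 1
        deltas.append(model(*hi) - model(*lo))
    if all(d >= 0 for d in deltas):
        return "support"
    if all(d <= 0 for d in deltas):
        return "attack"
    return "neither"

print("x1:", relation(0, model))  # support: deltas +0.4 and +0.5
print("x2:", relation(1, model))  # attack: deltas -0.2 and -0.1
```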

Journal article

Santhirasekaram A, Kori A, Winkler M, Rockall A, Toni F, Glocker B et al., 2023, Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification, Computer Vision and Pattern Recognition

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2023, Achieving descriptive accuracy in explanations via argumentation: the case of probabilistic classifiers, Frontiers in Artificial Intelligence, Vol: 6, Pages: 1-18, ISSN: 2624-8212

The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in an unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.

Journal article

Ward FR, Toni F, Belardinelli F, 2023, Defining Deception in Structural Causal Games, Pages: 2902-2904, ISSN: 1548-8403

Deceptive agents are a challenge for the safety, trustworthiness, and cooperation of AI systems. We focus on the problem that agents might deceive in order to achieve their goals. There are a number of existing definitions of deception in the literature on game theory and symbolic AI, but there is no overarching theory of deception for learning agents in games. We introduce a functional definition of deception in structural causal games, grounded in the philosophical literature. We present several examples to establish that our formal definition captures philosophical desiderata for deception.

Conference paper

Nguyen HT, Toni F, Stathis K, Satoh K et al., 2023, Beyond Logic Programming for Legal Reasoning, ISSN: 1613-0073

Logic programming has long been advocated for legal reasoning, and several approaches have been put forward relying upon explicit representation of the law in logic programming terms. In this position paper we focus on the PROLEG logic-programming-based framework for formalizing and reasoning with Japanese presupposed ultimate fact theory. Specifically, we examine challenges and opportunities in leveraging deep learning techniques for improving legal reasoning using PROLEG, identifying four distinct options ranging from enhancing fact extraction using deep learning to end-to-end solutions for reasoning with textual legal descriptions. We assess advantages and limitations of each option, considering their technical feasibility, interpretability, and alignment with the needs of legal practitioners and decision-makers. We believe that our analysis can serve as a guideline for developers aiming to build effective decision-support systems for the legal domain, while fostering a deeper understanding of challenges and potential advancements by neuro-symbolic approaches in legal applications.

Conference paper

Yin X, Potyka N, Toni F, 2023, Argument Attribution Explanations in Quantitative Bipolar Argumentation Frameworks, Publisher: IOS Press, Pages: 2898-2905

Conference paper

Ayoobi H, Potyka N, Toni F, 2023, SpArX: Sparse Argumentative Explanations for Neural Networks, Publisher: IOS Press, Pages: 149-156

Conference paper

Rago A, Li H, Toni F, 2023, Interactive Explanations by Conflict Resolution via Argumentative Exchanges, Pages: 582-592

Conference paper

Toni F, 2023, Knowledge Representation and Reasoning in the Time of Data-Centric AI (Abstract of Invited Talk), Publisher: CEUR-WS.org

Conference paper

2023, Proceedings of the 39th International Conference on Logic Programming (ICLP 2023), Imperial College London, UK, 9th-15th July 2023.

Conference paper

Russo F, Toni F, 2023, Shapley-PC: Constraint-based Causal Structure Learning with Shapley Values, CoRR, Vol: abs/2312.11582

Journal article

Jiang J, Rago A, Leofante F, Toni F et al., 2023, Recourse under Model Multiplicity via Argumentative Ensembling (Technical Report), CoRR, Vol: abs/2312.15097

Journal article

Ward F, Toni F, Belardinelli F, Everitt T et al., 2023, Honesty Is the Best Policy: Defining and Mitigating AI Deception.

Conference paper

Nguyen HT, Goebel R, Toni F, Stathis K, Satoh K et al., 2023, LawGiBa – Combining GPT, knowledge bases, and logic programming in a legal assistance system, JURIX 2023: The Thirty-sixth Annual Conference, Maastricht, the Netherlands, 18–20 December 2023, Publisher: IOS Press, Pages: 371-374, ISSN: 0922-6389

We present LawGiBa, a proof-of-concept demonstration system for legal assistance that combines GPT, legal knowledge bases, and Prolog’s logic programming structure to provide explanations for legal queries. This novel combination effectively and feasibly addresses the hallucination issue of large language models (LLMs) in critical domains, such as law. Through this system, we demonstrate how incorporating a legal knowledge base and logical reasoning can enhance the accuracy and reliability of legal advice provided by AI models like GPT. Though our work is primarily a demonstration, it provides a framework to explore how knowledge bases and logic programming structures can be further integrated with generative AI systems, to achieve improved results across various natural languages and legal systems.
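The division of labour the system demonstrates can be sketched at the architectural level: the LLM proposes candidate facts, the knowledge base filters out ungrounded ones, and explicit rules draw the conclusions. Everything below, including the stubbed LLM call, is a hypothetical stand-in rather than LawGiBa's code.

```python
def llm_extract_facts(query):
    """Hypothetical LLM call returning candidate facts for the query;
    the last fact is a deliberate 'hallucination' for the sketch."""
    return ["employment_contract", "dismissed_without_notice", "on_mars"]

KNOWLEDGE_BASE = {"employment_contract", "dismissed_without_notice"}
RULES = [("wrongful_dismissal",
          {"employment_contract", "dismissed_without_notice"})]

def answer(query):
    # Keep only KB-grounded facts, discarding hallucinated ones, then
    # derive conclusions with explicit logic-programming-style rules.
    facts = {f for f in llm_extract_facts(query) if f in KNOWLEDGE_BASE}
    derived = {head for head, body in RULES if body <= facts}
    return facts, derived

print(answer("Was the dismissal lawful?"))
```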

Conference paper

Zhang K, Toni F, Williams M, 2022, A federated Cox model with non-proportional hazards, The 6th International Workshop on Health Intelligence, Publisher: Springer, Pages: 171-185, ISSN: 1860-949X

Recent research has shown the potential for neural networks to improve upon classical survival models such as the Cox model, which is widely used in clinical practice. Neural networks, however, typically rely on data that are centrally available, whereas healthcare data are frequently held in secure silos. We present a federated Cox model that accommodates this data setting and also relaxes the proportional hazards assumption, allowing time-varying covariate effects. In this latter respect, our model does not require explicit specification of the time-varying effects, reducing upfront organisational costs compared to previous works. We experiment with publicly available clinical datasets and demonstrate that the federated model is able to perform as well as a standard model.
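The federated training pattern underlying such a model can be sketched in a few lines. The local objective below is a placeholder least-squares step on synthetic data, standing in for the paper's (time-varying) Cox partial likelihood, and all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1):
    """One local gradient step on a placeholder squared-error objective;
    a real implementation would optimise the Cox partial likelihood."""
    grad = X.T @ (X @ w - y) / len(y)
    return w - lr * grad

def federated_round(w, silos):
    """FedAvg: silos train locally; the server takes a size-weighted mean,
    so raw patient data never leaves its silo."""
    updates = [local_update(w, X, y) for X, y in silos]
    sizes = [len(y) for _, y in silos]
    return np.average(updates, axis=0, weights=sizes)

# Four secure silos, each holding its own private cohort (synthetic).
silos = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(4)]
w = np.zeros(3)
for _ in range(200):
    w = federated_round(w, silos)
print("federated coefficients:", np.round(w, 3))
```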

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2022, Descriptive accuracy in explanations: the case of probabilistic classifiers, 15th International Conference on Scalable Uncertainty Management (SUM 2022), Publisher: Springer, Pages: 279-294

A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose, and we complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.

Conference paper

Proietti M, Toni F, 2022, Learning assumption-based argumentation frameworks, 31st International Conference on Inductive Logic Programming (ILP 2022)

We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can work out in a very simple and natural way problems that seem to be hard for existing techniques.
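To fix intuitions, here is a minimal sketch of the kind of output such a learner produces: an ABA framework whose assumptions have contraries (so learned exceptions act as undercutting attacks), together with the mapping to a logic program with negation as failure mentioned above. The bird/penguin example and all names are illustrative, not from the paper.

```python
# A minimal encoding of an ABA framework as plain data (illustrative).
aba = {
    "rules": [
        ("flies(X)", ["bird(X)", "normal_flier(X)"]),  # learned general rule
        ("ab_flier(X)", ["penguin(X)"]),               # learned exception
        ("bird(tweety)", []),
        ("bird(pingu)", []),
        ("penguin(pingu)", []),
    ],
    "assumptions": {"normal_flier(X)"},
    # Deriving the contrary of an assumption undercuts rules that use it.
    "contraries": {"normal_flier(X)": "ab_flier(X)"},
}

def to_logic_program(aba):
    """Map to a logic program with negation as failure: each assumption
    in a rule body becomes 'not <its contrary>'."""
    lines = []
    for head, body in aba["rules"]:
        lits = ["not " + aba["contraries"][b] if b in aba["assumptions"]
                else b for b in body]
        lines.append(f"{head} :- {', '.join(lits)}." if lits else f"{head}.")
    return "\n".join(lines)

print(to_logic_program(aba))  # flies(X) :- bird(X), not ab_flier(X). ...
```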

Conference paper

Potyka N, Yin X, Toni F, 2022, On the tradeoff between correctness and completeness in argumentative explainable AI, 1st International Workshop on Argumentation for eXplainable AI, Publisher: CEUR Workshop Proceedings, Pages: 1-8, ISSN: 1613-0073

Explainable AI aims at making the decisions of autonomous systems human-understandable. Argumentation frameworks are a natural tool for this purpose. Among them, bipolar abstract argumentation frameworks seem well suited to explain the effect of features on a classification decision and their formal properties can potentially be used to derive formal guarantees for explanations. Two particularly interesting properties are correctness (if the explanation says that X affects Y, then X affects Y) and completeness (if X affects Y, then the explanation says that X affects Y). The reinforcement property of bipolar argumentation frameworks has been used as a natural correctness counterpart in previous work. Applied to the classification context, it basically states that attacking features should decrease and supporting features should increase the confidence of a classifier. In this short discussion paper, we revisit this idea, discuss potential limitations when considering reinforcement without a corresponding completeness property, and how these limitations can potentially be overcome.
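The reinforcement reading of correctness can be illustrated with a small check: toggling a feature labelled as an attacker (supporter) should not raise (lower) the classifier's confidence. The classifier, its weights and the feature names below are invented for illustration.

```python
import math

def confidence(x):
    """Toy classifier over boolean features (weights are illustrative)."""
    w = {"good_credit": 0.8, "late_payments": -0.6}
    z = sum(w[f] for f, on in x.items() if on)
    return 1 / (1 + math.exp(-z))

# An explanation labelling each feature as supporter or attacker.
explanation = {"good_credit": "support", "late_payments": "attack"}

for feat, label in explanation.items():
    off = {f: False for f in explanation}   # baseline: feature absent
    on = {**off, feat: True}                # toggle the feature on
    moved_up = confidence(on) >= confidence(off)
    consistent = moved_up if label == "support" else not moved_up
    print(feat, label, "consistent with reinforcement:", consistent)
```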

Conference paper

Paulino-Passos G, Toni F, 2022, On monotonicity of dispute trees as explanations for case-based reasoning with abstract argumentation, 1st International Workshop on Argumentation for eXplainable AI co-located with 9th International Conference on Computational Models of Argument (COMMA 2022), Publisher: CEUR Workshop Proceedings, Pages: 1-12, ISSN: 1613-0073

Recent work on explainability raises the question of what different types of explanations actually mean. One idea is that explanations can reveal information about the behaviour of the model on a subset of the input space. When this way of interpreting explanations is thought of as an interactive process, inferences from explanations can be seen as a form of reasoning. In the case of case-based reasoning with abstract argumentation (AA-CBR), previous work has used arbitrated dispute trees as a methodology for explanation. Those are dispute trees where nodes are seen as losing or winning depending on the outcome for the new case under consideration. In this work we show how arbitrated dispute trees can be readapted for different inputs, which allows a broader interpretation of them, capturing more of the input-output behaviour of the model. We show this readaptation is correct by construction; the resulting reasoning based on this reuse is therefore monotonic and thus necessarily a faithful explanation.
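The dispute-tree machinery above admits a compact sketch: under the standard labelling, a node wins exactly when all of its children (its counter-arguments) lose. The tree below is invented for illustration and is not an AA-CBR case base from the paper.

```python
def wins(tree):
    """A node wins iff every child (counter-argument) loses."""
    node, children = tree
    return all(not wins(c) for c in children)

# Root case, attacked by two past cases; one attacker is itself attacked.
tree = ("new case",
        [("precedent A", [("exception to A", [])]),
         ("precedent B", [])])
print("root wins:", wins(tree))  # False: precedent B stands unanswered
```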

Conference paper

Toni F, Polberg S, Booth R, Caminada M, Kido H et al., 2022, Preface, ISBN: 9781643683065

Book

Ward F, Toni F, Belardinelli F, 2022, A causal perspective on AI deception in games, AI Safety 2022 (IJCAI-ECAI-22), Publisher: CEUR Workshop Proceedings, Pages: 1-16

Deception is a core challenge for AI safety and we focus on the problem that AI agents might learn deceptive strategies in pursuit of their objectives. We define the incentives one agent has to signal to and deceive another agent. We present several examples of deceptive artificial agents and show that our definition has desirable properties.

Conference paper

Irwin B, Rago A, Toni F, 2022, Forecasting argumentation frameworks, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 533-543, ISSN: 2334-1033

We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents’ behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents’ individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs’ potential to increase the forecasting accuracy of participants.
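The final aggregation step lends itself to a one-function sketch. The irrationality filter below (discarding forecasts outside [0, 1]) is a deliberately crude placeholder for the paper's argumentation-based notion of irrational behaviour, and the numbers are invented.

```python
def aggregate(forecasts):
    """Drop forecasts flagged as irrational, then average the rest."""
    rational = [p for p in forecasts if 0.0 <= p <= 1.0]
    return sum(rational) / len(rational) if rational else None

print(aggregate([0.62, 0.55, 1.7, 0.48]))  # 1.7 is discarded -> 0.55
```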

Conference paper

Jiang J, Rago A, Toni F, 2022, Should counterfactual explanations always be data instances?, XLoKR 2022: The Third Workshop on Explainable Logic-Based Knowledge Representation

Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value.

Conference paper

Rago A, Baroni P, Toni F, 2022, Explaining causal models with argumentation: the case of bi-variate reinforcement, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 505-509, ISSN: 2334-1033

Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models’ outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.

Conference paper

Gaskell A, Miao Y, Toni F, Specia L et al., 2022, Logically consistent adversarial attacks for soft theorem provers, 31st International Joint Conference on Artificial Intelligence and the 25th European Conference on Artificial Intelligence, Publisher: International Joint Conferences on Artificial Intelligence, Pages: 4129-4135

Recent efforts within the AI community have yielded impressive results towards “soft theorem proving” over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models’ reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models’ reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model’s performance.

Conference paper

Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F et al., 2022, Neural QBAFs: explaining neural networks under LRP-based argumentation frameworks, International Conference of the Italian Association for Artificial Intelligence, Publisher: Springer International Publishing, Pages: 429-444, ISSN: 0302-9743

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.
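The LRP ingredient mentioned above can be sketched in a few lines: relevance at a layer's output is redistributed to its inputs in proportion to their contributions. This is a generic epsilon-rule illustration on an invented two-neuron layer, not the paper's nQBAF construction.

```python
import numpy as np

def lrp_layer(a, W, R_out, eps=1e-9):
    """Redistribute output relevances R_out (m,) back to inputs a (n,)
    in proportion to each input's contribution a[j] * W[j, k]."""
    z = a[:, None] * W                       # (n, m) contributions
    denom = z.sum(axis=0) + eps              # total input to each output
    return (z / denom * R_out).sum(axis=1)   # (n,) input relevances

a0 = np.array([1.0, 0.5])                    # input activations
W1 = np.array([[0.6, -0.2],
               [0.3,  0.9]])                 # a 2 -> 2 linear layer
a1 = np.maximum(0.0, a0 @ W1)                # ReLU activations
R1 = a1.copy()                               # seed relevance at the output
R0 = lrp_layer(a0, W1, R1)
# Relevance is conserved across the layer: R0.sum() == R1.sum().
print("input relevances:", R0, "| conserved:", np.isclose(R0.sum(), R1.sum()))
```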

Conference paper

