Results
- Conference paper: Nguyen H-T, Goebel R, Toni F, et al., 2023, "How well do SOTA legal reasoning models support abductive reasoning?", Logic Programming and Legal Reasoning Workshop @ ICLP 2023.
We examine how well the state-of-the-art (SOTA) models used in legal reasoning support abductive reasoning tasks. Abductive reasoning is a form of logical inference in which a hypothesis is formulated from a set of observations, and that hypothesis is used to explain the observations. The ability to formulate such hypotheses is important for lawyers and legal scholars as it helps them articulate logical arguments, interpret laws, and develop legal theories. Our motivation is to consider the belief that deep learning models, especially large language models (LLMs), will soon replace lawyers because they perform well on tasks related to legal text processing. But to do so, we believe, requires some form of abductive hypothesis formation. In other words, while LLMs become more popular and powerful, we want to investigate their capacity for abductive reasoning. To pursue this goal, we start by building a logic-augmented dataset for abductive reasoning with 498,697 samples and then use it to evaluate the performance of a SOTA model in the legal field. Our experimental results show that although these models can perform well on tasks related to some aspects of legal text processing, they still fall short in supporting abductive reasoning tasks.
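As an illustrative aside, here is a minimal sketch of propositional abduction in the sense described above: given background rules and an observation, search a pool of candidate hypotheses for those that, together with the rules, entail the observation. All rules, facts and hypothesis names below are invented for illustration; this is not the paper's dataset or models.

    # Toy propositional abduction: which hypotheses explain the observation?
    def entails(facts, rules, goal):
        """Naive forward chaining over propositional Horn rules."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for body, head in rules:
                if head not in derived and all(b in derived for b in body):
                    derived.add(head)
                    changed = True
        return goal in derived

    # Invented background knowledge, e.g. "a contract lacking consideration is void".
    rules = [
        ({"lacks_consideration"}, "contract_void"),
        ({"signed_under_duress"}, "contract_void"),
    ]
    observation = "contract_void"
    candidates = ["lacks_consideration", "signed_under_duress", "contract_notarised"]

    # Abduction: keep the hypotheses that, with the rules, entail the observation.
    explanations = [h for h in candidates if entails({h}, rules, observation)]
    print(explanations)  # ['lacks_consideration', 'signed_under_duress']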
- Journal article: Lertvittayakumjorn P, Toni F, 2023, "Argumentative explanations for pattern-based text classifiers", Argument and Computation, Vol: 14, Pages: 163-234, ISSN: 1946-2174.
Recent works in Explainable AI mostly address the transparency issue of black-box models or create explanations for any kind of models (i.e., they are model-agnostic), while leaving explanations of interpretable models largely underexplored. In this paper, we fill this gap by focusing on explanations for a specific interpretable model, namely pattern-based logistic regression (PLR) for binary text classification. We do so because, albeit interpretable, PLR is challenging when it comes to explanations. In particular, we found that a standard way to extract explanations from this model does not consider relations among the features, making the explanations hardly plausible to humans. Hence, we propose AXPLR, a novel explanation method using (forms of) computational argumentation to generate explanations (for outputs computed by PLR) which unearth model agreements and disagreements among the features. Specifically, we use computational argumentation as follows: we see features (patterns) in PLR as arguments in a form of quantified bipolar argumentation frameworks (QBAFs) and extract attacks and supports between arguments based on specificity of the arguments; we understand logistic regression as a gradual semantics for these QBAFs, used to determine the arguments’ dialectic strength; and we study standard properties of gradual semantics for QBAFs in the context of our argumentative re-interpretation of PLR, sanctioning its suitability for explanatory purposes. We then show how to extract intuitive explanations (for outputs computed by PLR) from the constructed QBAFs. Finally, we conduct an empirical evaluation and two experiments in the context of human-AI collaboration to demonstrate the advantages of our resulting AXPLR method.
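To make the construction concrete, here is a hedged toy sketch of the QBAF extraction described above, under strong simplifying assumptions: a pattern is a set of tokens, its logistic-regression coefficient (invented below) acts as its base score, and a strictly more specific pattern attacks a more general one when their coefficients disagree in sign and supports it when they agree.

    # Toy QBAF extraction from pattern-based logistic regression features.
    patterns = {
        "good":         ({"good"}, 1.2),            # (token set, LR coefficient)
        "not good":     ({"not", "good"}, -1.8),
        "good service": ({"good", "service"}, 0.9),
    }

    attacks, supports = [], []
    for a, (toks_a, w_a) in patterns.items():
        for b, (toks_b, w_b) in patterns.items():
            if a != b and toks_b < toks_a:          # a strictly more specific than b
                rel = attacks if w_a * w_b < 0 else supports
                rel.append((a, b))

    print("attacks:", attacks)    # [('not good', 'good')]
    print("supports:", supports)  # [('good service', 'good')]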
- Conference paper: Leofante F, Lomuscio A, 2023, "Towards robust contrastive explanations for human-neural multi-agent systems", International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2023), Publisher: ACM, Pages: 2343-2345.
Generating explanations of high quality is fundamental to the development of trustworthy human-AI interactions. We here study the problem of generating contrastive explanations with formal robustness guarantees. We formalise a new notion of robustness and introduce two novel verification-based algorithms to (i) identify non-robust explanations generated by other methods and (ii) generate contrastive explanations augmented with provable robustness certificates. We present an implementation and evaluate the utility of the approach on two case studies concerning neural agents trained on credit scoring and image classification tasks.
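A hedged illustration of the robustness notion at stake: a contrastive example x' for an input x should keep its contrasting class even when x' is slightly perturbed. The paper obtains formal guarantees via verification; the sampling check below can only falsify, never certify, and the model is a stand-in.

    # Sampling-based falsifier for the robustness of a contrastive example.
    import numpy as np

    def is_robust_contrast(model, x_prime, target_class, eps=0.05,
                           n_samples=1000, rng=np.random.default_rng(0)):
        """False as soon as a point in the L-inf ball of radius eps around
        x_prime changes class; True only means 'no counterexample found'."""
        for _ in range(n_samples):
            delta = rng.uniform(-eps, eps, size=x_prime.shape)
            if model(x_prime + delta) != target_class:
                return False
        return True

    # Toy model: class 1 iff the first feature exceeds a threshold.
    model = lambda x: int(x[0] > 0.5)
    x_prime = np.array([0.52, 0.1])        # a fragile contrastive example
    print(is_robust_contrast(model, x_prime, target_class=1))  # False here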
- Journal article: Rago A, Russo F, Albini E, et al., 2023, "Explaining classifiers’ outputs with causal models and argumentation", Journal of Applied Logics, Vol: 10, Pages: 421-449, ISSN: 2631-9810.
We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models’ outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models. We then evaluate our method empirically when the causal models represent (Bayesian and neural network) machine learning models for classification. The results show advantages over a popular approach from the literature, both in highlighting specific relationships between feature and classification variables and in generating counterfactual explanations with respect to a commonly used metric.
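A loose sketch of the general idea (not the paper's formal bi-variate reinforcement property): read support/attack relations off a toy probabilistic model from the sign of the change in its output under an intervention on each variable. The model and variable names below are invented.

    # Extracting toy support/attack edges from a probabilistic classifier.
    import numpy as np

    def output(x):
        """Invented probabilistic classifier over two feature variables."""
        return 1 / (1 + np.exp(-(2.0 * x["income"] - 1.5 * x["debt"])))

    base = {"income": 0.0, "debt": 0.0}
    for var in base:
        bumped = dict(base, **{var: base[var] + 1.0})
        delta = output(bumped) - output(base)
        relation = "supports" if delta > 0 else "attacks"
        print(f"{var} {relation} the positive classification (delta={delta:+.3f})")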
- Conference paper: Santhirasekaram A, Kori A, Winkler M, et al., 2023, "Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification", Computer Vision and Pattern Recognition.
- Journal article: Albini E, Rago A, Baroni P, et al., 2023, "Achieving descriptive accuracy in explanations via argumentation: the case of probabilistic classifiers", Frontiers in Artificial Intelligence, Vol: 6, Pages: 1-18, ISSN: 2624-8212.
The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in an unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.
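As a concrete aside, here is a minimal sketch of checking the weakest notion above, naive DA, by intervention on a toy classifier: an explanation should not attribute influence to a feature the model demonstrably ignores. The attribution scores below are invented.

    # Interventional check of naive descriptive accuracy on a toy model.
    import numpy as np

    def model(x):
        return int(x[0] - x[1] > 0)             # ignores x[2] entirely

    def feature_matters(model, x, i, values=(-1.0, 0.0, 1.0)):
        """True if changing feature i alone can change the model's output."""
        outs = set()
        for v in values:
            x2 = x.copy()
            x2[i] = v
            outs.add(model(x2))
        return len(outs) > 1

    x = np.array([1.0, 0.5, 0.0])
    attribution = {0: 0.7, 1: -0.2, 2: 0.4}     # invented explanation scores
    violations = [i for i, a in attribution.items()
                  if a != 0 and not feature_matters(model, x, i)]
    print("naive DA violated for features:", violations)  # [2]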
- Journal article: Flageat M, Chalumeau F, Cully A, 2023, "Empirical analysis of PGA-MAP-Elites for neuroevolution in uncertain domains", ACM Transactions on Evolutionary Learning and Optimization, Vol: 3, Pages: 1-32, ISSN: 2688-299X.
Quality-Diversity algorithms, MAP-Elites among them, have emerged as powerful alternatives to performance-only optimisation approaches as they enable generating collections of diverse and high-performing solutions to an optimisation problem. However, they are often limited to low-dimensional search spaces and deterministic environments. The recently introduced Policy Gradient Assisted MAP-Elites (PGA-MAP-Elites) algorithm overcomes this limitation by pairing the traditional genetic operator of MAP-Elites with a gradient-based operator inspired by Deep Reinforcement Learning. This new operator guides mutations toward high-performing solutions using policy gradients. In this work, we propose an in-depth study of PGA-MAP-Elites. We demonstrate the benefits of policy gradients on the performance of the algorithm and the reproducibility of the generated solutions when considering uncertain domains. We first prove that PGA-MAP-Elites is highly performant in both deterministic and uncertain high-dimensional environments, decorrelating the two challenges it tackles. Secondly, we show that in addition to outperforming all the considered baselines, the collections of solutions generated by PGA-MAP-Elites are highly reproducible in uncertain environments, approaching the reproducibility of solutions found by Quality-Diversity approaches built specifically for uncertain applications. Finally, we propose an ablation and in-depth analysis of the dynamics of the policy-gradient-based variation. We demonstrate that the policy-gradient variation operator is decisive in guaranteeing the performance of PGA-MAP-Elites but is only essential during the early stage of the process, where it finds high-performing regions of the search space.
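For readers unfamiliar with the family, here is a minimal MAP-Elites-style loop alternating a genetic operator with a stand-in for the policy-gradient operator (the analytic gradient of an invented fitness). Everything below, task, descriptor and fitness, is a toy, not the paper's implementation.

    # Toy MAP-Elites archive loop with two variation operators.
    import numpy as np

    rng = np.random.default_rng(0)
    fitness = lambda g: -np.sum((g - 0.3) ** 2)                  # higher is better
    descriptor = lambda g: tuple((np.clip(g[:2], 0, 0.999) * 5).astype(int))

    archive = {}                                                 # cell -> (fitness, genome)

    def try_insert(g):
        f, cell = fitness(g), descriptor(g)
        if cell not in archive or f > archive[cell][0]:          # keep elite per cell
            archive[cell] = (f, g)

    for _ in range(20):                                          # random initialisation
        try_insert(rng.uniform(0, 1, size=4))

    for it in range(2000):
        keys = list(archive)
        parent = archive[keys[rng.integers(len(keys))]][1]
        if it % 2 == 0:                                          # genetic operator
            child = parent + rng.normal(0, 0.1, size=parent.shape)
        else:                                                    # stand-in for the PG operator
            child = parent + 0.1 * (-2.0 * (parent - 0.3))       # gradient ascent on toy fitness
        try_insert(np.clip(child, 0, 1))

    print(len(archive), "cells filled; best fitness:",
          round(max(f for f, _ in archive.values()), 4))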
- Conference paper: Chalumeau F, Boige R, Lim BWT, et al., 2023, "Neuroevolution is a Competitive Alternative to Reinforcement Learning for Skill Discovery", The 11th International Conference on Learning Representations (ICLR) 2023.
- Conference paper: Surana S, Lim BWT, Cully A, 2023, "Efficient Learning of Locomotion Skills through the Discovery of Diverse Environmental Trajectory Generator Priors", IEEE International Conference on Robotics and Automation, ISSN: 2152-4092.
- Journal article: Cheng S, Chen J, Anastasiou C, et al., 2023, "Generalised latent assimilation in heterogeneous reduced spaces with machine learning surrogate models", Journal of Scientific Computing, Vol: 94, Pages: 1-37, ISSN: 0885-7474.
Reduced-order modelling and low-dimensional surrogate models generated using machine learning algorithms have been widely applied in high-dimensional dynamical systems to improve algorithmic efficiency. In this paper, we develop a system which combines reduced-order surrogate models with a novel data assimilation (DA) technique used to incorporate real-time observations from different physical spaces. We make use of local smooth surrogate functions which link the space of encoded system variables and the one of current observations to perform variational DA with a low computational cost. The new system, named generalised latent assimilation, benefits from both the efficiency provided by reduced-order modelling and the accuracy of data assimilation. A theoretical analysis of the difference between the surrogate and the original assimilation cost functions is also provided, with an upper bound that depends on the size of the local training set. The new approach is tested on a high-dimensional CFD application of a two-phase liquid flow with non-linear observation operators that current Latent Assimilation methods cannot handle. Numerical results demonstrate that the proposed assimilation approach can significantly improve the reconstruction and prediction accuracy of the deep learning surrogate model, which is nearly 1000 times faster than the CFD simulation.
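A hedged sketch of the latent variational assimilation step described above: minimise a background-plus-observation cost in which a local surrogate links the latent state to observation space. Here the surrogate is simply a linear map, and all sizes, covariances and data are invented.

    # Toy variational data assimilation in a reduced (latent) space.
    import numpy as np
    from scipy.optimize import minimize

    n_latent, n_obs = 4, 3
    rng = np.random.default_rng(1)
    H = rng.normal(size=(n_obs, n_latent))    # local surrogate: latent -> obs
    z_b = rng.normal(size=n_latent)           # background latent state
    y = H @ (z_b + 0.5) + 0.01 * rng.normal(size=n_obs)   # noisy observation
    B_inv = np.eye(n_latent)                  # inverse background covariance
    R_inv = np.eye(n_obs)                     # inverse observation covariance

    def cost(z):
        db = z - z_b                          # background misfit
        do = y - H @ z                        # observation misfit via surrogate
        return db @ B_inv @ db + do @ R_inv @ do

    z_a = minimize(cost, z_b, method="BFGS").x            # analysis state
    print("cost at background:", round(cost(z_b), 3))
    print("cost at analysis:  ", round(cost(z_a), 3))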
- Conference paper: Jiang J, Lan J, Leofante F, et al., 2023, "Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation", Publisher: PMLR, Pages: 582-597.
- Conference paper: Nguyen HT, Goebel R, Toni F, et al., 2023, "LawGiBa – Combining GPT, knowledge bases, and logic programming in a legal assistance system", JURIX 2023: The Thirty-sixth Annual Conference, Maastricht, the Netherlands, 18–20 December 2023, Publisher: IOS Press, Pages: 371-374, ISSN: 0922-6389.
We present LawGiBa, a proof-of-concept demonstration system for legal assistance that combines GPT, legal knowledge bases, and Prolog’s logic programming structure to provide explanations for legal queries. This novel combination effectively and feasibly addresses the hallucination issue of large language models (LLMs) in critical domains, such as law. Through this system, we demonstrate how incorporating a legal knowledge base and logical reasoning can enhance the accuracy and reliability of legal advice provided by AI models like GPT. Though our work is primarily a demonstration, it provides a framework to explore how knowledge bases and logic programming structures can be further integrated with generative AI systems, to achieve improved results across various natural languages and legal systems.
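A toy sketch of the division of labour described above: a stub standing in for a GPT call proposes a legal conclusion, and a small rule base standing in for the Prolog component decides whether that conclusion is actually derivable from curated facts. All rules, facts and names are invented.

    # Toy "LLM proposes, knowledge base verifies" pipeline.
    def llm_draft(query):
        """Stand-in for a GPT call: returns a claimed legal conclusion."""
        return "tenant_may_terminate"

    RULES = [  # (body, head) pairs standing in for Prolog clauses
        ({"landlord_breached", "notice_given"}, "tenant_may_terminate"),
    ]
    FACTS = {"landlord_breached", "notice_given"}   # curated knowledge base

    def supported(claim):
        """Accept a claim only if some rule derives it from known facts."""
        return any(head == claim and body <= FACTS for body, head in RULES)

    claim = llm_draft("Can the tenant terminate the lease?")
    print(claim if supported(claim) else "claim not supported by the knowledge base")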
- Journal article: Leofante F, 2023, "OMTPlan: a tool for optimal planning modulo theories", Journal of Satisfiability, Boolean Modeling and Computation, Vol: 14, Pages: 17-23, ISSN: 1574-0617.
OMTPlan is a Python platform for optimal planning in numeric domains via reductions to Satisfiability Modulo Theories (SMT) and Optimization Modulo Theories (OMT). Currently, OMTPlan supports the expressive power of PDDL2.1 level 2 and features procedures for both satisficing and optimal planning. OMTPlan provides an open, easy to extend, yet efficient implementation framework. These goals are achieved through a modular design and the extensive use of state-of-the-art systems for SMT/OMT solving.
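To give a flavour of the reduction (this is not OMTPlan's actual encoding, and the problem is far simpler than PDDL2.1), here is a tiny numeric planning problem unrolled over a fixed horizon and handed to z3's Optimize as an OMT instance.

    # Bounded numeric planning as an OMT problem with z3.
    from z3 import Int, Optimize, Or, If, sat

    HORIZON = 4
    x = [Int(f"x_{t}") for t in range(HORIZON + 1)]   # numeric fluent per step
    a = [Int(f"a_{t}") for t in range(HORIZON)]       # action chosen per step

    opt = Optimize()
    opt.add(x[0] == 0)                                # initial state
    for t in range(HORIZON):
        opt.add(Or(a[t] == 1, a[t] == 2))             # two actions: add 1 or add 2
        opt.add(x[t + 1] == x[t] + a[t])              # transition relation
    opt.add(x[HORIZON] == 5)                          # goal condition
    cost = Int("cost")
    opt.add(cost == sum(If(a[t] == 2, 3, 1) for t in range(HORIZON)))
    opt.minimize(cost)                                # the OMT objective

    if opt.check() == sat:
        m = opt.model()
        print("plan:", [m[a[t]] for t in range(HORIZON)], "cost:", m[cost])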
- Journal article: Gong H, Cheng S, Chen Z, et al., 2022, "An efficient digital twin based on machine learning SVD autoencoder and generalised latent assimilation for nuclear reactor physics", Annals of Nuclear Energy, Vol: 179, ISSN: 0306-4549.
- Citations: 12
- Journal article: Grillotti L, Cully A, 2022, "Unsupervised behaviour discovery with quality-diversity optimisation", IEEE Transactions on Evolutionary Computation, Vol: 26, Pages: 1539-1552, ISSN: 1089-778X.
Quality-Diversity algorithms refer to a class of evolutionary algorithms designed to find a collection of diverse and high-performing solutions to a given problem. In robotics, such algorithms can be used for generating a collection of controllers covering most of the possible behaviours of a robot. To do so, these algorithms associate a behavioural descriptor to each of these behaviours. Each behavioural descriptor is used for estimating the novelty of one behaviour compared to the others. In most existing algorithms, the behavioural descriptor needs to be hand-coded, thus requiring prior knowledge about the task to solve. In this paper, we introduce Autonomous Robots Realising their Abilities, an algorithm that uses a dimensionality reduction technique to automatically learn behavioural descriptors based on raw sensory data. The performance of this algorithm is assessed on three robotic tasks in simulation. The experimental results show that it performs similarly to traditional hand-coded approaches without the requirement to provide any hand-coded behavioural descriptor. In the collection of diverse and high-performing solutions, it also manages to find behaviours that are novel with respect to more features than its hand-coded baselines. Finally, we introduce a variant of the algorithm which is robust to the dimensionality of the behavioural descriptor space.
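A hedged sketch of the core idea: learn a low-dimensional behavioural descriptor from raw sensory traces via dimensionality reduction (PCA below, whereas the paper learns an encoder) and score novelty by distance to the nearest descriptors. The data are random stand-ins for sensor readings.

    # Unsupervised behavioural descriptors via PCA + a simple novelty score.
    import numpy as np

    rng = np.random.default_rng(0)
    traces = rng.normal(size=(200, 50))       # 200 behaviours x 50 raw readings

    # PCA via SVD: the top-2 principal directions define the descriptor space.
    centred = traces - traces.mean(axis=0)
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    descriptors = centred @ vt[:2].T          # one 2-D descriptor per behaviour

    def novelty(i, k=5):
        """Mean distance from behaviour i to its k nearest descriptors."""
        d = np.linalg.norm(descriptors - descriptors[i], axis=1)
        return np.sort(d)[1:k + 1].mean()     # skip the distance to itself

    print("most novel behaviour:", max(range(len(traces)), key=novelty))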
- Conference paper: Zhang K, Toni F, Williams M, 2022, "A federated cox model with non-proportional hazards", The 6th International Workshop on Health Intelligence, Publisher: Springer, Pages: 171-185, ISSN: 1860-949X.
Recent research has shown the potential for neural networks to improve upon classical survival models such as the Cox model, which is widely used in clinical practice. Neural networks, however, typically rely on data that are centrally available, whereas healthcare data are frequently held in secure silos. We present a federated Cox model that accommodates this data setting and also relaxes the proportional hazards assumption, allowing time-varying covariate effects. In this latter respect, our model does not require explicit specification of the time-varying effects, reducing upfront organisational costs compared to previous works. We experiment with publicly available clinical datasets and demonstrate that the federated model is able to perform as well as a standard model.
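A hedged sketch of one way to federate a Cox model: each silo computes the gradient of its own partial likelihood and a server averages the updates, so individual records and risk sets never leave their silo (in effect a silo-stratified Cox model). The paper's model additionally handles time-varying effects; the data below are synthetic.

    # Federated gradient ascent on silo-local Cox partial likelihoods.
    import numpy as np

    rng = np.random.default_rng(0)

    def local_gradient(beta, X, time, event):
        """Gradient of the Cox partial log-likelihood on one silo's data."""
        grad = np.zeros_like(beta)
        risk = np.exp(X @ beta)
        for i in np.where(event)[0]:
            at_risk = time >= time[i]                 # risk set at event time t_i
            w = risk[at_risk] / risk[at_risk].sum()
            grad += X[i] - w @ X[at_risk]
        return grad

    # Three silos with synthetic survival data over 2 covariates.
    silos = []
    for _ in range(3):
        X = rng.normal(size=(100, 2))
        time = rng.exponential(scale=np.exp(-X @ np.array([0.8, -0.5])))
        event = rng.uniform(size=100) < 0.7           # ~30% censored
        silos.append((X, time, event))

    beta = np.zeros(2)
    for _ in range(200):                              # server averages silo gradients
        beta += 0.01 * np.mean([local_gradient(beta, *s) for s in silos], axis=0)
    print("estimated beta:", beta.round(2))           # planted effect ~ (0.8, -0.5)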
- Conference paper: Cretu A-M, Houssiau F, Cully A, et al., 2022, "QuerySnout: automating the discovery of attribute inference attacks against query-based systems", CCS '22: 2022 ACM SIGSAC Conference on Computer and Communications Security, Publisher: ACM, Pages: 623-637.
Although query-based systems (QBS) have become one of the main solutions to share data anonymously, building QBSes that robustly protect the privacy of individuals contributing to the dataset is a hard problem. Theoretical solutions relying on differential privacy guarantees are difficult to implement correctly with reasonable accuracy, while ad-hoc solutions might contain unknown vulnerabilities. Evaluating the privacy provided by QBSes must thus be done by evaluating the accuracy of a wide range of privacy attacks. However, existing attacks against QBSes require time and expertise to develop, need to be manually tailored to the specific systems attacked, and are limited in scope. In this paper, we develop QuerySnout, the first method to automatically discover vulnerabilities in query-based systems. QuerySnout takes as input a target record and the QBS as a black box, analyzes its behavior on one or more datasets, and outputs a multiset of queries together with a rule to combine answers to them in order to reveal the sensitive attribute of the target record. QuerySnout uses evolutionary search techniques based on a novel mutation operator to find a multiset of queries likely to lead to an attack, and a machine learning classifier to infer the sensitive attribute from answers to the queries selected. We showcase the versatility of QuerySnout by applying it to two attack scenarios (assuming access to either the private dataset or to a different dataset from the same distribution), three real-world datasets, and a variety of protection mechanisms. We show that the attacks found by QuerySnout consistently match or outperform, sometimes by a large margin, the best attacks from the literature. We finally show how QuerySnout can be extended to QBSes that require a budget, and apply QuerySnout to a simple QBS based on the Laplace mechanism. Taken together, our results show how powerful and accurate attacks against QBSes can already be found by an automated system.
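A deliberately tiny toy in the spirit of the search loop described above: evolve the attribute set of a counting query against a noisy query-based system until its answers leak a target's sensitive bit. The real system uses a richer query language, multisets of queries and a learned classifier; everything below is a stand-in.

    # (1+1) evolutionary search for a leaky counting query against a toy QBS.
    import numpy as np

    rng = np.random.default_rng(0)
    N_PUB = 6                                        # public binary attributes
    target_pub = rng.integers(0, 2, size=N_PUB)      # target's known attributes

    def world(bit):
        """A fresh synthetic dataset in which row 0 is the target."""
        data = rng.integers(0, 2, size=(40, N_PUB + 1))
        data[0, :N_PUB], data[0, -1] = target_pub, bit
        return data

    def answer(data, attrs):
        """QBS: noisy count of rows matching the target on attrs with bit=1."""
        match = np.all(data[:, attrs] == target_pub[attrs], axis=1)
        return (match & (data[:, -1] == 1)).sum() + rng.normal(0, 0.3)

    def fitness(attrs, trials=30):
        """How often thresholding the answer recovers the planted bit."""
        hits = sum((answer(world(b), attrs) > 0.5) == b
                   for b in rng.integers(0, 2, size=trials))
        return hits / trials

    best = np.array([0])                             # start with one attribute
    for _ in range(60):                              # bit-flip style mutation
        cand = np.unique(np.append(best, rng.integers(0, N_PUB)))
        if rng.random() < 0.3 and len(cand) > 1:
            cand = np.delete(cand, rng.integers(len(cand)))
        if fitness(cand) >= fitness(best):
            best = cand
    print("query attributes:", best, "attack accuracy:", fitness(best))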
- Journal article: Chagot L, Quilodran-Casas C, Kalli M, et al., 2022, "Surfactant-laden droplet size prediction in a flow-focusing microchannel: a data-driven approach", Lab on a Chip, Vol: 22, Pages: 3848-3859, ISSN: 1473-0197.
- Citations: 7
- Conference paper: Albini E, Rago A, Baroni P, et al., 2022, "Descriptive accuracy in explanations: the case of probabilistic classifiers", 15th International Conference on Scalable Uncertainty Management (SUM 2022), Publisher: Springer, Pages: 279-294.
A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose, and complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.
- Conference paper: Proietti M, Toni F, 2022, "Learning assumption-based argumentation frameworks", 31st International Conference on Inductive Logic Programming (ILP 2022).
We propose a novel approach to logic-based learning which generates assumption-based argumentation (ABA) frameworks from positive and negative examples, using a given background knowledge. These ABA frameworks can be mapped onto logic programs with negation as failure that may be non-stratified. Whereas existing argumentation-based methods learn exceptions to general rules by interpreting the exceptions as rebuttal attacks, our approach interprets them as undercutting attacks. Our learning technique is based on the use of transformation rules, including some adapted from logic program transformation rules (notably folding) as well as others, such as rote learning and assumption introduction. We present a general strategy that applies the transformation rules in a suitable order to learn stratified frameworks, and we also propose a variant that handles the non-stratified case. We illustrate the benefits of our approach with a number of examples, which show that, on one hand, we are able to easily reconstruct other logic-based learning approaches and, on the other hand, we can work out in a very simple and natural way problems that seem to be hard for existing techniques.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.