Imperial College London

Dr Antonio Rago

Faculty of Engineering, Department of Computing

Research Associate
 
 
 

Contact

 

a.rago

Location

 

429 Huxley Building, South Kensington Campus



Publications


44 results found

Jiang J, Rago A, Leofante F, Toni F et al., 2023, Recourse under model multiplicity via argumentative ensembling, The 23rd International Conference on Autonomous Agents and Multi-Agent Systems, Publisher: ACM

Model Multiplicity (MM) arises when multiple, equally performing machine learning models can be trained to solve the same prediction task. Recent studies show that models obtained under MM may produce inconsistent predictions for the same input. When this occurs, it becomes challenging to provide counterfactual explanations (CEs), a common means for offering recourse recommendations to individuals negatively affected by models’ predictions. In this paper, we formalise this problem, which we name recourse-aware ensembling, and identify several desirable properties which methods for solving it should satisfy. We demonstrate that existing ensembling methods, naturally extended in different ways to provide CEs, fail to satisfy these properties. We then introduce argumentative ensembling, deploying computational argumentation as a means to guarantee robustness of CEs to MM, while also accommodating customisable user preferences. We show theoretically and experimentally that argumentative ensembling is able to satisfy properties which the existing methods lack, and that the trade-offs are minimal wrt the ensemble’s accuracy.

Conference paper
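The entry above turns on counterfactual explanations (CEs) as recourse recommendations. As a point of reference only, here is a minimal sketch of what a CE is in general, assuming a toy linear classifier and an illustrative helper name (simple_ce); it is not the argumentative ensembling method of the paper.

```python
# Toy sketch of a counterfactual explanation (CE): perturb a negatively
# classified input until a simple linear classifier's decision flips.
# The classifier, step size and function name are illustrative assumptions.
import numpy as np

def simple_ce(x, w, b, step=0.05, max_iter=200):
    """Return a perturbed copy of x that sign(w.x + b) classifies positively."""
    x_cf = x.astype(float)
    for _ in range(max_iter):
        if w @ x_cf + b > 0:              # favourable outcome reached
            return x_cf
        j = int(np.argmax(np.abs(w)))     # nudge the most influential feature
        x_cf[j] += step * np.sign(w[j])
    return None                           # no CE found within the budget

x = np.array([0.2, 0.4])                  # input receiving a negative outcome
w, b = np.array([1.0, -0.5]), -0.4
print(simple_ce(x, w, b))                 # a nearby input with a positive outcome
```

Under model multiplicity, each equally performing model induces its own decision boundary, so a CE computed this way against one model need not remain valid for the others, which is the robustness problem the paper addresses.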

Delaney B, Dominguez J, Prociuk D, Toni F, Curcin V, Darzi A, Marovic B, Cyras K, Cocarascu O, Ruiz F, Mi E, Mi E, Ramtale C, Rago A et al., 2023, ROAD2H: development and evaluation of an open-source explainable artificial intelligence approach for managing co-morbidity and clinical guidelines, Learning Health Systems, ISSN: 2379-6146

Introduction: Clinical decision support (CDS) systems (CDSSs) that integrate clinical guidelines need to reflect real-world co-morbidity. In patient-specific clinical contexts, transparent recommendations that allow for contraindications and other conflicts arising from co-morbidity are a requirement. In this work, we develop and evaluate a non-proprietary, standards-based approach to the deployment of computable guidelines with explainable argumentation, integrated with a commercial electronic health record (EHR) system in Serbia, a middle-income country in West Balkans. Methods: We used an ontological framework, the Transition-based Medical Recommendation (TMR) model, to represent, and reason about, guideline concepts, and chose the 2017 International global initiative for chronic obstructive lung disease (GOLD) guideline and a Serbian hospital as the deployment and evaluation site, respectively. To mitigate potential guideline conflicts, we used a TMR-based implementation of the Assumptions-Based Argumentation framework extended with preferences and Goals (ABA+G). Remote EHR integration of computable guidelines was via a microservice architecture based on HL7 FHIR and CDS Hooks. A prototype integration was developed to manage chronic obstructive pulmonary disease (COPD) with comorbid cardiovascular or chronic kidney diseases, and a mixed-methods evaluation was conducted with 20 simulated cases and five pulmonologists. Results: Pulmonologists agreed 97% of the time with the GOLD-based COPD symptom severity assessment assigned to each patient by the CDSS, and 98% of the time with one of the proposed COPD care plans. Comments were favourable on the principles of explainable argumentation; inclusion of additional co-morbidities was suggested in the future along with customisation of the level of explanation with expertise. Conclusion: An ontological model provided a flexible means of providing argumentation and explainable artificial intelligence for a long-term condition. Exte

Journal article

Jiang J, Lan J, Leofante F, Rago A, Toni F et al., 2023, Provably Robust and Plausible Counterfactual Explanations for Neural Networks via Robust Optimisation, The 15th Asian Conference on Machine Learning

Conference paper

Rago A, Gorur D, Toni F, 2023, ArguCast: a system for online multi-forecasting with gradual argumentation, Knowledge Representation 2023, Publisher: CEUR-WS.org, Pages: 40-51

Judgmental forecasting is a form of forecasting which employs (human) users to make predictions about specified future events. Judgmental forecasting has been shown to perform better than quantitative methods for forecasting, e.g. when historical data is unavailable or causal reasoning is needed. However, it has a number of limitations, arising from users’ irrationality and cognitive biases. To mitigate against these phenomena, we leverage on computational argumentation, a field which excels in the representation and resolution of conflicting knowledge and human-like reasoning, and propose novel ArguCast frameworks (ACFs) and the novel online system ArguCast, integrating ACFs. ACFs and ArguCast accommodate multi-forecasting, by allowing multiple users to debate on multiple forecasting predictions simultaneously, each potentially admitting multiple outcomes. Finally, we propose a novel notion of user rationality in ACFs based on votes on arguments in ACFs, allowing the filtering out of irrational opinions before obtaining group forecasting predictions by means commonly used in judgmental forecasting.

Conference paper

Rago A, Li H, Toni F, 2023, Interactive explanations by conflict resolution via argumentative exchanges, 20th International Conference on Principles of Knowledge Representation and Reasoning (KR2023), Publisher: IJCAI Organization, Pages: 582-592, ISSN: 2334-1033

As the field of explainable AI (XAI) is maturing, calls for interactive explanations for (the outputs of) AI models are growing, but the state-of-the-art predominantly focuses on static explanations. In this paper, we focus instead on interactive explanations framed as conflict resolution between agents (i.e. AI models and/or humans) by leveraging on computational argumentation. Specifically, we define Argumentative eXchanges (AXs) for dynamically sharing, in multi-agent systems, information harboured in individual agents’ quantitative bipolar argumentation frameworks towards resolving conflicts amongst the agents. We then deploy AXs in the XAI setting in which a machine and a human interact about the machine’s predictions. We identify and assess several theoretical properties characterising AXs that are suitable for XAI. Finally, we instantiate AXs for XAI by defining various agent behaviours, e.g. capturing counterfactual patterns of reasoning in machines and highlighting the effects of cognitive biases in humans. We show experimentally (in a simulated environment) the comparative advantages of these behaviours in terms of conflict resolution, and show that the strongest argument may not always be the most effective.

Conference paper

Toni F, Rago A, Cyras K, 2023, Forecasting with jury-based probabilistic argumentation, Journal of Applied Non-Classical Logics, Vol: 33, Pages: 224-243, ISSN: 1166-3081

Probabilistic Argumentation supports a form of hybrid reasoning by integrating quantitative (probabilistic) reasoning and qualitative argumentation in a natural way. Jury-based Probabilistic Argumentation supports the combination of opinions by different reasoners. In this paper we show how Jury-based Probabilistic Abstract Argumentation (JPAA) and a form of Jury-based Probabilistic Assumption-based Argumentation (JPABA) can naturally support forecasting, whereby subjective probability estimates are combined to make predictions about future occurrences of events. The form of JPABA we consider is an instance of JPAA and results from integrating Assumption-Based Argumentation (ABA) and probability spaces expressed by Bayesian networks, under the so-called constellation approach. It keeps the underlying structured argumentation and probabilistic reasoning modules separate while integrating them. We show how JPAA and (the considered form of) JPABA can be used to support forecasting by 1) supporting different forecasters (jurors) to determine the probability of arguments (and, in the JPABA case, sentences) with respect to their own probability spaces, while sharing arguments (and their components); and 2) supporting the aggregation of individual forecasts to produce group forecasts.

Journal article

Jiang J, Leofante F, Rago A, Toni F et al., 2023, Formalising the robustness of counterfactual explanations for neural networks, 37th AAAI Conference on Artificial Intelligence (AAAI 2023), Publisher: Association for the Advancement of Artificial Intelligence, Pages: 14901-14909, ISSN: 2374-3468

The use of counterfactual explanations (CFXs) is an increasingly popular explanation strategy for machine learning models. However, recent studies have shown that these explanations may not be robust to changes in the underlying model (e.g., following retraining), which raises questions about their reliability in real-world applications. Existing attempts towards solving this problem are heuristic, and the robustness to model changes of the resulting CFXs is evaluated with only a small number of retrained models, failing to provide exhaustive guarantees. To remedy this, we propose the first notion to formally and deterministically assess the robustness (to model changes) of CFXs for neural networks, that we call ∆-robustness. We introduce an abstraction framework based on interval neural networks to verify the ∆-robustness of CFXs against a possibly infinite set of changes to the model parameters, i.e., weights and biases. We then demonstrate the utility of this approach in two distinct ways. First, we analyse the ∆-robustness of a number of CFX generation methods from the literature and show that they unanimously host significant deficiencies in this regard. Second, we demonstrate how embedding ∆-robustness within existing methods can provide CFXs which are provably robust.

Conference paper
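As a rough illustration of the interval abstraction mentioned in the abstract above, the sketch below propagates a fixed input through one dense-plus-ReLU layer whose weights and biases are only known to lie within intervals; this is generic interval bound propagation under those assumptions, not the paper's ∆-robustness verification procedure, and the function name is illustrative.

```python
# Minimal sketch of interval bound propagation through one dense + ReLU layer
# whose parameters lie in [W_lo, W_hi] and [b_lo, b_hi]; illustrative only.
import numpy as np

def interval_dense_relu(x, W_lo, W_hi, b_lo, b_hi):
    """Bounds on ReLU(W @ x + b) over all W, b within the given intervals."""
    x_pos, x_neg = np.maximum(x, 0.0), np.minimum(x, 0.0)
    lo = W_lo @ x_pos + W_hi @ x_neg + b_lo   # smallest achievable pre-activation
    hi = W_hi @ x_pos + W_lo @ x_neg + b_hi   # largest achievable pre-activation
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

x = np.array([1.0, -2.0])
W = np.array([[0.5, -0.3], [0.1, 0.2]])
lo, hi = interval_dense_relu(x, W - 0.05, W + 0.05,
                             np.full(2, -0.01), np.full(2, 0.01))
```

Informally, a counterfactual counts as robust to such parameter changes if its favourable class keeps a winning score for every admissible choice of weights and biases, which is the kind of guarantee the abstraction is used to verify.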

Rago A, Russo F, Albini E, Toni F, Baroni P et al., 2023, Explaining classifiers’ outputs with causal models and argumentation, Journal of Applied Logics, Vol: 10, Pages: 421-449, ISSN: 2631-9810

We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models’ outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models. We then evaluate our method empirically when the causal models represent (Bayesian and neural network) machine learning models for classification. The results show advantages over a popular approach from the literature, both in highlighting specific relationships between feature and classification variables and in generating counterfactual explanations with respect to a commonly used metric.

Journal article

Albini E, Rago A, Baroni P, Toni F et al., 2023, Achieving descriptive accuracy in explanations via argumentation: the case of probabilistic classifiers, Frontiers in Artificial Intelligence, Vol: 6, Pages: 1-18, ISSN: 2624-8212

The pursuit of trust in and fairness of AI systems in order to enable human-centric goals has been gathering pace of late, often supported by the use of explanations for the outputs of these systems. Several properties of explanations have been highlighted as critical for achieving trustworthy and fair AI systems, but one that has thus far been overlooked is that of descriptive accuracy (DA), i.e., that the explanation contents are in correspondence with the internal working of the explained system. Indeed, the violation of this core property would lead to the paradoxical situation of systems producing explanations which are not suitably related to how the system actually works: clearly this may hinder user trust. Further, if explanations violate DA then they can be deceitful, resulting in an unfair behavior toward the users. Crucial as the DA property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalizing DA and of analyzing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature, variants thereof and a novel form of explanation that we propose. We conduct experiments with a varied selection of concrete probabilistic classifiers and highlight the importance, with a user study, of our most demanding notion of dialectical DA, which our novel method satisfies by design and others may violate. We thus demonstrate how DA could be a critical component in achieving trustworthy and fair systems, in line with the principles of human-centric AI.

Journal article

Cocarascu O, Doutre S, Mailly JG, Rago A et al., 2023, Preface, CEUR Workshop Proceedings, Vol: 3472, ISSN: 1613-0073

Journal article

Albini E, Rago A, Baroni P, Toni F et al., 2022, Descriptive accuracy in explanations: the case of probabilistic classifiers, 15th International Conference on Scalable Uncertainty Management (SUM 2022), Publisher: Springer, Pages: 279-294

A user receiving an explanation for outcomes produced by an artificially intelligent system expects that it satisfies the key property of descriptive accuracy (DA), i.e. that the explanation contents are in correspondence with the internal working of the system. Crucial as this property appears to be, it has been somehow overlooked in the XAI literature to date. To address this problem, we consider the questions of formalising DA and of analysing its satisfaction by explanation methods. We provide formal definitions of naive, structural and dialectical DA, using the family of probabilistic classifiers as the context for our analysis. We evaluate the satisfaction of our given notions of DA by several explanation methods, amounting to two popular feature-attribution methods from the literature and a novel form of explanation that we propose and complement our analysis with experiments carried out on a varied selection of concrete probabilistic classifiers.

Conference paper

Jiang J, Rago A, Toni F, 2022, Should counterfactual explanations always be data instances?, XLoKR 2022: The Third Workshop on Explainable Logic-Based Knowledge Representation

Counterfactual explanations (CEs) are an increasingly popular way of explaining machine learning classifiers. Predominantly, they amount to data instances pointing to potential changes to the inputs that would lead to alternative outputs. In this position paper we question the widespread assumption that CEs should always be data instances, and argue instead that in some cases they may be better understood in terms of special types of relations between input features and classification variables. We illustrate how a special type of these relations, amounting to critical influences, can characterise and guide the search for data instances deemed suitable as CEs. These relations also provide compact indications of which input features - rather than their specific values in data instances - have counterfactual value.

Conference paper

Rago A, Baroni P, Toni F, 2022, Explaining causal models with argumentation: the case of bi-variate reinforcement, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 505-509, ISSN: 2334-1033

Causal models are playing an increasingly important role in machine learning, particularly in the realm of explainable AI. We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for the models’ outputs. The conceptualisation is based on reinterpreting desirable properties of semantics of AFs as explanation moulds, which are means for characterising the relations in the causal model argumentatively. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement as an explanation mould to forge bipolar AFs as explanations for the outputs of causal models. We perform a theoretical evaluation of these argumentative explanations, examining whether they satisfy a range of desirable explanatory and argumentative properties.

Conference paper

Irwin B, Rago A, Toni F, 2022, Forecasting argumentation frameworks, 19th International Conference on Principles of Knowledge Representation and Reasoning (KR 2022), Publisher: IJCAI Organisation, Pages: 533-543, ISSN: 2334-1033

We introduce Forecasting Argumentation Frameworks (FAFs), a novel argumentation-based methodology for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time about the probability of outcomes, e.g. the winner of a political election or a fluctuation in inflation rates, whilst flagging perceived irrationality in the agents’ behaviour with a view to improving their forecasting accuracy. FAFs include five argument types, amounting to standard pro/con arguments, as in bipolar argumentation, as well as novel proposal arguments and increase/decrease amendment arguments. We adapt an existing gradual semantics for bipolar argumentation to determine the aggregated dialectical strength of proposal arguments and define irrational behaviour. We then give a simple aggregation function which produces a final group forecast from rational agents’ individual forecasts. We identify and study properties of FAFs and conduct an empirical evaluation which signals FAFs’ potential to increase the forecasting accuracy of participants.

Conference paper

Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F et al., 2022, Neural QBAFs: explaining neural networks under LRP-based argumentation frameworks, International Conference of the Italian Association for Artificial Intelligence, Publisher: Springer International Publishing, Pages: 429-444, ISSN: 0302-9743

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.

Conference paper
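For readers unfamiliar with the underlying structure, here is a minimal sketch of a quantitative bipolar argumentation framework (QBAF) with a simple iterative gradual semantics, assuming illustrative names (QBAF, qbaf_strengths); the LRP-based semantics and the simplification step studied in the paper are not reproduced here.

```python
# Minimal sketch of a QBAF: arguments with base scores plus attack and support
# relations, evaluated with a simple (illustrative) iterative gradual semantics.
from dataclasses import dataclass, field

@dataclass
class QBAF:
    base_scores: dict                               # argument -> base score in [0, 1]
    attacks: list = field(default_factory=list)     # (attacker, attacked) pairs
    supports: list = field(default_factory=list)    # (supporter, supported) pairs

def qbaf_strengths(qbaf, iterations=50):
    """Each argument's strength is its base score, pushed down by the strengths
    of its attackers and up by the strengths of its supporters."""
    strength = dict(qbaf.base_scores)
    for _ in range(iterations):
        new = {}
        for arg, base in qbaf.base_scores.items():
            att = sum(strength[a] for a, b in qbaf.attacks if b == arg)
            sup = sum(strength[s] for s, b in qbaf.supports if b == arg)
            new[arg] = min(1.0, max(0.0, base - att + sup))  # clamp to [0, 1]
        strength = new
    return strength

q = QBAF(base_scores={"a": 0.5, "b": 0.6, "c": 0.4},
         attacks=[("b", "a")], supports=[("c", "a")])
print(qbaf_strengths(q))                            # e.g. {'a': 0.3, 'b': 0.6, 'c': 0.4}
```

In the papers listed here, the arguments and relations would instead be extracted from a neural network, yielding a typically very large nQBAF, which is why the manner of its simplification matters for the quality of the resulting explanation.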

Irwin B, Rago A, Toni F, 2022, Argumentative forecasting, AAMAS 2022, Publisher: ACM, Pages: 1636-1638

We introduce the Forecasting Argumentation Framework (FAF), a novel argumentation framework for forecasting informed by recent judgmental forecasting research. FAFs comprise update frameworks which empower (human or artificial) agents to argue over time with and about probability of scenarios, whilst flagging perceived irrationality in their behaviour with a view to improving their forecasting accuracy. FAFs include three argument types with future forecasts and aggregate the strength of these arguments to inform estimates of the likelihood of scenarios. We describe an implementation of FAFs for supporting forecasting agents.

Conference paper

Rago A, Russo F, Albini E, Baroni P, Toni F et al., 2022, Forging argumentative explanations from causal models, Proceedings of the 5th Workshop on Advances in Argumentation in Artificial Intelligence 2021 co-located with the 20th International Conference of the Italian Association for Artificial Intelligence (AIxIA 2021), Publisher: CEUR Workshop Proceedings, Pages: 1-15, ISSN: 1613-0073

We introduce a conceptualisation for generating argumentation frameworks (AFs) from causal models for the purpose of forging explanations for models' outputs. The conceptualisation is based on reinterpreting properties of semantics of AFs as explanation moulds, which are means for characterising argumentative relations. We demonstrate our methodology by reinterpreting the property of bi-variate reinforcement in bipolar AFs, showing how the extracted bipolar AFs may be used as relation-based explanations for the outputs of causal models.

Conference paper

Čyras K, Kampik T, Cocarascu O, Rago A, Amgoud L, Baroni P, Bassiliades N, Black E, Calegari R, Collins A, Delobelle J, Fan X, García AJ, Hunter A, Kakas A, Kökciyan N, Liao B, Luo J, Morveli-Espinoza M, Mosca F, Nieves JC, Panisson AR, Parsons S, Potyka N, Prakken H, Rienstra T, Rodrigues O, Saribatur ZG, Sassoon I, Sklar E, Straßer C, Tohme F, Ulbricht M, Villata S, Vassiliades A, Wallner J, van Woerkom W, Molinet B et al., 2022, Preface, ISSN: 1613-0073

Conference paper

Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F et al., 2021, LRP-based argumentative explanations for neural networks, XAI.it 2021 - Italian Workshop on Explainable Artificial Intelligence, Pages: 71-84, ISSN: 1613-0073

In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all important for the quality of the resulting explanation.

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2021, Influence-driven explanations for Bayesian network classifiers, PRICAI 2021, Publisher: Springer Verlag, Pages: 88-100, ISSN: 0302-9743

We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages wrt other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs’ behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging on this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs’ capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.

Conference paper

Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2021, Argumentation as a framework for interactive explanations for recommendations, KR 2020, 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI, Pages: 805-815, ISSN: 2334-1033

As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy wrt users’ preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.

Conference paper

Cyras K, Rago A, Albini E, Baroni P, Toni F et al., 2021, Argumentative XAI: a survey, The 30th International Joint Conference on Artificial Intelligence (IJCAI-21), Publisher: International Joint Conferences on Artificial Intelligence, Pages: 4392-4399

Explainable AI (XAI) has been investigated for decades and, together with AI itself, has witnessed unprecedented growth in recent years. Among various approaches to XAI, argumentative models have been advocated in both the AI and social science literature, as their dialectical nature appears to match some basic desirable features of the explanation activity. In this survey we overview XAI approaches built using methods from the field of computational argumentation, leveraging its wide array of reasoning abstractions and explanation delivery methods. We overview the literature focusing on different types of explanation (intrinsic and post-hoc), different models with which argumentation-based explanations are deployed, different forms of delivery, and different argumentation frameworks they use. We also lay out a roadmap for future work.

Conference paper

Cocarascu O, Cyras K, Rago A, Toni F et al., 2021, Mining property-driven graphical explanations for data-centric AI from argumentation frameworks, Human-Like Machine Intelligence, Pages: 93-113, ISBN: 9780198862536

Book chapter

Albini E, Baroni P, Rago A, Toni F et al., 2021, Interpreting and explaining PageRank through argumentation semantics, Intelligenza Artificiale, Vol: 15, Pages: 17-34, ISSN: 1724-8035

In this paper we show how re-interpreting PageRank as an argumentation semantics for a bipolar argumentation framework empowers its explainability. After showing that PageRank, naively re-interpreted as an argumentation semantics for support frameworks, fails to satisfy some generally desirable properties, we propose a novel approach able to reconstruct PageRank as a gradual semantics of a suitably defined bipolar argumentation framework, while satisfying these properties. We then show how the theoretical advantages afforded by this approach also enjoy an enhanced explanatory power: we propose several types of argument-based explanations for PageRank, each of which focuses on different aspects of the algorithm and uncovers information useful for the comprehension of its results.

Journal article

Rago A, Cocarascu O, Bechlivanidis C, Lagnado D, Toni F et al., 2021, Argumentative explanations for interactive recommendations, Artificial Intelligence, Vol: 296, Pages: 1-22, ISSN: 0004-3702

A significant challenge for recommender systems (RSs), and in fact for AI systems in general, is the systematic definition of explanations for outputs in such a way that both the explanations and the systems themselves are able to adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, which are customisable to users in their content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding, from which diverse and varied argumentative explanations for recommendations can be obtained. These recommendations are interactive because they can be questioned by users and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preferences for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.

Journal article

Dejl A, He P, Mangal P, Mohsin H, Surdu B, Voinea E, Albini E, Lertvittayakumjorn P, Rago A, Toni F et al., 2021, Argflow: a toolkit for deep argumentative explanations for neural networks, Proc. of the 20th International Conference on Autonomous Agents and Multiagent Systems, Pages: 1761-1763, ISSN: 1558-2914

In recent years, machine learning (ML) models have been successfully applied in a variety of real-world applications. However, they are often complex and incomprehensible to human users. This can decrease trust in their outputs and render their usage in critical settings ethically problematic. As a result, several methods for explaining such ML models have been proposed recently, in particular for black-box models such as deep neural networks (NNs). Nevertheless, these methods predominantly explain outputs in terms of inputs, disregarding the inner workings of the ML model computing those outputs. We present Argflow, a toolkit enabling the generation of a variety of ‘deep’ argumentative explanations (DAXs) for outputs of NNs on classification tasks.

Conference paper

Rago A, Albini E, Baroni P, Toni F et al., 2021, Influence-driven explanations for Bayesian network classifiers, Publisher: arXiv

One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated using the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond heuristic explanation methods and allow the information underpinning an explanation to be tailored to a particular context's and user's requirements, e.g., IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages when compared with existing explanation methods.

Working paper

Albini E, Baroni P, Rago A, Toni F et al., 2020, PageRank as an Argumentation Semantics, Biennial International Conference on Computational Models of Argument (COMMA), Publisher: IOS Press, Pages: 55-66, ISSN: 0922-6389

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2020, Relation-Based Counterfactual Explanations for Bayesian Network Classifiers, The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)

Conference paper

Cocarascu O, Rago A, Toni F, 2020, Explanation via Machine Arguing, Pages: 53-84, ISBN: 9783030600662

As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad-hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this tutorial we will overview recent approaches showing how these challenges can be addressed by utilising forms of machine arguing as the scaffolding underpinning explanations that are delivered to users. Machine arguing amounts to the deployment of methods from computational argumentation in AI with suitably mined argumentation frameworks, which provide abstractions of “debates”. Computational argumentation has been widely used to support applications requiring information exchange between AI systems and users, facilitated by the fact that the capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in different ways, namely: by building explainable systems with argumentative foundations from linguistic data (focusing on reviews), or by extracting argumentative reasoning from existing…

Book chapter

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
