Imperial College London

Professor Francesca Toni

Faculty of Engineering, Department of Computing

Professor in Computational Logic
 
 
 

Contact

 

+44 (0)20 7594 8228 · f.toni · Website

 
 

Location

 

430 Huxley Building, South Kensington Campus



 

Publications


326 results found (30 shown below)

Rago A, Cocarascu O, Bechlivanidis C, Toni F et al., 2020, Argumentation as a framework for interactive explanations for recommendations, 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI

As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy wrt users’ preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.

Conference paper

Albini E, Rago A, Baroni P, Toni F et al., 2020, Relation-Based Counterfactual Explanations for Bayesian Network Classifiers, The 29th International Joint Conference on Artificial Intelligence (IJCAI 2020)

Conference paper

Cocarascu O, Cabrio E, Villata S, Toni F et al., 2020, A dataset independent set of baselines for relation prediction in argument mining, Publisher: arXiv

Argument Mining is the research area which aims at extracting argument components and predicting argumentative relations (i.e., support and attack) from text. In particular, numerous approaches have been proposed in the literature to predict the relations holding between the arguments, and application-specific annotated resources were built for this purpose. Despite the fact that these resources have been created to experiment on the same task, the definition of a single relation prediction method to be successfully applied to a significant portion of these datasets is an open research problem in Argument Mining. This means that none of the methods proposed in the literature can be easily ported from one resource to another. In this paper, we address this problem by proposing a set of dataset independent strong neural baselines which obtain homogeneous results on all the datasets proposed in the literature for the argumentative relation prediction task. Thus, our baselines can be employed by the Argument Mining community to compare more effectively how well a method performs on the argumentative relation prediction task.

Working paper

Cocarascu O, Stylianou A, Cyras K, Toni F et al., 2020, Data-empowered argumentation for dialectically explainable predictions, 24th European Conference on Artificial Intelligence (ECAI 2020), Publisher: IOS Press

Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of human sectors. Yet, in some settings this potential is hindered by these data-centric AI methods being mostly opaque. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is being advocated in others, especially when high-stake decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr in short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which may not be linguistic in general but may nonetheless be deemed to be ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also providing naturally dialectical explanations.

Conference paper

Baroni P, Toni F, Verheij B, 2020, On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games: 25 years later (foreword), Argument & Computation, Vol: 11, Pages: 1-14, ISSN: 1946-2166

Journal article

Cyras K, Karamlou A, Lee M, Letsios D, Misener R, Toni F et al., 2020, AI-assisted Schedule Explainer for Nurse Rostering, Publisher: International Foundation for Autonomous Agents and Multiagent Systems, Pages: 2101-2103

Conference paper

Jha R, Belardinelli F, Toni F, 2020, Formal Verification of Debates in Argumentation Theory., CoRR, Vol: abs/1912.05828

Journal article

Jha R, Belardinelli F, Toni F, 2020, Formal verification of debates in argumentation theory., Publisher: ACM, Pages: 940-947

Conference paper

Altuncu MT, Sorin E, Symons JD, Mayer E, Yaliraki SN, Toni F, Barahona M et al., 2019, Extracting information from free text through unsupervised graph-based clustering: an application to patient incident records

The large volume of text in electronic healthcare records often remains underused due to a lack of methodologies to extract interpretable content. Here we present an unsupervised framework for the analysis of free text that combines text-embedding with paragraph vectors and graph-theoretical multiscale community detection. We analyse text from a corpus of patient incident reports from the National Health Service in England to find content-based clusters of reports in an unsupervised manner and at different levels of resolution. Our unsupervised method extracts groups with high intrinsic textual consistency and compares well against categories hand-coded by healthcare personnel. We also show how to use our content-driven clusters to improve the supervised prediction of the degree of harm of the incident based on the text of the report. Finally, we discuss future directions to monitor reports over time, and to detect emerging trends outside pre-existing categories.

Book chapter
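
A minimal sketch of the general pipeline described in the abstract above (document embeddings followed by graph-based community detection). It is not the authors' method: it uses gensim's Doc2Vec and networkx's modularity-based communities in place of the paper's paragraph vectors and multiscale Markov Stability clustering, and the `reports` list is invented toy data rather than NHS incident records.

# Sketch only: embed short free-text reports, build a similarity graph,
# then group the reports by community detection. Assumes gensim 4.x,
# networkx and scikit-learn are installed; the reports are toy data.
import itertools
import networkx as nx
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from sklearn.metrics.pairwise import cosine_similarity

reports = [
    "patient fell while transferring from bed to chair",
    "medication dose was recorded incorrectly on the chart",
    "patient slipped on a wet floor near the ward entrance",
    "wrong drug dispensed due to similar packaging",
]

# 1. Document embeddings (the paper uses paragraph vectors; Doc2Vec is one implementation).
tagged = [TaggedDocument(words=r.split(), tags=[i]) for i, r in enumerate(reports)]
model = Doc2Vec(tagged, vector_size=32, min_count=1, epochs=100, seed=0)
vectors = [model.dv[i] for i in range(len(reports))]

# 2. Similarity graph over reports, keeping only positively similar pairs.
sims = cosine_similarity(vectors)
G = nx.Graph()
G.add_nodes_from(range(len(reports)))
for i, j in itertools.combinations(range(len(reports)), 2):
    if sims[i, j] > 0:
        G.add_edge(i, j, weight=float(sims[i, j]))

# 3. Community detection (the paper uses multiscale Markov Stability; modularity is a stand-in).
for community in nx.algorithms.community.greedy_modularity_communities(G, weight="weight"):
    print(sorted(community))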

Lertvittayakumjorn P, Toni F, 2020, Human-grounded evaluations of explanation methods for text classification, 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Publisher: ACL Anthology

Due to the black-box nature of deep learning models, methods for explaining the models’ results are crucial to gain trust from humans and support collaboration between AIs and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the various explanation methods we consider and show the degree to which these methods could serve for each purpose.

Conference paper
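
The paper above compares several explanation methods under human-grounded evaluations. As a minimal, hedged illustration of just one model-agnostic method of this kind, the sketch below applies LIME to a toy scikit-learn text classifier; the data, labels and classifier are invented and are not the CNN models or evaluation protocol from the paper.

# Sketch: explain a text classifier's prediction with LIME (one model-agnostic
# explanation method). Assumes the lime and scikit-learn packages; toy data.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great plot and acting", "boring and far too long",
         "wonderful performances", "a dull, tedious film"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)

explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance("a great but slightly long film",
                                         clf.predict_proba, num_features=4)
print(explanation.as_list())  # (word, weight) pairs: each word's contribution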

Schulz C, Toni F, 2019, On the responsibility for undecisiveness in preferred and stable labellings in abstract argumentation (extended abstract), IJCAI International Joint Conference on Artificial Intelligence, Pages: 6382-6386, ISSN: 1045-0823

Different semantics of abstract Argumentation Frameworks (AFs) provide different levels of decisiveness for reasoning about the acceptability of conflicting arguments. The stable semantics is useful for applications requiring a high level of decisiveness, as it assigns to each argument the label “accepted” or the label “rejected”. Unfortunately, stable labellings are not guaranteed to exist, thus raising the question as to which parts of AFs are responsible for the non-existence. In this paper, we address this question by investigating a more general question concerning preferred labellings (which may be less decisive than stable labellings but are always guaranteed to exist), namely why a given preferred labelling may not be stable and thus undecided on some arguments. In particular, (1) we give various characterisations of parts of an AF, based on the given preferred labelling, and (2) we show that these parts are indeed responsible for the undecisiveness if the preferred labelling is not stable. We then use these characterisations to explain the non-existence of stable labellings.

Conference paper
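
To make the labelling notions in the abstract above concrete, here is a small brute-force sketch (not the paper's characterisations of the 'responsible' parts): it enumerates the complete labellings of a tiny, invented AF containing an odd attack cycle, then reports the preferred labellings and the (here non-existent) stable ones.

# Sketch: brute-force complete, preferred and stable labellings of a small
# abstract argumentation framework. The AF below (a 3-cycle a->b->c->a plus
# c->d) is invented; it has no stable labelling, so arguments stay undecided.
from itertools import product

args = ["a", "b", "c", "d"]
attacks = {("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")}

def attackers(x):
    return [s for (s, t) in attacks if t == x]

def is_complete(lab):
    # complete labelling: an argument is "in" iff all its attackers are "out",
    # and "out" iff at least one attacker is "in"; otherwise it is "undec"
    for x in args:
        all_out = all(lab[y] == "out" for y in attackers(x))
        some_in = any(lab[y] == "in" for y in attackers(x))
        if (lab[x] == "in") != all_out or (lab[x] == "out") != some_in:
            return False
    return True

complete = []
for labels in product(["in", "out", "undec"], repeat=len(args)):
    lab = dict(zip(args, labels))
    if is_complete(lab):
        complete.append(lab)

def in_set(lab):
    return {x for x in args if lab[x] == "in"}

preferred = [lab for lab in complete
             if not any(in_set(lab) < in_set(other) for other in complete)]
stable = [lab for lab in complete if "undec" not in lab.values()]

print("preferred:", preferred)
print("stable:", stable if stable else "none -- the odd cycle leaves arguments undecided")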

Čyras K, Birch D, Guo Y, Toni F, Dulay R, Turvey S, Greenberg D, Hapuarachchi T et al., 2019, Explanations by arbitrated argumentative dispute, Expert Systems with Applications, Vol: 127, Pages: 141-156, ISSN: 0957-4174

Explaining outputs determined algorithmically by machines is one of the most pressing and studied problems in Artificial Intelligence (AI) nowadays, but the equally pressing problem of using AI to explain outputs determined by humans is less studied. In this paper we advance a novel methodology integrating case-based reasoning and computational argumentation from AI to explain outcomes, determined by humans or by machines, indifferently, for cases characterised by discrete (static) features and/or (dynamic) stages. At the heart of our methodology lies the concept of arbitrated argumentative disputes between two fictitious disputants arguing, respectively, for or against a case's output in need of explanation, and where this case acts as an arbiter. Specifically, in explaining the outcome of a case in question, the disputants put forward as arguments relevant cases favouring their respective positions, with arguments/cases conflicting due to their features, stages and outcomes, and the applicability of arguments/cases arbitrated by the features and stages of the case in question. We in addition use arbitrated dispute trees to identify the excess features that help the winning disputant to win the dispute and thus complement the explanation. We evaluate our novel methodology theoretically, proving desirable properties thereof, and empirically, in the context of primary legislation in the United Kingdom (UK), concerning the passage of Bills that may or may not become laws. High-level factors underpinning a Bill's passage are its content-agnostic features such as type, number of sponsors, ballot order, as well as the UK Parliament's rules of conduct. Given high numbers of proposed legislation (hundreds of Bills a year), it is hard even for legal experts to explain on a large scale why certain Bills pass or not. We show how our methodology can address this problem by automatically providing high-level explanations of why Bills pass or not, based on the given Bills and the [...]

Journal article

Čyras K, Letsios D, Misener R, Toni F et al., 2019, Argumentation for explainable scheduling, Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI, Pages: 2752-2759

Mathematical optimization offers highly-effective tools for finding solutions for problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can be from a solver or of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.

Conference paper
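
The abstract above describes AFs whose stable extensions correspond to feasible, efficient, or decision-respecting schedules. The sketch below is only in the spirit of that idea, not the paper's actual constructions: it encodes 'each job runs on exactly one machine' as mutual attacks between alternative assignments of the same job, so that the stable extensions of the resulting AF are exactly the feasible assignments. The jobs and machines are invented toy data.

# Sketch (not the paper's exact AFs): arguments are (job, machine) assignments;
# assignments of the same job attack each other. Stable extensions = one machine
# per job, i.e. the feasible assignments.
from itertools import chain, combinations

jobs, machines = ["j1", "j2"], ["m1", "m2"]
arguments = [(j, m) for j in jobs for m in machines]
attacks = {(a, b) for a in arguments for b in arguments if a != b and a[0] == b[0]}

def is_stable(extension):
    ext = set(extension)
    conflict_free = not any((a, b) in attacks for a in ext for b in ext)
    attacks_all_outside = all(any((a, b) in attacks for a in ext)
                              for b in arguments if b not in ext)
    return conflict_free and attacks_all_outside

all_subsets = chain.from_iterable(combinations(arguments, r) for r in range(len(arguments) + 1))
for extension in filter(is_stable, all_subsets):
    print(dict(extension))   # e.g. {'j1': 'm1', 'j2': 'm2'} -- one feasible schedule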

Cocarascu O, Rago A, Toni F, 2019, From formal argumentation to conversational systems, 1st Workshop on Conversational Interaction Systems (WCIS 2019), Publisher: ACM

Arguing is amenable to humans and argumentation serves as a natural form of interaction in many settings. Several formal models of argumentation have been proposed in the AI literature as abstractions of various forms of debates. We show how these models can serve as the backbone of conversational systems that can explain machine-computed outputs. These systems can engage in conversations with humans following templates instantiated on argumentation models that are automatically obtained from the data analysis underpinning the machine-computed outputs. As an illustration, we consider one such argumentation-empowered conversational system and exemplify its use and benefits in two different domains, for recommending movies and hotels based on the aggregation of information drawn from reviews.

Conference paper

Karamlou A, Cyras K, Toni F, 2019, Complexity results and algorithms for bipolar argumentation, International Conference on Autonomous Agents and MultiAgent Systems, Publisher: ACM, Pages: 1713-1721

Bipolar Argumentation Frameworks (BAFs) admit several interpretations of the support relation and diverging definitions of semantics. Recently, several classes of BAFs have been captured as instances of bipolar Assumption-Based Argumentation, a class of Assumption-Based Argumentation (ABA). In this paper, we establish the complexity of bipolar ABA, and consequently of several classes of BAFs. In addition to the standard five complexity problems, we analyse the rarely-addressed extension enumeration problem too. We also advance backtracking-driven algorithms for enumerating extensions of bipolar ABA frameworks, and consequently of BAFs under several interpretations. We prove soundness and completeness of our algorithms, describe their implementation and provide a scalability evaluation. We thus contribute to the study of the as yet uninvestigated complexity problems of (variously interpreted) BAFs as well as of bipolar ABA, and provide the lacking implementations thereof.

Conference paper

Karamlou A, Cyras K, Toni F, 2019, Deciding the winner of a debate using bipolar argumentation, International Conference on Autonomous Agents and MultiAgent Systems, Publisher: IFAAMAS / ACM, Pages: 2366-2368, ISSN: 2523-5699

Bipolar Argumentation Frameworks (BAFs) are an important class of argumentation frameworks useful for capturing, reasoning with, and deriving conclusions from debates. They have the potential to make solid contributions to real-world multi-agent systems and human-agent interaction in domains such as legal reasoning, healthcare and politics. Despite this fact, practical systems implementing BAFs are largely lacking. In this demonstration, we provide a software system implementing novel algorithms for calculating extensions (winning sets of arguments) of BAFs. Participants in the demonstration will be able to input their own debates into our system, and watch a graphical representation of the algorithms as they process information and decide which sets of arguments are winners of the debate.

Conference paper

Cocarascu O, Rago A, Toni F, 2019, Extracting dialogical explanations for review aggregations with argumentative dialogical agents, International Conference on Autonomous Agents and MultiAgent Systems (AAMAS), Publisher: International Foundation for Autonomous Agents and Multiagent Systems

The aggregation of online reviews is fast becoming the chosen method of quality control for users in various domains, from retail to entertainment. Consequently, fair, thorough and explainable aggregation of reviews is increasingly sought-after. We consider the movie review domain, and in particular Rotten Tomatoes' ubiquitous (and arguably over-simplified) aggregation method, the Tomatometer Score (TS). For a movie, this amounts to the percentage of critics giving the movie a positive review. We define a novel form of argumentative dialogical agent (ADA) for explaining the reasoning within the reviews. ADA integrates: 1.) NLP with reviews to extract a Quantitative Bipolar Argumentation Framework (QBAF) for any chosen movie to provide the underlying structure of explanations, and 2.) gradual semantics for QBAFs for deriving a dialectical strength measure for movies, as an alternative to the TS, satisfying desirable properties for obtaining explanations. We evaluate ADA using some prominent NLP methods and gradual semantics for QBAFs. We show that they provide a dialectical strength which is comparable with the TS, while at the same time being able to provide dialogical explanations of why a movie obtained its strength via interactions between the user and ADA.

Conference paper
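
A minimal sketch of the kind of computation underlying ADA's dialectical strength: a tiny, invented QBAF for one movie, evaluated with a DF-QuAD-style gradual semantics (attackers and supporters aggregated via a probabilistic sum, then combined with the base score). This illustrates gradual semantics for QBAFs in general, not the paper's NLP pipeline or its experiments.

# Sketch: strength of a (toy) movie argument in a QBAF under a DF-QuAD-style
# gradual semantics -- an alternative to simply averaging review scores.
from math import prod

base_score = {"movie": 0.5, "acting": 0.5, "plot": 0.5, "pacing": 0.5}
supporters = {"movie": ["acting", "plot"]}   # arguments drawn from positive review content
attackers  = {"movie": ["pacing"]}           # arguments drawn from negative review content

def aggregate(strengths):
    # probabilistic-sum aggregation: 1 - prod(1 - s_i); 0 if there are no children
    return 1 - prod(1 - s for s in strengths) if strengths else 0.0

def strength(arg):
    va = aggregate([strength(a) for a in attackers.get(arg, [])])
    vs = aggregate([strength(s) for s in supporters.get(arg, [])])
    v0 = base_score[arg]
    return v0 - v0 * (va - vs) if va >= vs else v0 + (1 - v0) * (vs - va)

print(round(strength("movie"), 3))   # 0.625 here: support outweighs attack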

Zhong Q, Fan X, Luo X, Toni F et al., 2019, An explainable multi-attribute decision model based on argumentation, Expert Systems with Applications, Vol: 117, Pages: 42-61, ISSN: 0957-4174

We present a multi-attribute decision model and a method for explaining the decisions it recommends based on an argumentative reformulation of the model. Specifically, (i) we define a notion of best (i.e., minimally redundant) decisions amounting to achieving as many goals as possible and exhibiting as few redundant attributes as possible, and (ii) we generate explanations for why a decision is best or better than or as good as another, using a mapping between the given decision model and an argumentation framework, such that best decisions correspond to admissible sets of arguments. Concretely, natural language explanations are generated automatically from dispute trees sanctioning the admissibility of arguments. Throughout, we illustrate the power of our approach within a legal reasoning setting, where best decisions amount to past cases that are most similar to a given new, open case. Finally, we conduct an empirical evaluation of our method with legal practitioners, confirming that our method is effective for the choice of most similar past cases and helpful to understand automatically generated recommendations.

Journal article
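
As a very rough illustration of the decision criterion sketched in the abstract above (best = as many goals achieved as possible, as few redundant attributes as possible), the snippet below uses one simplistic reading in which a redundant attribute is one contributing to no goal. It is not the paper's formal model and omits the argumentative reformulation and explanation generation entirely; the decisions, attributes and goals are invented.

# Sketch: pick a "best" decision by (more goals achieved, fewer redundant attributes).
# A hypothetical, simplified reading of the abstract; toy legal-flavoured data.
decisions = {
    "case_A": {"breach_of_contract", "written_agreement", "minor_party"},
    "case_B": {"breach_of_contract", "oral_agreement"},
}
goals_by_attribute = {                 # which goal(s) each attribute helps achieve
    "breach_of_contract": {"establish_liability"},
    "written_agreement": {"prove_terms"},
    "oral_agreement": set(),
    "minor_party": set(),
}

def goals_achieved(d):
    return set().union(*(goals_by_attribute[a] for a in decisions[d]))

def redundant_attributes(d):           # attributes contributing to no goal
    return {a for a in decisions[d] if not goals_by_attribute[a]}

best = max(decisions, key=lambda d: (len(goals_achieved(d)), -len(redundant_attributes(d))))
print(best, goals_achieved(best), redundant_attributes(best))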

Baroni P, Rago A, Toni F, 2019, From fine-grained properties to broad principles for gradual argumentation: A principled spectrum, International Journal of Approximate Reasoning, Vol: 105, Pages: 252-286, ISSN: 0888-613X

The study of properties of gradual evaluation methods in argumentation has received increasing attention in recent years, with studies devoted to various classes of frameworks/methods leading to conceptually similar but formally distinct properties in different contexts. In this paper we provide a novel systematic analysis for this research landscape by making three main contributions. First, we identify groups of conceptually related properties in the literature, which can be regarded as based on common patterns and, using these patterns, we evidence that many further novel properties can be considered. Then, we provide a simplifying and unifying perspective for these groups of properties by showing that they are all implied by novel parametric principles of (either strict or non-strict) balance and monotonicity. Finally, we show that (instances of) these principles (and thus the group, literature and novel properties that they imply) are satisfied by several quantitative argumentation formalisms in the literature, thus confirming the principles' general validity and utility to support a compact, yet comprehensive, analysis of properties of gradual argumentation.

Journal article
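
Very informally, and only as a paraphrase (the paper's principles are parametric and considerably more general), the two families of principles can be read as follows, writing σ(a) for the strength a gradual semantics assigns to argument a, τ(a) for its base score, and A(a), S(a) for its attackers and supporters:

% Informal paraphrase only, not the paper's exact parametric statements.
\begin{align*}
\text{Balance:}\quad & \sigma(a)=\tau(a) \text{ when attack and support on } a \text{ balance out (e.g. } A(a)=S(a)=\emptyset\text{)};\\
& \sigma(a)<\tau(a) \text{ when attack prevails; } \sigma(a)>\tau(a) \text{ when support prevails.}\\
\text{Monotonicity:}\quad & \sigma(a) \text{ does not decrease when, all else being equal, } a \text{ gains a supporter,}\\
& \text{loses an attacker, or a supporter becomes stronger (and dually for strengthened attack).}
\end{align*}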

Rago A, 2019, Gradual Evaluation in Argumentation Frameworks: Methods, Properties and Applications

Gradual evaluation methods in argumentation frameworks provide semantics for assessing the gradual acceptance of arguments, differing from the qualitative semantics that have been used in argument evaluation since argumentation’s conception. These methods and their semantics are wide-ranging; they comprise those for group acceptance, probabilistic measures and game-theoretical strength, amongst many others. This affords numerous application areas and so the requisite behaviour for each needs to be justified by theoretical proofs of useful properties for a specific application. Our contributions to this field span three interweaving sub-categories, namely methods, properties and applications. For gradual evaluation methods, we develop a number of novel and useful methods themselves. For each method we detail the semantics’ and the frameworks’ definitions, then undertake theoretical evaluations based on their properties, before applications targeting real-world problems are suggested for each method. As for gradual evaluation properties, we undertake a systematic analysis for this research landscape by first identifying groups of conceptually related properties in the literature and provide a simplifying and unifying perspective for these properties by showing that all the considered literature properties are implied by four, novel parametric principles. We then validate these principles by showing that they are satisfied by several quantitative argumentation formalisms in the literature. We also instantiate the extensive number of implied properties of these principles which are not present in the literature. These properties are also used to extract argumentation explanations for recommendations in recommender systems, a novel concept and application.

Thesis dissertation

Kotonya N, Toni F, 2019, Gradual Argumentation Evaluation for Stance Aggregation in Automated Fake News Detection, 6th Workshop on Argument Mining (ArgMining), Publisher: Association for Computational Linguistics (ACL), Pages: 156-166

Conference paper

Lertvittayakumjorn P, Toni F, 2019, Human-grounded Evaluations of Explanation Methods for Text Classification., Publisher: Association for Computational Linguistics, Pages: 5194-5204

Conference paper

Cyras K, Domínguez J, Karamlou A, Prociuk D, Curcin V, Delaney B, Toni F, Chalkidou K, Darzi A et al., 2019, ROAD2H: Learning Decision Support System for Low- and Middle-Income Countries, Publisher: AMIA

Conference paper

Hart MG, Hunter A, Hawkins N, Si S, Toni F et al., 2018, First-line treatments for people with single or multiple brain metastases, Cochrane Database of Systematic Reviews, Vol: 2018

This is a protocol for a Cochrane Review (Intervention). The objectives are as follows: To compare the safety and efficacy of surgery, radiotherapy, and chemotherapy as first-line treatment for people with single or multiple brain metastases, either alone or in combination.

Journal article

Cocarascu O, Toni F, 2018, Combining deep learning and argumentative reasoning for the analysis of social media textual content using small datasets, Computational Linguistics, Vol: 44, Pages: 833-858, ISSN: 0891-2017

The use of social media has become a regular habit for many and has changed the way people interact with each other. In this article, we focus on analysing whether news headlines support tweets and whether reviews are deceptive by analysing the interaction or the influence that these texts have on the others, thus exploiting contextual information. Concretely, we define a deep learning method for Relation-based Argument Mining to extract argumentative relations of attack and support. We then use this method for determining whether news articles support tweets, a useful task in fact-checking settings, where determining agreement towards a statement is a useful step towards determining its truthfulness. Furthermore, we use our method for extracting Bipolar Argumentation Frameworks from reviews to help detect whether they are deceptive. We show experimentally that our method performs well in both settings. In particular, in the case of deception detection, our method contributes a novel argumentative feature that, when used in combination with other features in standard supervised classifiers, outperforms the latter even on small datasets.

Journal article
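
At the core of the method described above is a relation classifier deciding whether one piece of text attacks or supports another. The sketch below is a deliberately plain stand-in for that step: a TF-IDF plus logistic-regression baseline over concatenated text pairs rather than the paper's deep learning model, with invented review-like pairs and labels.

# Sketch: classify the argumentative relation (attack / support) between a pair
# of texts. A toy baseline, not the paper's model; pairs and labels are invented.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

pairs = [
    ("the room was spotless", "great cleaning staff, everything shined"),
    ("the room was spotless", "the carpets were filthy and stained"),
    ("breakfast was excellent", "fresh pastries and good coffee every morning"),
    ("breakfast was excellent", "the coffee was cold and the bread stale"),
]
labels = ["support", "attack", "support", "attack"]

# Crude pairing: concatenate the two texts around a separator token.
joined = [f"{a} [SEP] {b}" for a, b in pairs]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(joined, labels)
print(clf.predict(["the pool was lovely [SEP] dirty water and broken tiles"]))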

Popescu C, Cocarascu O, Toni F, 2018, A platform for crowdsourcing corpora for argumentative, The International Workshop on Dialogue, Explanation and Argumentation in Human-Agent Interaction (DEXAHAI)

One problem that Argument Mining (AM) is facing is the difficulty of obtaining suitable annotated corpora. We propose a web-based platform, BookSafari, that allows crowdsourcing of annotated corpora for relation-based AM from users providing reviews for books and exchanging opinions about these reviews to facilitate argumentative dialogue. The annotations amount to pairwise argumentative relations of attack and support between opinions and between opinions and reviews. As a result of the annotations, reviews and opinions form structured debates which can be understood as bipolar argumentation frameworks. The platform also empowers annotations of the same pairs by multiple annotators and can support different measures of inter-annotator agreement and corpora selection.

Conference paper

Cocarascu O, Cyras K, Rago A, Toni F et al., 2018, Explaining with Argumentation Frameworks Mined from Data, The International Workshop on Dialogue, Explanation and Argumentation in Human-Agent Interaction (DEXAHAI)

Conference paper

Hunter A, Maudet N, Toni F, Ouerdane W et al., 2018, Foreword to the Special Issue on supporting and explaining decision processes by means of argumentation, EURO Journal on Decision Processes, Vol: 6, Pages: 235-236, ISSN: 2193-9438

Journal article

Toni F, 2018, Argumentation-based clinical decision support system in ROAD2H, Reasoning with Ambiguous and Conflicting Evidence and Recommendations in Medicine, ISSN: 1613-0073

The ROAD2H project aims to build a clinical decision support system integrating argumentation and optimisation techniques to reconcile guidelines providing conflicting recommendations for patients with comorbidities, while taking into account national and regional specificities and constraints imposed by local health insurance schemes. Here I provide a high-level overview of the project.

Conference paper

Cyras K, Delaney B, Prociuk D, Toni F, Chapman M, Dominguez J, Curcin V et al., 2018, Argumentation for explainable reasoning with conflicting medical recommendations, Reasoning with Ambiguous and Conflicting Evidence and Recommendations in Medicine (MedRACER 2018), Pages: 14-22

Designing a treatment path for a patient suffering from multiple conditions involves merging and applying multiple clinical guidelines and is recognised as a difficult task. This is especially relevant in the treatment of patients with multiple chronic diseases, such as chronic obstructive pulmonary disease, because of the high risk of any treatment change having potentially lethal exacerbations. Clinical guidelines are typically designed to assist a clinician in treating a single condition with no general method for integrating them. Additionally, guidelines for different conditions may contain mutually conflicting recommendations with certain actions potentially leading to adverse effects. Finally, individual patient preferences need to be respected when making decisions. In this work we present a description of an integrated framework and a system to execute conflicting clinical guideline recommendations by taking into account patient-specific information and preferences of various parties. Overall, our framework combines a patient’s electronic health record data with clinical guideline representation to obtain personalised recommendations, uses computational argumentation techniques to resolve conflicts among recommendations while respecting preferences of various parties involved, if any, and yields conflict-free recommendations that are inspectable and explainable. The system implementing our framework will allow for continuous learning by taking feedback from the decision makers and integrating it within its pipeline.

Conference paper

