Search results

  • Conference paper
    Sukpanichnant P, Rago A, Lertvittayakumjorn P, Toni F, et al., 2021, LRP-Based Argumentative Explanations for Neural Networks, XAI.it, Pages: 71-84, ISSN: 1613-0073

    In recent years, there have been many attempts to combine XAI with the field of symbolic AI in order to generate explanations for neural networks that are more interpretable and better align with human reasoning, with one prominent candidate for this synergy being the sub-field of computational argumentation. One method is to represent neural networks with quantitative bipolar argumentation frameworks (QBAFs) equipped with a particular semantics. The resulting QBAF can then be viewed as an explanation for the associated neural network. In this paper, we explore a novel LRP-based semantics under a new QBAF variant, namely neural QBAFs (nQBAFs). Since an nQBAF of a neural network is typically large, the nQBAF must be simplified before being used as an explanation. Our empirical evaluation indicates that the manner of this simplification is all-important for the quality of the resulting explanation.
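
For readers unfamiliar with QBAFs, the sketch below shows a minimal quantitative bipolar argumentation framework with a generic iterative strength evaluation. It is illustrative only: the class, scores and update rule are assumptions made for the example, not the nQBAF variant or LRP-based semantics studied in the paper.

```python
# A minimal QBAF sketch (illustrative only; not the paper's nQBAF variant or
# LRP-based semantics). Arguments carry base scores; attack and support edges
# shift them via a simple iterative aggregation.
from dataclasses import dataclass, field

@dataclass
class QBAF:
    base: dict                                    # argument -> base score in [0, 1]
    attacks: list = field(default_factory=list)   # (attacker, target) pairs
    supports: list = field(default_factory=list)  # (supporter, target) pairs

    def strengths(self, iterations: int = 50) -> dict:
        """Iteratively combine each argument's base score with the aggregate
        strength of its attackers and supporters (a generic gradual semantics,
        chosen here for brevity)."""
        s = dict(self.base)
        for _ in range(iterations):
            s = {a: min(1.0, max(0.0,
                    self.base[a]
                    + 0.5 * (sum(s[x] for x, t in self.supports if t == a)
                             - sum(s[x] for x, t in self.attacks if t == a))))
                 for a in s}
        return s

# Toy example: an output argument supported by one "neuron" argument and
# attacked by another.
qbaf = QBAF(base={"n1": 0.8, "n2": 0.3, "out": 0.5},
            attacks=[("n2", "out")], supports=[("n1", "out")])
print(qbaf.strengths())  # n1 and n2 keep their base scores; out settles at 0.75
```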

  • Conference paper
    Rago A, Cocarascu O, Bechlivanidis C, Toni F, et al., 2020, Argumentation as a framework for interactive explanations for recommendations, KR 2020, 17th International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI, Pages: 805-815, ISSN: 2334-1033

    As AI systems become ever more intertwined in our personal lives, the way in which they explain themselves to and interact with humans is an increasingly critical research area. The explanation of recommendations is thus a pivotal functionality in a user’s experience of a recommender system (RS), providing the possibility of enhancing many of its desirable features in addition to its effectiveness (accuracy wrt users’ preferences). For an RS that we prove empirically is effective, we show how argumentative abstractions underpinning recommendations can provide the structural scaffolding for (different types of) interactive explanations (IEs), i.e. explanations empowering interactions with users. We prove formally that these IEs empower feedback mechanisms that guarantee that recommendations will improve with time, hence rendering the RS scrutable. Finally, we prove experimentally that the various forms of IE (tabular, textual and conversational) induce trust in the recommendations and provide a high degree of transparency in the RS’s functionality.

  • Conference paper
    Kotonya N, Spooner T, Magazzeni D, Toni F, et al., 2021, Graph Reasoning with Context-Aware Linearization for Interpretable Fact Extraction and Verification, FEVER 2021
  • Conference paper
    Albini E, Rago A, Baroni P, Toni F, et al., 2021, Influence-driven explanations for Bayesian network classifiers, PRICAI 2021, Publisher: Springer Verlag, ISSN: 0302-9743

    We propose a novel approach to building influence-driven explanations (IDXs) for (discrete) Bayesian network classifiers (BCs). IDXs feature two main advantages wrt other commonly adopted explanation methods. First, IDXs may be generated using the (causal) influences between intermediate, in addition to merely input and output, variables within BCs, thus providing a deep, rather than shallow, account of the BCs’ behaviour. Second, IDXs are generated according to a configurable set of properties, specifying which influences between variables count towards explanations. Our approach is thus flexible and can be tailored to the requirements of particular contexts or users. Leveraging on this flexibility, we propose novel IDX instances as well as IDX instances capturing existing approaches. We demonstrate IDXs’ capability to explain various forms of BCs, and assess the advantages of our proposed IDX instances with both theoretical and empirical analyses.
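
As a rough intuition for influence-based readings of a Bayesian classifier, the sketch below marks each observed feature of a toy naive Bayes model as supporting or attacking the predicted class by comparing posteriors with and without that feature. All names and numbers are hypothetical, and this is not the paper's IDX construction or its property-based generation.

```python
# Illustrative only: a crude "influence" reading for a tiny hand-rolled naive
# Bayes classifier; the paper's IDX definitions and properties are not shown.
import math

# Hypothetical toy model: class priors and P(feature value | class).
priors = {"flu": 0.3, "cold": 0.7}
likelihoods = {
    "fever": {("high", "flu"): 0.8, ("high", "cold"): 0.2,
              ("low", "flu"): 0.2, ("low", "cold"): 0.8},
    "cough": {("yes", "flu"): 0.6, ("yes", "cold"): 0.7,
              ("no", "flu"): 0.4, ("no", "cold"): 0.3},
}

def posterior(observation):
    """Posterior over classes given observed feature values (naive Bayes)."""
    scores = {c: math.log(p) for c, p in priors.items()}
    for feat, val in observation.items():
        for c in scores:
            scores[c] += math.log(likelihoods[feat][(val, c)])
    m = max(scores.values())
    exp = {c: math.exp(v - m) for c, v in scores.items()}
    z = sum(exp.values())
    return {c: v / z for c, v in exp.items()}

def influences(observation):
    """Mark each observed feature as supporting (+) or attacking (-) the
    predicted class, by comparing posteriors with and without that feature."""
    full = posterior(observation)
    predicted = max(full, key=full.get)
    signs = {}
    for feat in observation:
        reduced = {f: v for f, v in observation.items() if f != feat}
        delta = full[predicted] - posterior(reduced)[predicted]
        signs[feat] = "+" if delta >= 0 else "-"
    return predicted, signs

obs = {"fever": "high", "cough": "yes"}
print(influences(obs))  # fever supports the "flu" prediction, cough detracts
```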

  • Journal article
    Rago A, Cocarascu O, Bechlivanidis C, Lagnado D, Toni F, et al., 2021, Argumentative explanations for interactive recommendations, Artificial Intelligence, Vol: 296, Pages: 1-22, ISSN: 0004-3702

    A significant challenge for recommender systems (RSs), and in fact for AI systems in general, is the systematic definition of explanations for outputs in such a way that both the explanations and the systems themselves are able to adapt to their human users' needs. In this paper we propose an RS hosting a vast repertoire of explanations, which are customisable to users in their content and format, and thus able to adapt to users' explanatory requirements, while being reasonably effective (proven empirically). Our RS is built on a graphical chassis, allowing the extraction of argumentation scaffolding, from which diverse and varied argumentative explanations for recommendations can be obtained. These recommendations are interactive because they can be questioned by users and they support adaptive feedback mechanisms designed to allow the RS to self-improve (proven theoretically). Finally, we undertake user studies in which we vary the characteristics of the argumentative explanations, showing users' general preferences for more information, but also that their tastes are diverse, thus highlighting the need for our adaptable RS.

  • Conference paper
    Cyras K, Rago A, Albini E, Baroni P, Toni F, et al., 2021, Argumentative XAI: A Survey, The 30th International Joint Conference on Artificial Intelligence (IJCAI-21)
  • Conference paper
    Kotonya N, Toni F, 2020, Explainable Automated Fact-Checking: A Survey, 28th International Conference on Computational Linguistics (COLING 2020), Barcelona, Spain, Publisher: International Committee on Computational Linguistics, Pages: 5430-5443

    A number of exciting advances have been made in automated fact-checking thanks to increasingly larger datasets and more powerful systems, leading to improvements in the complexity of claims which can be accurately fact-checked. However, despite these advances, there are still desirable functionalities missing from the fact-checking pipeline. In this survey, we focus on the explanation functionality -- that is, fact-checking systems providing reasons for their predictions. We summarize existing methods for explaining the predictions of fact-checking systems and we explore trends in this topic. Further, we consider what makes for good explanations in this specific domain through a comparative analysis of existing fact-checking explanations against some desirable properties. Finally, we propose further research directions for generating fact-checking explanations, and describe how these may lead to improvements in the research area.

  • Conference paper
    Kotonya N, Toni F, 2020, Explainable Automated Fact-Checking for Public Health Claims, 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020), Publisher: ACL

    Fact-checking is the task of verifying the veracity of claims by assessing their assertions against credible evidence. The vast majority of fact-checking studies focus exclusively on political claims. Very little research explores fact-checking for other topics, specifically subject matters for which expertise is required. We present the first study of explainable fact-checking for claims which require specific expertise. For our case study we choose the setting of public health. To support this case study we construct a new dataset, PUBHEALTH, of 11.8K claims accompanied by journalist-crafted, gold-standard explanations (i.e., judgments) to support the fact-check labels for claims. We explore two tasks: veracity prediction and explanation generation. We also define and evaluate, with humans and computationally, three coherence properties of explanation quality. Our results indicate that, by training on in-domain data, gains can be made in explainable, automated fact-checking for claims which require specific expertise.

  • Conference paper
    Cocarascu O, Stylianou A, Cyras K, Toni F, et al., 2020, Data-empowered argumentation for dialectically explainable predictions, 24th European Conference on Artificial Intelligence (ECAI 2020), Publisher: IOS Press, Pages: 2449-2456

    Today’s AI landscape is permeated by plentiful data and dominated by powerful data-centric methods with the potential to impact a wide range of human sectors. Yet, in some settings this potential is hindered by these data-centric AI methods being mostly opaque. Considerable efforts are currently being devoted to defining methods for explaining black-box techniques in some settings, while the use of transparent methods is being advocated in others, especially when high-stake decisions are involved, as in healthcare and the practice of law. In this paper we advocate a novel transparent paradigm of Data-Empowered Argumentation (DEAr in short) for dialectically explainable predictions. DEAr relies upon the extraction of argumentation debates from data, so that the dialectical outcomes of these debates amount to predictions (e.g. classifications) that can be explained dialectically. The argumentation debates consist of (data) arguments which may not be linguistic in general but may nonetheless be deemed to be ‘arguments’ in that they are dialectically related, for instance by disagreeing on data labels. We illustrate and experiment with the DEAr paradigm in three settings, making use, respectively, of categorical data, (annotated) images and text. We show empirically that DEAr is competitive with another transparent model, namely decision trees (DTs), while also providing naturally dialectical explanations.

  • Conference paper
    Lertvittayakumjorn P, Toni F, 2020, Human-grounded evaluations of explanation methods for text classification, 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Publisher: ACL Anthology, Pages: 5195-5205

    Due to the black-box nature of deep learning models, methods for explaining the models’ results are crucial to gain trust from humans and support collaboration between AIs and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the various explanation methods we consider and show the degree to which these methods could serve for each purpose.
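
As one example of the general family of model-agnostic explanation methods discussed above, the sketch below computes occlusion-style word relevance by measuring how a classifier's score drops when each word is removed. The scoring function here is a toy stand-in, not any of the CNN explanation methods evaluated in the paper.

```python
# A generic occlusion-style relevance score for text classifiers (one example
# of a model-agnostic explanation method; the score function is a toy stand-in).
def word_relevance(text, score_fn):
    """Relevance of each word = drop in the classifier's score when that word
    is removed (a larger drop means the word mattered more)."""
    words = text.split()
    full = score_fn(" ".join(words))
    return [(w, full - score_fn(" ".join(words[:i] + words[i + 1:])))
            for i, w in enumerate(words)]

# Hypothetical stand-in for a classifier's positive-class score.
POSITIVE = {"great", "excellent", "tasty"}
def toy_score(text):
    words = text.split()
    return sum(w in POSITIVE for w in words) / max(len(words), 1)

print(word_relevance("great food but slow service", toy_score))
# "great" gets a positive relevance; the other words get small negative ones
```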

  • Conference paper
    Čyras K, Letsios D, Misener R, Toni F, Cyras K, Letsios D, Misener R, Toni Fet al., 2019,

    Argumentation for explainable scheduling

    , Thirty-Third AAAI Conference on Artificial Intelligence (AAAI-19), Publisher: AAAI, Pages: 2752-2759

    Mathematical optimization offers highly-effective tools for finding solutions for problems with well-defined goals, notably scheduling. However, optimization solvers are often unexplainable black boxes whose solutions are inaccessible to users and which users cannot interact with. We define a novel paradigm using argumentation to empower the interaction between optimization solvers and users, supported by tractable explanations which certify or refute solutions. A solution can be from a solver or of interest to a user (in the context of 'what-if' scenarios). Specifically, we define argumentative and natural language explanations for why a schedule is (not) feasible, (not) efficient or (not) satisfying fixed user decisions, based on models of the fundamental makespan scheduling problem in terms of abstract argumentation frameworks (AFs). We define three types of AFs, whose stable extensions are in one-to-one correspondence with schedules that are feasible, efficient and satisfying fixed decisions, respectively. We extract the argumentative explanations from these AFs and the natural language explanations from the argumentative ones.
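
For context, the sketch below enumerates the stable extensions of a small abstract argumentation framework by brute force. The toy arguments and attacks are hypothetical stand-ins, not the paper's makespan-scheduling AF constructions, but they illustrate how stable extensions can encode mutually consistent sets of choices.

```python
# Brute-force stable-extension enumeration for a small abstract argumentation
# framework (AF); illustrative only, not the paper's scheduling-specific AFs.
from itertools import combinations

def stable_extensions(arguments, attacks):
    """A set S is stable iff it is conflict-free (no attacks inside S) and
    every argument outside S is attacked by some member of S."""
    attacks = set(attacks)
    extensions = []
    for r in range(len(arguments) + 1):
        for subset in combinations(arguments, r):
            s = set(subset)
            conflict_free = not any((a, b) in attacks for a in s for b in s)
            attacks_rest = all(any((a, b) in attacks for a in s)
                               for b in set(arguments) - s)
            if conflict_free and attacks_rest:
                extensions.append(s)
    return extensions

# Hypothetical toy AF: conflicting job-to-machine assignments.
args = ["job1_on_m1", "job1_on_m2", "job2_on_m1"]
atts = [("job1_on_m1", "job1_on_m2"), ("job1_on_m2", "job1_on_m1"),
        ("job1_on_m1", "job2_on_m1"), ("job2_on_m1", "job1_on_m1")]
print(stable_extensions(args, atts))
# [{'job1_on_m1'}, {'job1_on_m2', 'job2_on_m1'}]: each stable extension is a
# mutually consistent set of assignments, mirroring a feasible schedule.
```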

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
