Results
-
Conference paper
Peacock D-M, Potyka N, Toni F, et al., 2025,
On the impact of sparsification on quantitative argumentative explanations in neural networks
, 3rd International Workshop on Argumentation for eXplainable AI (ArgXAI@ECAI), Publisher: CEUR Workshop Proceedings, Pages: 20-35, ISSN: 1613-0073
Neural Networks (NNs) are powerful decision-making tools, but their lack of explainability limits their use in high-stakes domains such as healthcare and criminal justice. The recent SpArX framework sparsifies NNs and maps them to (weighted) Quantitative Bipolar Argumentation Frameworks (QBAFs) to provide an argumentative understanding of their mechanics. QBAFs can be explained by various quantitative argumentative explanation methods, such as Argument Attribution Explanations (AAEs), Relation Attribution Explanations (RAEs), and Contestability Explanations (CEs), which assign numerical scores to arguments or relations to quantify their influence on the dialectical strength of an argument to be explained. However, it remains unexplored how sparsification of NNs impacts the explanations derived from the corresponding (weighted) QBAFs. In this paper we explore two directions for impact. First, we empirically investigate how varying the sparsification levels of NNs affects the preservation of these explanations: using four datasets (Iris, Diabetes, Cancer, and COMPAS), we find that AAEs are generally well preserved, whereas RAEs are not. Then, for CEs, we find that sparsification can improve computational efficiency in several cases. Overall, this study offers a preliminary investigation into the potential synergy between sparsification and explanation methods, opening up new avenues for future research.
-
Conference paper
Jiang J, Bewley T, Amoukou S, et al., 2025,
Representation consistency for accurate and coherent LLM answer aggregation
, The Thirty-Ninth Annual Conference on Neural Information Processing Systems, Publisher: Neural Information Processing Systems Foundation, Inc. (NeurIPS)
Test-time scaling improves large language models' (LLMs) performance by allocating more compute budget during inference. To achieve this, existing methods often require intricate modifications to prompting and sampling strategies. In this work, we introduce representation consistency (RC), a test-time scaling method for aggregating answers drawn from multiple candidate responses of an LLM regardless of how they were generated, including variations in prompt phrasing and sampling strategy. RC enhances answer aggregation by not only considering the number of occurrences of each answer in the candidate response set, but also the consistency of the model's internal activations while generating the set of responses leading to each answer. These activations can be either dense (raw model activations) or sparse (encoded via pretrained sparse autoencoders). Our rationale is that if the model's representations of multiple responses converging on the same answer are highly variable, this answer is more likely to be the result of incoherent reasoning and should be down-weighted during aggregation. Importantly, our method only uses cached activations and lightweight similarity computations and requires no additional model queries. Through experiments with four open-source LLMs and four reasoning datasets, we validate the effectiveness of RC for improving task performance during inference, with consistent accuracy improvements (up to 4%) over strong test-time scaling baselines. We also show that consistency in the sparse activation signals aligns well with the common notion of coherent reasoning.
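The aggregation rule described in this abstract can be sketched in a few lines. The following is a hypothetical illustration, not the authors' implementation: the function name, toy data and weighting scheme are invented, and consistency is taken here as the mean pairwise cosine similarity of the cached activations behind each answer.

```python
# Hypothetical sketch of representation-consistency (RC) answer aggregation:
# weight each candidate answer by its vote count times the mean pairwise
# cosine similarity of the (cached) activations of the responses producing it.
import numpy as np

def rc_aggregate(answers, activations):
    """answers: one answer string per candidate response.
    activations: one 1-D activation vector per response, same order."""
    groups = {}
    for ans, act in zip(answers, activations):
        groups.setdefault(ans, []).append(np.asarray(act, dtype=float))
    scores = {}
    for ans, acts in groups.items():
        if len(acts) == 1:
            consistency = 1.0  # a single response: no variability to penalise
        else:
            sims = []
            for i in range(len(acts)):
                for j in range(i + 1, len(acts)):
                    a, b = acts[i], acts[j]
                    sims.append(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
            consistency = float(np.mean(sims))
        scores[ans] = len(acts) * consistency  # count down-weighted by incoherence
    return max(scores, key=scores.get)
```

On this reading, an answer backed by many responses with highly variable activations can lose to a less frequent answer backed by mutually consistent ones, which matches the rationale stated in the abstract.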
-
Journal article
Leofante F, Artelt A, Eliades D, et al., 2025,
Explainable AI, energy and critical infrastructure systems
, AI Magazine, Vol: 46, ISSN: 0738-4602
The AAAI 2025 Bridge on “Explainable AI, Energy and Critical Infrastructure Systems” was held at the Pennsylvania Convention Center, Philadelphia, Pennsylvania, USA, on February 25, 2025. The bridge gathered researchers and practitioners, bringing together research across explainable AI, energy and critical infrastructure systems so that advances in each field can enhance the others. The Bridge featured five keynote presentations by experts, one tutorial, poster presentations by authors who contributed their research findings, and three breakout sessions to discuss new challenges arising at the intersection of these exciting disciplines.
-
Conference paper
Gigante N, Leofante F, Micheli A, 2025,
Counterfactual scenarios for automated planning
, 22nd International Conference on Principles of Knowledge Representation and Reasoning, Publisher: International Joint Conferences on Artificial Intelligence Organization
Counterfactual Explanations (CEs) are a powerful technique used to explain Machine Learning models by showing how the input to a model should be minimally changed for the model to produce a different output. Similar proposals have been made in the context of Automated Planning, where CEs have been characterised in terms of minimal modifications to an existing plan that would result in the satisfaction of a different goal. While such explanations may help diagnose faults and reason about the characteristics of a plan, they fail to capture higher-level properties of the problem being solved. To address this limitation, we propose a novel explanation paradigm that is based on counterfactual scenarios. In particular, given a planning problem P and an LTLf formula ψ defining desired properties of a plan, counterfactual scenarios identify minimal modifications to P such that it admits plans that comply with ψ. In this paper, we present two qualitative instantiations of counterfactual scenarios based on an explicit quantification over plans that must satisfy ψ. We then characterise the computational complexity of generating such counterfactual scenarios when different types of changes are allowed on P. We show that producing counterfactual scenarios is often only as expensive as computing a plan for P, thus demonstrating the practical viability of our proposal and ultimately providing a framework to construct practical algorithms in this area.
-
Journal article
Dickie C, Lauren S, Belardinelli F, et al., 2025,
Aggregating bipolar opinions through bipolar assumption-based argumentation
, Autonomous Agents and Multi-Agent Systems, Vol: 39, ISSN: 1387-2532
We introduce a novel method to aggregate Bipolar Argumentation Frameworks expressing opinions of different parties in debates. We use Bipolar Assumption-based Argumentation (ABA) as an all-encompassing formalism for Bipolar Argumentation under different semantics. By leveraging recent results on judgement aggregation in Social Choice Theory, we prove several preservation results for relevant properties of Bipolar ABA using quota and oligarchic rules. Specifically, we prove (positive and negative) results about the preservation of conflict-free, closed, admissible, preferred, complete, set-stable, well-founded and ideal extensions in Bipolar ABA, as well as the preservation of acceptability, acyclicity and coherence for individual assumptions. Finally, we illustrate our methodology and results in the context of a case study on opinion aggregation for the treatment of long COVID patients.
-
Conference paper
Todd J, Jiang J, Russo A, et al., 2025,
Explainable time series prediction of tyre energy in formula one race strategy
, SAC 2025: The 40th ACM/SIGAPP Symposium On Applied Computing, Publisher: ACM
Formula One (F1) race strategy takes place in a high-pressure and fast-paced environment where split-second decisions can drastically affect race results. Two of the core decisions of race strategy are when to make pit stops (i.e. replace the cars’ tyres) and which tyre compounds (hard, medium or soft, in normal conditions) to select. The optimal pit stop decisions can be determined by estimating the tyre degradation of these compounds, which in turn can be computed from the energy applied to each tyre, i.e. the tyre energy. In this work, we trained deep learning models, using an F1 team’s historic race data consisting of telemetry, to forecast tyre energies during races. Additionally, we fitted XGBoost, a decision tree-based machine learning algorithm, to the same dataset and compared the results, with both giving impressive performance. Furthermore, we incorporated two different explainable AI methods, namely feature importance and counterfactual explanations, to gain insights into the reasoning behind the forecasts. Our contributions thus result in an explainable, automated method which could assist F1 teams in optimising their race strategy.
-
Conference paper
Thomas D, Jiang J, Kori A, et al., 2025,
Explainable reinforcement learning for Formula One race strategy
, The 40th ACM/SIGAPP Symposium On Applied Computing, Publisher: ACM
In Formula One, teams compete to develop their cars to achieve the highest possible finishing position in each race. During a race, however, teams are unable to alter the car, so they must improve their cars’ finishing positions via race strategy, i.e. optimising their selection of which tyre compounds to put on the car and when to do so. In this work, we introduce a reinforcement learning model, RSRL (Race Strategy Reinforcement Learning), to control race strategies in simulations, offering a faster alternative to the industry standard of hard-coded and Monte Carlo-based race strategies. Controlling cars with a pace equating to an expected finishing position of P5.5 (where P1 represents first place and P20 is last place), RSRL achieves an average finishing position of P5.33 on our test race, the 2023 Bahrain Grand Prix, outperforming the best baseline of P5.63. We then demonstrate, in a generalisability study, how performance for one track or multiple tracks can be prioritised via training. Further, we supplement model predictions with feature importance, decision tree-based surrogate models, and decision tree counterfactuals towards improving user trust in the model. Finally, we provide illustrations which exemplify our approach in real-world situations, drawing parallels between simulations and reality.
-
Conference paper
Jiang J, Marzari L, Purohit A, et al., 2025,
RobustX: robust counterfactual explanations made easy
, International Joint Conference on Artificial Intelligence (IJCAI) 2025, Publisher: IJCAI
The increasing use of Machine Learning (ML) models to aid decision-making in high-stakes industries demands explainability to facilitate trust. Counterfactual Explanations (CEs) are ideally suited for this, as they can offer insights into the predictions of an ML model by illustrating how changes in its input data may lead to different outcomes. However, for CEs to realise their explanatory potential, significant challenges remain in ensuring their robustness under slight changes in the scenario being explained. Despite the widespread recognition of CEs’ robustness as a fundamental requirement, a lack of standardised tools and benchmarks hinders a comprehensive and effective comparison of robust CE generation methods. In this paper, we introduce RobustX, an open-source Python library implementing a collection of CE generation and evaluation methods, with a focus on the robustness property. RobustX provides interfaces to several existing methods from the literature, enabling streamlined access to state-of-the-art techniques. The library is also easily extensible, allowing fast prototyping of novel robust CE generation and evaluation methods.
-
Conference paper
Alfano G, Gould A, Leofante F, et al., 2025,
Counterfactual explanations under model multiplicity and their use in computational argumentation
, International Joint Conference on Artificial Intelligence (IJCAI) 2025, Publisher: IJCAI
Counterfactual explanations (CXs) are widely recognised as an essential technique for providing recourse recommendations for AI models. However, it is not obvious how to determine CXs in model multiplicity scenarios, where equally performing but different models can be obtained for the same task. In this paper, we propose novel qualitative and quantitative definitions of CXs based on explicit, nested quantification over (groups of) model decisions. We also study properties of these notions and identify decision problems of interest therefor. While our CXs are broadly applicable, in this paper we instantiate them within computational argumentation, where model multiplicity naturally emerges, e.g. with incomplete and case-based argumentation frameworks. We then illustrate the suitability of our CXs for model multiplicity in legal and healthcare contexts, before analysing the complexity of the associated decision problems.
-
Conference paper
Kobialka P, Gerlach L, Leofante F, et al., 2025,
Counterfactual strategies for Markov decision processes
, International Joint Conference on Artificial Intelligence (IJCAI) 2025, Publisher: IJCAI Organization
Counterfactuals are widely used in AI to explain how minimal changes to a model’s input can cause a different output. However, established methods for computing counterfactuals focus on one-step decision-making, and are not applicable for sequential decision-making. This paper fills this gap by introducing counterfactuals for Markov decision processes (MDPs), i.e., discrete-time Markov models with non-determinism. During MDP execution, a strategy decides which of the enabled actions (with known probabilistic effects) to execute next. Given an initial strategy that reaches an undesired outcome with a probability above some limit, we identify minimal changes to the initial strategy to reduce that probability below the limit. We encode such counterfactual strategies as solutions to non-linear optimization problems, and further extend this encoding to synthesize diverse counterfactual strategies. We evaluate our approach on four real-world datasets and demonstrate its suitability for providing algorithmic recourse in sophisticated sequential decision-making tasks.
-
Conference paper
Rapberger A, Ulbricht M, Toni F, 2024,
On the correspondence of non-flat assumption-based argumentation and logic programming with negation as failure in the head
, 22nd International Workshop on Nonmonotonic Reasoning (NMR 24), Publisher: CEUR Workshop Proceedings, Pages: 122-121, ISSN: 1613-0073
The relation between (a fragment of) assumption-based argumentation (ABA) and logic programs (LPs) under stable model semantics is well-studied. However, for obtaining this relation, the ABA framework needs to be restricted to being flat, i.e., a fragment where the (defeasible) assumptions can never be entailed, only assumed to be true or false. Here, we remove this restriction and show a correspondence between non-flat ABA and LPs with negation as failure in their head. We then extend this result to so-called set-stable ABA semantics, originally defined for the fragment of non-flat ABA called bipolar ABA. We showcase how to define set-stable semantics for LPs with negation as failure in their head and show the correspondence to set-stable ABA semantics.
-
Conference paper
Vasileiou S, Kumar A, Yeoh W, et al., 2024,
Dialectical reconciliation via structured argumentative dialogues
, KR 2024
We present a novel framework designed to extend model reconciliation approaches, commonly used in human-aware planning, for enhanced human-AI interaction. By adopting a structured argumentation-based dialogue paradigm, our framework enables dialectical reconciliation to address knowledge discrepancies between an explainer (AI agent) and an explainee (human user), where the goal is for the explainee to understand the explainer's decision. We formally describe the operational semantics of our proposed framework, providing theoretical guarantees. We then evaluate the framework's efficacy "in the wild" via computational and human-subject experiments. Our findings suggest that our framework offers a promising direction for fostering effective human-AI interactions in domains where explainability is important.
-
Conference paper
Battaglia E, Baroni P, Rago A, et al., 2024,
Integrating user preferences into gradual bipolar argumentation for personalised decision support
, Scalable Uncertainty Management, 16th International Conference (SUM 2024), Publisher: Springer, Pages: 14-28, ISSN: 1611-3349
Gradual bipolar argumentation has been shown to be an effective means for supporting decisions across a number of domains. Individual user preferences can be integrated into the domain knowledge represented by such argumentation frameworks and should be taken into account in order to provide personalised decision support. This however requires the definition of a suitable method to handle user-provided preferences in gradual bipolar argumentation, which has not been considered in previous literature. Towards filling this gap, we develop a conceptual analysis on the role of preferences in argumentation and investigate some basic principles concerning the effects they should have on the evaluation of strength in gradual argumentation semantics. We illustrate an application of our approach in the context of a review aggregation system, which has been enhanced with the ability to produce personalised outcomes based on user preferences.
-
Conference paper
Rago A, Vasileiou SL, Toni F, et al., 2024,
A Methodology for Gradual Semantics for Structured Argumentation under Incomplete Information
, arXiv
-
Journal article
Kampik T, Potyka N, Yin X, et al., 2024,
Contribution functions for quantitative bipolar argumentation graphs: a principle-based analysis
, International Journal of Approximate Reasoning, Vol: 173, ISSN: 0888-613X
We present a principle-based analysis of contribution functions for quantitative bipolar argumentation graphs that quantify the contribution of one argument to another. The introduced principles formalise the intuitions underlying different contribution functions as well as expectations one would have regarding the behaviour of contribution functions in general. As none of the covered contribution functions satisfies all principles, our analysis can serve as a tool that enables the selection of the most suitable function based on the requirements of a given use case.
-
Conference paper
Lehtonen T, Rapberger A, Toni F, et al., 2024,
On computing admissibility in ABA
, 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 121-132
Most existing computational tools for assumption-based argumentation (ABA) focus on so-called flat frameworks, disregarding the more general case. Here, we study an instantiation-based approach for reasoning in possibly non-flat ABA. For complete-based semantics, an approach of this kind was recently introduced, based on a semantics-preserving translation between ABA and bipolar argumentation frameworks (BAFs). Admissible semantics, however, require us to consider an extension of BAFs which also makes use of premises of arguments (pBAFs). We explore basic properties of pBAFs which we require as a theoretical underpinning for our proposed instantiation-based solver for non-flat ABA under admissible semantics. As our empirical evaluation shows, depending on the ABA instances, the instantiation-based solver is competitive against an ASP-based approach implemented in the style of state-of-the-art solvers for hard argumentation problems.
-
Conference paper
Rapberger A, Toni F, 2024,
On the robustness of argumentative explanations
, 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: IOS Press, Inc., Pages: 217-228
The field of explainable AI has grown exponentially in recent years. Within this landscape, argumentation frameworks have shown to be helpful abstractions of some AI models towards providing explanations thereof. While existing work on argumentative explanations and their properties has focused on static settings, we focus on dynamic settings whereby the (AI models underpinning the) argumentation frameworks need to change. Specifically, for a number of notions of explanations drawn from abstract argumentation frameworks under extension-based semantics, we address the following questions: (1) Are explanations robust to extension-preserving changes, in the sense that they are still valid when the changes do not modify the extensions? (2) If not, are these explanations pseudo-robust in that they can be tractably updated? In this paper, we frame these questions formally. We consider robustness and pseudo-robustness w.r.t. ordinary and strong equivalence and provide several results for various extension-based semantics.
-
Conference paper
Ayoobi H, Potyka N, Toni F, 2024,
Argumentative interpretable image classification
, 2nd International Workshop on Argumentation for eXplainable AI co-located with the 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: CEUR Workshop Proceedings, Pages: 3-15, ISSN: 1613-0073
We propose ProtoSpArX, a novel interpretable deep neural architecture for image classification in the spirit of prototypical-part-learning as found, e.g. in ProtoPNet. While earlier approaches associate every class with multiple prototypical-parts, ProtoSpArX uses super-prototypes that combine prototypical-parts into single class representations. Furthermore, while earlier approaches use interpretable classification layers, e.g. logistic regression in ProtoPNet, ProtoSpArX improves accuracy with multi-layer perceptrons while relying upon an interpretable reading thereof based on a form of argumentation. ProtoSpArX is customisable to user cognitive requirements by a process of sparsification of the multi-layer perceptron/argumentation component. Also, as opposed to other prototypical-part-learning approaches, ProtoSpArX can recognise spatial relations between different prototypical-parts that are from various regions in images, similar to how CNNs capture relations between patterns recognised in earlier layers.
-
Conference paper
Sukpanichnant P, Rapberger A, Toni F, 2024,
PeerArg: argumentative peer review with LLMs
, First International Workshop on Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR 2024)
Peer review is an essential process to determine the quality of papers submitted to scientific conferences or journals. However, it is subjective and prone to biases. Several studies have been conducted to apply techniques from NLP to support peer review, but they are based on black-box techniques and their outputs are difficult to interpret and trust. In this paper, we propose a novel pipeline to support and understand the reviewing and decision-making processes of peer review: the PeerArg system combining LLMs with methods from knowledge representation. PeerArg takes as input a set of reviews for a paper and outputs the paper acceptance prediction. We evaluate the performance of the PeerArg pipeline on three different datasets, in comparison with a novel end-2-end LLM that uses few-shot learning to predict paper acceptance given reviews. The results indicate that the end-2-end LLM is capable of predicting paper acceptance from reviews, but a variant of the PeerArg pipeline outperforms this LLM.
-
Conference paper
Oluokun B, Paulino Passos G, Rago A, et al., 2024,
Predicting Human Judgement in Online Debates with Argumentation
, The 24th International Workshop on Computational Models of Natural Argument (CMNA’24)
-
Conference paper
Yin X, Potyka N, Toni F, 2024,
Applying attribution explanations in truth-discovery quantitative bipolar argumentation frameworks
, 2nd International Workshop on Argumentation for eXplainable AI (ArgXAI) co-located with 10th International Conference on Computational Models of Argument (COMMA 2024), Publisher: CEUR Workshop Proceedings, ISSN: 1613-0073
-
Conference paper
Yin X, Potyka N, Toni F, 2024,
Explaining arguments’ strength: unveiling the role of attacks and supports
, IJCAI 2024, the 33rd International Joint Conference on Artificial Intelligence, Publisher: International Joint Conferences on Artificial Intelligence, Pages: 3622-3630
Quantitatively explaining the strength of arguments under gradual semantics has recently received increasing attention. Specifically, several works in the literature provide quantitative explanations by computing the attribution scores of arguments. These works disregard the importance of attacks and supports, even though they play an essential role when explaining arguments' strength. In this paper, we propose a novel theory of Relation Attribution Explanations (RAEs), adapting Shapley values from game theory to offer fine-grained insights into the role of attacks and supports in quantitative bipolar argumentation towards obtaining the arguments' strength. We show that RAEs satisfy several desirable properties. We also propose a probabilistic algorithm to approximate RAEs efficiently. Finally, we show the application value of RAEs in fraud detection and large language models case studies.
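As a rough illustration of the Shapley-style idea behind RAEs (this is not the paper's definition: the additive gradual semantics, the tiny framework and all names below are invented for the example), one can score each attack or support relation by its average marginal effect on the topic argument's strength over all subsets of the other relations:

```python
# Illustrative Shapley-style relation attribution on a toy QBAF:
# strength of the topic is its base score plus/minus the base scores of
# supporters/attackers whose relation is "active", clipped to [0, 1].
from itertools import combinations
from math import factorial

base = {"t": 0.5, "a": 0.8, "b": 0.6}        # base scores of arguments
relations = [("a", "t", -1), ("b", "t", 1)]  # (source, target, polarity)

def strength(target, rels):
    # toy additive semantics over the active relations only
    s = base[target]
    for src, tgt, pol in rels:
        if tgt == target:
            s += pol * base[src]
    return max(0.0, min(1.0, s))

def shapley(rel):
    # average marginal contribution of `rel` over all subsets of the rest
    rest = [r for r in relations if r != rel]
    n, total = len(relations), 0.0
    for k in range(len(rest) + 1):
        for subset in combinations(rest, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            with_rel = strength("t", list(subset) + [rel])
            without_rel = strength("t", list(subset))
            total += weight * (with_rel - without_rel)
    return total
```

Here the attack from "a" gets a negative score and the support from "b" a positive one, and the two scores sum to the difference between the topic's strength with all relations active and with none, mirroring the efficiency property of Shapley values.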
-
Conference paper
Russo F, Rapberger A, Toni F, 2024,
Argumentative causal discovery
, The 21st International Conference on Knowledge Representation and Reasoning (KR-2024), Publisher: International Joint Conferences on Artificial Intelligence Organization, Pages: 938-949, ISSN: 2334-1033
Causal discovery amounts to unearthing causal relationships amongst features in data. It is a crucial companion to causal inference, necessary to build scientific knowledge without resorting to expensive or impossible randomised control trials. In this paper, we explore how reasoning with symbolic representations can support causal discovery. Specifically, we deploy assumption-based argumentation (ABA), a well-established and powerful knowledge representation formalism, in combination with causality theories, to learn graphs which reflect causal dependencies in the data. We prove that our method exhibits desirable properties, notably that, under natural conditions, it can retrieve ground-truth causal graphs. We also conduct experiments with an implementation of our method in answer set programming (ASP) on four datasets from standard benchmarks in causal discovery, showing that our method compares well against established baselines.
-
Conference paper
Gould A, Paulino Passos G, Dadhania S, et al., 2024,
Preference-based abstract argumentation for case-based reasoning
, International Conference on Principles of Knowledge Representation and Reasoning, Publisher: IJCAI Organization, Pages: 394-404, ISSN: 2334-1033
In the pursuit of enhancing the efficacy and flexibility of interpretable, data-driven classification models, this work introduces a novel incorporation of user-defined preferences with Abstract Argumentation and Case-Based Reasoning (CBR). Specifically, we introduce Preference-Based Abstract Argumentation for Case-Based Reasoning (which we call AA-CBR-P), allowing users to define multiple approaches to compare cases with an ordering that specifies their preference over these comparison approaches. We prove that the model inherently follows these preferences when making predictions and show that previous abstract argumentation for case-based reasoning approaches are insufficient at expressing preferences over constituents of an argument. We then demonstrate how this can be applied to a real-world medical dataset sourced from a clinical trial evaluating differing assessment methods of patients with a primary brain tumour. We show empirically that our approach outperforms other interpretable machine learning models on this dataset.
-
Conference paper
Freedman G, Toni F, 2024,
Detecting scientific fraud using argument mining
, ArgMining@ACL2024, Publisher: Association for Computational Linguistics, Pages: 15-28
The proliferation of fraudulent scientific research in recent years has precipitated a greater interest in more effective methods of detection. There are many varieties of academic fraud, but a particularly challenging type to detect is the use of paper mills and the faking of peer-review. To the best of our knowledge, there have so far been no attempts to automate this process. The complexity of this issue precludes the use of heuristic methods, like pattern-matching techniques, which are employed for other types of fraud. Our proposed method in this paper uses techniques from the Computational Argumentation literature (i.e. argument mining and argument quality evaluation). Our central hypothesis stems from the assumption that articles that have not been subject to the proper level of scrutiny will contain poorly formed and reasoned arguments, relative to legitimately published papers. We use a variety of corpora to test this approach, including a collection of abstracts taken from retracted papers. We show significant improvement compared to a number of baselines, suggesting that this approach merits further investigation.
-
Conference paper
Leofante F, Ayoobi H, Dejl A, et al., 2024,
Contestable AI needs Computational Argumentation
, KR 2024, Publisher: KR Organization
AI has become pervasive in recent years, but state-of-the-art approaches predominantly neglect the need for AI systems to be contestable. Yet contestability is advocated by AI guidelines (e.g. by the OECD) and regulation of automated decision-making (e.g. GDPR). In this position paper we explore how contestability can be achieved computationally in and for AI. We argue that contestable AI requires dynamic (human-machine and/or machine-machine) explainability and decision-making processes, whereby machines can (i) interact with humans and/or other machines to progressively explain their outputs and/or their reasoning as well as assess grounds for contestation provided by these humans and/or other machines, and (ii) revise their decision-making processes to redress any issues successfully raised during contestation. Given that much of the current AI landscape is tailored to static AIs, the need to accommodate contestability will require a radical rethinking that, we argue, computational argumentation is ideally suited to support.
-
Conference paper
Yin X, Potyka N, Toni F, 2024,
CE-QArg: Counterfactual explanations for quantitative bipolar argumentation frameworks
, 21st International Conference on Principles of Knowledge Representation and Reasoning, Publisher: International Joint Conferences on Artificial Intelligence Organization
There is a growing interest in understanding arguments’ strength in Quantitative Bipolar Argumentation Frameworks (QBAFs). Most existing studies focus on attribution-based methods that explain an argument’s strength by assigning importance scores to other arguments but fail to explain how to change the current strength to a desired one. To solve this issue, we introduce counterfactual explanations for QBAFs. We discuss problem variants and propose an iterative algorithm named Counterfactual Explanations for Quantitative bipolar Argumentation frameworks (CE-QArg). CE-QArg can identify valid and cost-effective counterfactual explanations based on two core modules, polarity and priority, which help determine the updating direction and magnitude for each argument, respectively. We discuss some formal properties of our counterfactual explanations and empirically evaluate CE-QArg on randomly generated QBAFs.
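To convey the flavour of an iterative counterfactual search over a QBAF (this sketch is not CE-QArg itself: it uses an invented additive semantics, handles only polarity, and replaces the paper's priority module with a fixed step size; all names are illustrative), one might repeatedly nudge the base scores of related arguments until the topic reaches a desired strength:

```python
# Hypothetical iterative counterfactual search on a toy QBAF: adjust the
# base scores of arguments related to the topic, in the direction given by
# each relation's polarity, until the topic's strength is close enough to
# the desired value under a simple additive, clipped semantics.
def counterfactual(base, relations, topic, desired, step=0.05, max_iter=1000):
    base = dict(base)  # work on a copy; this is the candidate explanation

    def strength(t):
        s = base[t] + sum(pol * base[src]
                          for src, tgt, pol in relations if tgt == t)
        return max(0.0, min(1.0, s))

    for _ in range(max_iter):
        cur = strength(topic)
        if abs(cur - desired) <= step:  # close enough: stop
            return base
        direction = 1.0 if desired > cur else -1.0
        for src, tgt, pol in relations:
            if tgt == topic:
                # move supporters up / attackers down (or vice versa),
                # keeping base scores within [0, 1]
                base[src] = max(0.0, min(1.0, base[src] + direction * pol * step))
    return base
```

The returned dictionary plays the role of a counterfactual explanation: a modified assignment of base scores under which the topic's strength (approximately) matches the desired value.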
-
Conference paper
Proietti M, Toni F, De Angelis E, 2024,
Learning Brave Assumption-Based Argumentation Frameworks via ASP
, ECAI
-
Conference paper
Marzari L, Leofante F, Cicalese F, et al., 2024,
Rigorous probabilistic guarantees for robust counterfactual explanations
, 27th European Conference on Artificial Intelligence (ECAI 2024), Publisher: IOS Press
We study the problem of assessing the robustness of counterfactual explanations for deep learning models. We focus on plausible model shifts altering model parameters and propose a novel framework to reason about the robustness property in this setting. To motivate our solution, we begin by showing for the first time that computing the robustness of counterfactuals with respect to plausible model shifts is NP-complete. As this (practically) rules out the existence of scalable algorithms for exactly computing robustness, we propose a novel probabilistic approach which is able to provide tight estimates of robustness with strong guarantees while preserving scalability. Remarkably, and differently from existing solutions targeting plausible model shifts, our approach does not impose requirements on the network to be analyzed, thus enabling robustness analysis on a wider range of architectures. Experiments on four binary classification datasets indicate that our method improves the state of the art in generating robust explanations, outperforming existing methods on a range of metrics.
-
Conference paper
Vasileiou SL, Kumar A, Yeoh W, et al., 2024,
DR-HAI: argumentation-based dialectical reconciliation in human-AI interactions
, IJCAI 2023
In this paper, we introduce DR-HAI, a novel argumentation-based framework designed to extend model reconciliation approaches, commonly used in explainable AI planning, for enhanced human-AI interaction. By adopting a multi-shot reconciliation paradigm and not assuming a priori knowledge of the human user’s model, DR-HAI enables interactive reconciliation to address knowledge discrepancies between an explainer and an explainee. We formally describe the operational semantics of DR-HAI, and provide theoretical guarantees related to termination and success.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
