Imperial College London

Dr Antonio Rago

Faculty of Engineering, Department of Computing

Research Associate
 
 
 

Contact

 

a.rago

 
 

Location

 

429 Huxley Building, South Kensington Campus


Summary

 

Publications

Citation

BibTeX format

@inbook{Cocarascu:2020:10.1007/978-3-030-60067-9_3,
author = {Cocarascu, O and Rago, A and Toni, F},
doi = {10.1007/978-3-030-60067-9_3},
pages = {53--84},
title = {Explanation via Machine Arguing},
url = {http://dx.doi.org/10.1007/978-3-030-60067-9_3},
year = {2020}
}

RIS format (EndNote, RefMan)

TY  - CHAP
AB  - As AI becomes ever more ubiquitous in our everyday lives, its ability to explain to and interact with humans is evolving into a critical research area. Explainable AI (XAI) has therefore emerged as a popular topic but its research landscape is currently very fragmented. Explanations in the literature have generally been aimed at addressing individual challenges and are often ad-hoc, tailored to specific AIs and/or narrow settings. Further, the extraction of explanations is no simple task; the design of the explanations must be fit for purpose, with considerations including, but not limited to: Is the model or a result being explained? Is the explanation suited to skilled or unskilled explainees? By which means is the information best exhibited? How may users interact with the explanation? As these considerations rise in number, it quickly becomes clear that a systematic way to obtain a variety of explanations for a variety of users and interactions is much needed. In this tutorial we will overview recent approaches showing how these challenges can be addressed by utilising forms of machine arguing as the scaffolding underpinning explanations that are delivered to users. Machine arguing amounts to the deployment of methods from computational argumentation in AI with suitably mined argumentation frameworks, which provide abstractions of “debates”. Computational argumentation has been widely used to support applications requiring information exchange between AI systems and users, facilitated by the fact that the capability of arguing is pervasive in human affairs and arguing is core to a multitude of human activities: humans argue to explain, interact and exchange information. Our lecture will focus on how machine arguing can serve as the driving force of explanations in AI in different ways, namely: by building explainable systems with argumentative foundations from linguistic data (focusing on reviews), or by extracting argumentative reasoning from existing
AU  - Cocarascu, O
AU  - Rago, A
AU  - Toni, F
DO  - 10.1007/978-3-030-60067-9_3
EP  - 84
PY  - 2020///
SN  - 9783030600662
SP  - 53
TI  - Explanation via Machine Arguing
UR  - http://dx.doi.org/10.1007/978-3-030-60067-9_3
ER  -
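The abstract describes argumentation frameworks as abstractions of "debates". As a rough illustration of that idea only (this sketch is not taken from the chapter; the argument names and attack relation are invented), the snippet below computes the grounded extension of a tiny abstract argumentation framework in the style of Dung, by iterating the characteristic function to its least fixed point:

```python
# Minimal sketch of an abstract argumentation framework: a set of
# arguments plus an attack relation. The grounded extension is found by
# repeatedly collecting the arguments whose every attacker is itself
# attacked by the set accepted so far, until a fixed point is reached.
# Argument names and attacks below are illustrative, not from the chapter.

def grounded_extension(arguments, attacks):
    """Return the grounded extension of (arguments, attacks).

    `attacks` is a set of (attacker, target) pairs.
    """
    extension = set()
    while True:
        acceptable = {
            arg for arg in arguments
            if all(
                any((defender, attacker) in attacks for defender in extension)
                for (attacker, target) in attacks
                if target == arg
            )
        }
        if acceptable == extension:
            return extension
        extension = acceptable

# Toy "debate": a attacks b, and b attacks c.
args = {"a", "b", "c"}
atts = {("a", "b"), ("b", "c")}
print(sorted(grounded_extension(args, atts)))  # ['a', 'c']: a is unattacked and defends c
```

Here `a` is unattacked, so it is accepted; accepting `a` defeats `b`, which reinstates `c`. In a mutual-attack cycle with no unattacked argument, the grounded extension is empty.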