Imperial College London

Professor Francesca Toni

Faculty of Engineering, Department of Computing

Professor in Computational Logic
 
 
 

Contact

 

+44 (0)20 7594 8228
f.toni

Location

 

430 Huxley Building, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Potyka:2022,
author = {Potyka, N and Yin, X and Toni, F},
pages = {1--8},
publisher = {CEUR Workshop Proceedings},
title = {On the tradeoff between correctness and completeness in argumentative explainable AI},
url = {https://ceur-ws.org/Vol-3209/8151.pdf},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Explainable AI aims at making the decisions of autonomous systems human-understandable. Argumentation frameworks are a natural tool for this purpose. Among them, bipolar abstract argumentation frameworks seem well suited to explain the effect of features on a classification decision, and their formal properties can potentially be used to derive formal guarantees for explanations. Two particularly interesting properties are correctness (if the explanation says that X affects Y, then X affects Y) and completeness (if X affects Y, then the explanation says that X affects Y). The reinforcement property of bipolar argumentation frameworks has been used as a natural correctness counterpart in previous work. Applied to the classification context, it basically states that attacking features should decrease, and supporting features should increase, the confidence of a classifier. In this short discussion paper, we revisit this idea, discuss potential limitations of considering reinforcement without a corresponding completeness property, and how these limitations can potentially be overcome.
AU - Potyka,N
AU - Yin,X
AU - Toni,F
EP - 8
PB - CEUR Workshop Proceedings
PY - 2022///
SN - 1613-0073
SP - 1
TI - On the tradeoff between correctness and completeness in argumentative explainable AI
UR - https://ceur-ws.org/Vol-3209/8151.pdf
UR - http://hdl.handle.net/10044/1/101641
ER -
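
The two properties defined in the abstract can be read as dual set inclusions between the features an explanation mentions and the features that actually affect the outcome. A minimal sketch of this reading (function names and feature sets are hypothetical illustrations, not from the paper):

```python
# Illustrative sketch: correctness and completeness of an explanation,
# viewed as set inclusions. "explained" = features the explanation
# claims affect the decision; "actual" = features that really do.

def is_correct(explained: set, actual: set) -> bool:
    """Correctness: if the explanation says X affects Y, then X affects Y
    (explained is a subset of actual)."""
    return explained <= actual

def is_complete(explained: set, actual: set) -> bool:
    """Completeness: if X affects Y, then the explanation says so
    (actual is a subset of explained)."""
    return actual <= explained

# A correct-but-incomplete explanation: it only mentions 'income',
# although 'age' also affects the decision (hypothetical example).
actual = {"income", "age"}
explained = {"income"}
print(is_correct(explained, actual))   # True
print(is_complete(explained, actual))  # False
```

This also makes the trade-off in the paper's title visible: shrinking the explanation can only help correctness and hurt completeness, and enlarging it the reverse.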