Imperial College London

Dr Antonio Rago

Faculty of Engineering, Department of Computing

Research Associate
 
 
 

Contact

 

a.rago Website

 
 

Location

 

429 Huxley Building, South Kensington Campus



Publications

Citation

BibTeX format

@unpublished{Rago:2021,
author = {Rago, A and Albini, E and Baroni, P and Toni, F},
publisher = {arXiv},
title = {Influence-driven explanations for {B}ayesian network classifiers},
url = {http://arxiv.org/abs/2012.05773v2},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - One of the most pressing issues in AI in recent years has been the need to address the lack of explainability of many of its models. We focus on explanations for discrete Bayesian network classifiers (BCs), targeting greater transparency of their inner workings by including intermediate variables in explanations, rather than just the input and output variables as is standard practice. The proposed influence-driven explanations (IDXs) for BCs are systematically generated using the causal relationships between variables within the BC, called influences, which are then categorised by logical requirements, called relation properties, according to their behaviour. These relation properties both provide guarantees beyond heuristic explanation methods and allow the information underpinning an explanation to be tailored to a particular context's and user's requirements, e.g., IDXs may be dialectical or counterfactual. We demonstrate IDXs' capability to explain various forms of BCs, e.g., naive or multi-label, binary or categorical, and also integrate recent approaches to explanations for BCs from the literature. We evaluate IDXs with theoretical and empirical analyses, demonstrating their considerable advantages when compared with existing explanation methods.
AU - Rago, A
AU - Albini, E
AU - Baroni, P
AU - Toni, F
PB - arXiv
PY - 2021///
TI - Influence-driven explanations for Bayesian network classifiers
UR - http://arxiv.org/abs/2012.05773v2
UR - http://hdl.handle.net/10044/1/86474
ER -