Citation

BibTeX format

@inproceedings{Lertvittayakumjorn:2019:v1/D19-1523,
author = {Lertvittayakumjorn, Piyawat and Toni, Francesca},
booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
doi = {10.18653/v1/D19-1523},
pages = {5195--5205},
publisher = {Association for Computational Linguistics},
title = {Human-grounded evaluations of explanation methods for text classification},
url = {http://dx.doi.org/10.18653/v1/D19-1523},
year = {2019}
}
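A flat single-entry BibTeX record like the one above can be read without a dedicated bibliography library. The sketch below is a minimal illustration only: it assumes one `key = {value}` field per line with no nested braces, which holds for this entry but not for general BibTeX.

```python
import re

# Minimal sketch: extract fields from one flat BibTeX entry.
# Assumes simple `key = {value}` fields with no nested braces;
# general BibTeX needs a real parser.
entry = """@inproceedings{Lertvittayakumjorn:2019:v1/D19-1523,
author = {Lertvittayakumjorn, Piyawat and Toni, Francesca},
doi = {10.18653/v1/D19-1523},
pages = {5195--5205},
title = {Human-grounded evaluations of explanation methods for text classification},
year = {2019}
}"""

# Each match is a (field name, brace-delimited value) pair.
fields = dict(re.findall(r"(\w+)\s*=\s*\{([^}]*)\}", entry))

print(fields["title"])
print(fields["year"])
```

Note that the entry key line (`@inproceedings{…,`) contains no `=`, so the regex skips it and only the field lines are captured.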

RIS format (EndNote, RefMan)

TY  - CPAPER
AB  - Due to the black-box nature of deep learning models, methods for explaining the models' results are crucial to gain trust from humans and support collaboration between AIs and humans. In this paper, we consider several model-agnostic and model-specific explanation methods for CNNs for text classification and conduct three human-grounded evaluations, focusing on different purposes of explanations: (1) revealing model behavior, (2) justifying model predictions, and (3) helping humans investigate uncertain predictions. The results highlight dissimilar qualities of the various explanation methods we consider and show the degree to which these methods could serve for each purpose.
AU  - Lertvittayakumjorn,P
AU  - Toni,F
DO  - 10.18653/v1/D19-1523
EP  - 5205
PB  - Association for Computational Linguistics
PY  - 2019///
SP  - 5195
TI  - Human-grounded evaluations of explanation methods for text classification
UR  - http://dx.doi.org/10.18653/v1/D19-1523
UR  - https://www.aclweb.org/anthology/D19-1523
UR  - http://hdl.handle.net/10044/1/73206
ER  -
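RIS is line-oriented: each line is a two-letter tag, two spaces, a hyphen, a space, then the value, and tags such as AU and UR may repeat. A minimal sketch of reading such a record into a dict (collecting repeated tags into lists; a reference manager would do more validation):

```python
# Minimal sketch: parse RIS "TG  - value" lines into a dict,
# collecting repeated tags (AU, UR, ...) into lists.
ris = """TY  - CPAPER
AU  - Lertvittayakumjorn,P
AU  - Toni,F
DO  - 10.18653/v1/D19-1523
PY  - 2019///
TI  - Human-grounded evaluations of explanation methods for text classification
UR  - http://dx.doi.org/10.18653/v1/D19-1523
UR  - https://www.aclweb.org/anthology/D19-1523
ER  - """

record = {}
for line in ris.splitlines():
    # An RIS data line is "<2-letter tag>  - <value>".
    if len(line) >= 6 and line[2:6] == "  - ":
        tag, value = line[:2], line[6:].strip()
        record.setdefault(tag, []).append(value)

print(record["AU"])
```

Repeated tags are why values are stored as lists: this record has two AU lines and two UR lines, and flattening them into single values would silently drop data.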
