Imperial College London

Dr Ben Glocker

Faculty of Engineering, Department of Computing

Professor in Machine Learning for Imaging

Contact

+44 (0)20 7594 8334 · b.glocker

Location

377 Huxley Building, South Kensington Campus


Publications

Citation

BibTeX format

@article{Liu:2022:10.1016/S2589-7500(22)00003-6,
author = {Liu, X and Glocker, B and McCradden, MM and Ghassemi, M and Denniston, AK and Oakden-Rayner, L},
doi = {10.1016/S2589-7500(22)00003-6},
journal = {The Lancet Digital Health},
pages = {e384--e397},
title = {The medical algorithmic audit.},
url = {http://dx.doi.org/10.1016/S2589-7500(22)00003-6},
volume = {4},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Artificial intelligence systems for health care, like any other medical device, have the potential to fail. However, specific qualities of artificial intelligence systems, such as the tendency to learn spurious correlates in training data, poor generalisability to new deployment settings, and a paucity of reliable explainability mechanisms, mean they can yield unpredictable errors that might be entirely missed without proactive investigation. We propose a medical algorithmic audit framework that guides the auditor through a process of considering potential algorithmic errors in the context of a clinical task, mapping the components that might contribute to the occurrence of errors, and anticipating their potential consequences. We suggest several approaches for testing algorithmic errors, including exploratory error analysis, subgroup testing, and adversarial testing, and provide examples from our own work and previous studies. The medical algorithmic audit is a tool that can be used to better understand the weaknesses of an artificial intelligence system and put in place mechanisms to mitigate their impact. We propose that safety monitoring and medical algorithmic auditing should be a joint responsibility between users and developers, and encourage the use of feedback mechanisms between these groups to promote learning and maintain safe deployment of artificial intelligence systems.
AU - Liu,X
AU - Glocker,B
AU - McCradden,MM
AU - Ghassemi,M
AU - Denniston,AK
AU - Oakden-Rayner,L
DO - 10.1016/S2589-7500(22)00003-6
EP - e397
PY - 2022///
SN - 2589-7500
SP - e384
TI - The medical algorithmic audit.
T2 - The Lancet Digital Health
UR - http://dx.doi.org/10.1016/S2589-7500(22)00003-6
UR - https://www.ncbi.nlm.nih.gov/pubmed/35396183
UR - https://www.sciencedirect.com/science/article/pii/S2589750022000036?via%3Dihub
UR - http://hdl.handle.net/10044/1/96344
VL - 4
ER -
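
The abstract above names subgroup testing as one of the approaches for probing algorithmic errors. As a rough illustration of that idea only (not code from the paper; the data, model, and subgroup attribute below are synthetic placeholders), the following Python sketch trains a simple classifier and reports its AUC per subgroup rather than as a single aggregate figure.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Synthetic cohort for illustration: features, a binary outcome, and a
# subgroup attribute (e.g. scanner site or sex) -- all invented here.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 5))
subgroup = rng.integers(0, 2, size=n)  # two invented subgroups, 0 and 1
y = (X[:, 0] + 0.5 * subgroup * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)

# Train on the first half of the cohort, audit on the second half.
X_tr, X_te = X[: n // 2], X[n // 2 :]
y_tr, y_te = y[: n // 2], y[n // 2 :]
g_te = subgroup[n // 2 :]

model = LogisticRegression().fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]

# Compare overall performance with per-subgroup performance.
print(f"overall AUC: {roc_auc_score(y_te, scores):.3f}")
for g in np.unique(g_te):
    mask = g_te == g
    print(f"subgroup {g}: AUC {roc_auc_score(y_te[mask], scores[mask]):.3f} (n={mask.sum()})")

A marked gap between the overall and per-subgroup figures is the kind of signal the audit framework would flag for closer investigation, for example through exploratory error analysis on the weaker subgroup.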