Citation

BibTeX format

@article{Sounderajah:2022:10.1038/s41746-021-00544-y,
author = {Sounderajah, V},
doi = {10.1038/s41746-021-00544-y},
journal = {npj Digital Medicine},
pages = {1--13},
title = {Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study},
url = {http://dx.doi.org/10.1038/s41746-021-00544-y},
volume = {5},
year = {2022}
}
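For readers consuming this record programmatically, the fields can be pulled out with a few lines of Python. This is a minimal sketch that assumes the flat, single-level braces used in the entry above; a dedicated BibTeX parser would be needed for nested braces, escapes, or `@string` macros:

```python
import re

# The BibTeX record shown above, reproduced verbatim for the sketch.
BIBTEX = r"""
@article{Sounderajah:2022:10.1038/s41746-021-00544-y,
author = {Sounderajah, V},
doi = {10.1038/s41746-021-00544-y},
journal = {npj Digital Medicine},
pages = {1--13},
title = {Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study},
url = {http://dx.doi.org/10.1038/s41746-021-00544-y},
volume = {5},
year = {2022}
}
"""

def parse_fields(entry: str) -> dict:
    """Return field name -> value for one flat BibTeX entry.

    Matches `key = {value}` pairs where the value contains no
    nested braces, which holds for the entry above.
    """
    return {k.lower(): v for k, v in re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry)}

fields = parse_fields(BIBTEX)
print(fields["journal"])  # npj Digital Medicine
print(fields["year"])     # 2022
```

The same field values appear in the RIS record below under two-letter tags (`T2` for journal, `PY` for year), so either format can feed a reference manager or script.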

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Artificial intelligence (AI) centred diagnostic systems are increasingly recognized as robust solutions in healthcare delivery pathways. In turn, there has been a concurrent rise in secondary research studies regarding these technologies in order to influence key clinical and policymaking decisions. It is therefore essential that these studies accurately appraise methodological quality and risk of bias within shortlisted trials and reports. In order to assess whether this critical step is performed, we undertook a meta-research study evaluating adherence to the Quality Assessment of Diagnostic Accuracy Studies 2 (QUADAS-2) tool within AI diagnostic accuracy systematic reviews. A literature search was conducted on all studies published from 2000 to December 2020. Of 50 included reviews, 36 performed quality assessment, of which 27 utilised the QUADAS-2 tool. Bias was reported across all four domains of QUADAS-2. 243 of 423 studies (57.5%) across all systematic reviews utilising QUADAS-2 reported a high or unclear risk of bias in the patient selection domain, 110 (26%) reported a high or unclear risk of bias in the index test domain, 121 (28.6%) in the reference standard domain and 157 (37.1%) in the flow and timing domain. This study demonstrates incomplete uptake of quality assessment tools in reviews of AI-based diagnostic accuracy studies and highlights inconsistent reporting across all domains of quality assessment. Poor standards of reporting act as barriers to clinical implementation. The creation of an AI specific extension for quality assessment tools of diagnostic accuracy AI studies may facilitate the safe translation of AI tools into clinical practice.
AU - Sounderajah,V
DO - 10.1038/s41746-021-00544-y
EP - 13
PY - 2022///
SN - 2398-6352
SP - 1
TI - Quality assessment standards in artificial intelligence diagnostic accuracy systematic reviews: a meta-research study
T2 - npj Digital Medicine
UR - http://dx.doi.org/10.1038/s41746-021-00544-y
UR - https://www.nature.com/articles/s41746-021-00544-y
UR - http://hdl.handle.net/10044/1/93163
VL - 5
ER -