Imperial College London

Dr Pablo Noel Perez Guzman

Faculty of Medicine, School of Public Health

Research Postgraduate
 
 
 

Contact

 

p.perez-guzman

 
 

Location

 

G31, Norfolk Place, St Mary's Campus


Summary

 

Publications

Citation

BibTeX format

@article{Gaythorpe:2023:10.1016/j.epidem.2023.100676,
author = {Gaythorpe, K and Fitzjohn, R and Hinsley, W and Imai, N and Knock, E and Perez Guzman, P and Djaafara, B and Fraser, K and Baguelin, M and Ferguson, N},
doi = {10.1016/j.epidem.2023.100676},
journal = {Epidemics: the journal of infectious disease dynamics},
title = {Data pipelines in a public health emergency: the human in the machine},
url = {http://dx.doi.org/10.1016/j.epidem.2023.100676},
volume = {43},
year = {2023}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - In an emergency epidemic response, data providers supply data on a best-faith effort to modellers and analysts who are typically the end user of data collected for other primary purposes such as to inform patient care. Thus, modellers who analyse secondary data have limited ability to influence what is captured. During an emergency response, models themselves are often under constant development and require both stability in their data inputs and flexibility to incorporate new inputs as novel data sources become available. This dynamic landscape is challenging to work with. Here we outline a data pipeline used in the ongoing COVID-19 response in the UK that aims to address these issues. A data pipeline is a sequence of steps to carry the raw data through to a processed and useable model input, along with the appropriate metadata and context. In ours, each data type had an individual processing report, designed to produce outputs that could be easily combined and used downstream. Automated checks were in-built and added as new pathologies emerged. These cleaned outputs were collated at different geographic levels to provide standardised datasets. Finally, a human validation step was an essential component of the analysis pathway and permitted more nuanced issues to be captured. This framework allowed the pipeline to grow in complexity and volume and facilitated the diverse range of modelling approaches employed by researchers. Additionally, every report or modelling output could be traced back to the specific data version that informed it, ensuring reproducibility of results. Our approach has been used to facilitate fast-paced analysis and has evolved over time. Our framework and its aspirations are applicable to many settings beyond COVID-19 data, for example for other outbreaks such as Ebola, or where routine and regular analyses are required.
AU - Gaythorpe,K
AU - Fitzjohn,R
AU - Hinsley,W
AU - Imai,N
AU - Knock,E
AU - Perez Guzman,P
AU - Djaafara,B
AU - Fraser,K
AU - Baguelin,M
AU - Ferguson,N
DO - 10.1016/j.epidem.2023.100676
PY - 2023///
SN - 1755-4365
TI - Data pipelines in a public health emergency: the human in the machine
T2 - Epidemics: the journal of infectious disease dynamics
UR - http://dx.doi.org/10.1016/j.epidem.2023.100676
UR - http://hdl.handle.net/10044/1/106731
VL - 43
ER -
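
The abstract above describes a pipeline in which each data type has its own processing step with automated checks, cleaned outputs are collated into standardised datasets at different geographic levels, and every output carries the version of the data that informed it. The sketch below illustrates that general structure only; the function names, the example check, and the data layout are hypothetical and are not the authors' actual implementation.

from dataclasses import dataclass, field
from typing import Callable

# Hypothetical illustration of the pipeline shape described in the abstract:
# per-source processing, automated checks, collation, and a version tag so
# outputs can be traced back to the raw data snapshot they came from.

@dataclass
class Dataset:
    name: str
    version: str                      # identifies the raw data snapshot used
    records: list[dict] = field(default_factory=list)

def check_non_negative(records: list[dict]) -> None:
    """Example automated check: counts must not be negative."""
    for r in records:
        if r["count"] < 0:
            raise ValueError(f"negative count in record {r}")

def process_source(raw: Dataset, checks: list[Callable]) -> Dataset:
    """One processing step per data type: clean the records, then run checks."""
    cleaned = [r for r in raw.records if r.get("count") is not None]
    for check in checks:
        check(cleaned)
    return Dataset(raw.name, raw.version, cleaned)

def collate(sources: list[Dataset], level: str) -> dict:
    """Combine cleaned outputs into a standardised dataset keyed by region."""
    out: dict = {}
    for src in sources:
        for r in src.records:
            key = r[level]
            out[key] = out.get(key, 0) + r["count"]
    return out

if __name__ == "__main__":
    hospital = Dataset("hospital_admissions", "2023-01-15", [
        {"region": "London", "count": 120},
        {"region": "North West", "count": 85},
    ])
    cleaned = process_source(hospital, [check_non_negative])
    print(collate([cleaned], level="region"), "from data version", cleaned.version)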