Citation

BibTeX format

@article{Coupland:2025:10.1186/s12889-025-22705-4,
author = {Coupland, H and Scheidwasser, N and Katsiferis, A and Davies, M and Flaxman, S and Hulvej Rod, N and Mishra, S and Bhatt, S and Unwin, HJT},
doi = {10.1186/s12889-025-22705-4},
journal = {BMC Public Health},
title = {Exploring the potential and limitations of deep learning and explainable AI for longitudinal life course analysis},
url = {http://dx.doi.org/10.1186/s12889-025-22705-4},
volume = {25},
year = {2025}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB  - Background: Understanding the complex interplay between life course exposures, such as adverse childhood experiences and environmental factors, and disease risk is essential for developing effective public health interventions. Traditional epidemiological methods, such as regression models and risk scoring, are limited in their ability to capture the non-linear and temporally dynamic nature of these relationships. Deep learning (DL) and explainable artificial intelligence (XAI) are increasingly applied within healthcare settings to identify influential risk factors and enable personalised interventions. However, significant gaps remain in understanding their utility and limitations, especially for sparse longitudinal life course data and how the influential patterns identified using explainability are linked to underlying causal mechanisms. Methods: We conducted a controlled simulation study to assess the performance of various state-of-the-art DL architectures including CNNs and (attention-based) RNNs against XGBoost and logistic regression. Input data was simulated to reflect a generic and generalisable scenario with different rules used to generate multiple realistic outcomes based upon epidemiological concepts. Multiple metrics were used to assess model performance in the presence of class imbalance and SHAP values were calculated. Results: We find that DL methods can accurately detect dynamic relationships that baseline linear models and tree-based methods cannot. However, there is no one model that consistently outperforms the others across all scenarios. We further identify the superior performance of DL models in handling sparse feature availability over time compared to traditional machine learning approaches. Additionally, we examine the interpretability provided by SHAP values, demonstrating that these explanations often misalign with causal relationships, despite excellent predictive and calibrative performance. Conclusions: These insights provide a foundation for
AU  - Coupland, H
AU  - Scheidwasser, N
AU  - Katsiferis, A
AU  - Davies, M
AU  - Flaxman, S
AU  - Hulvej Rod, N
AU  - Mishra, S
AU  - Bhatt, S
AU  - Unwin, HJT
DO  - 10.1186/s12889-025-22705-4
PY  - 2025///
SN  - 1471-2458
TI  - Exploring the potential and limitations of deep learning and explainable AI for longitudinal life course analysis
T2  - BMC Public Health
UR  - http://dx.doi.org/10.1186/s12889-025-22705-4
VL  - 25
ER -
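
The abstract refers to an XGBoost baseline with SHAP values used to probe feature influence under class imbalance. As a rough, self-contained sketch of that general workflow (not the authors' code: the synthetic data, model settings, and class-weighting choice below are illustrative assumptions), one could rank feature influence like this:

import numpy as np
import xgboost as xgb
import shap

# Hypothetical sparse longitudinal data: 10 features observed at 4 time
# points, flattened into one design matrix, with a rare binary outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 40))
y = (rng.random(1000) < 0.1).astype(int)  # roughly 10% positive class

# Gradient-boosted tree baseline, upweighting the minority class.
model = xgb.XGBClassifier(
    n_estimators=200,
    max_depth=3,
    scale_pos_weight=(y == 0).sum() / (y == 1).sum(),
)
model.fit(X, y)

# Per-subject, per-feature SHAP attributions from the fitted trees.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Mean absolute SHAP value per feature gives a global influence ranking.
influence = np.abs(shap_values).mean(axis=0)
print(np.argsort(influence)[::-1][:5])  # five most influential columns

As the paper's results caution, a ranking like this reflects predictive influence, not causal effect, so it should not be read as identifying underlying mechanisms.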
