Imperial College London

Professor Daniel Rueckert

Faculty of Engineering, Department of Computing

Professor of Visual Information Processing

Contact

 

+44 (0)20 7594 8333 | d.rueckert

Location

 

568 Huxley Building, South Kensington Campus

Publications

1010 results found

Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA et al., 2024, Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review., IEEE Trans Med Imaging, Vol: 43, Pages: 846-859

Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.

Journal article

Rueckert T, Rueckert D, Palm C, 2024, Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art., Comput Biol Med, Vol: 169

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.

Journal article

Küstner T, Hammernik K, Rueckert D, Hepp T, Gatidis S et al., 2024, Predictive uncertainty in deep learning-based MR image reconstruction using deep ensembles: Evaluation on the fastMRI data set., Magn Reson Med

PURPOSE: To estimate pixel-wise predictive uncertainty for deep learning-based MR image reconstruction and to examine the impact of domain shifts and architecture robustness. METHODS: Uncertainty prediction could provide a measure for robustness of deep learning (DL)-based MR image reconstruction from undersampled data. DL methods bear the risk of inducing reconstruction errors like in-painting of unrealistic structures or missing pathologies. These errors may be obscured by the visual realism of DL reconstruction and thus remain undiscovered. Furthermore, most methods are task-agnostic and not well calibrated to domain shifts. We propose a strategy that estimates aleatoric (data) and epistemic (model) uncertainty, which entails training a deep ensemble (epistemic) with nonnegative log-likelihood (aleatoric) loss in addition to the conventionally applied loss terms. The proposed procedure can be paired with any DL reconstruction, enabling investigations of their predictive uncertainties on a pixel level. Five different architectures were investigated on the fastMRI database. The impact of in-distributional and out-of-distributional data, with changes to undersampling pattern, imaging contrast, imaging orientation, anatomy, and pathology, on the examined uncertainty was explored. RESULTS: Predictive uncertainty could be captured and showed good correlation to normalized mean squared error. Uncertainty was primarily focused along the aliased anatomies and on hyperintense and hypointense regions. The proposed uncertainty measure was able to detect disease prevalence shifts. Distinct predictive uncertainty patterns were observed for changing network architectures. CONCLUSION: The proposed approach enables aleatoric and epistemic uncertainty prediction for DL-based MR reconstruction with an interpretable examination on a pixel level.

Journal article
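
To make the ensemble-based uncertainty decomposition in the abstract above concrete, here is a minimal sketch (not the authors' implementation; the array shapes, the Gaussian negative-log-likelihood form of the aleatoric loss, and all parameter names are assumptions) of how per-pixel aleatoric and epistemic uncertainty can be combined from a trained deep ensemble:

    import numpy as np

    def gaussian_nll(pred_mean, pred_var, target, eps=1e-6):
        # Per-pixel Gaussian negative log-likelihood with a predicted variance;
        # assumed here as the form of the aleatoric training loss.
        var = np.maximum(pred_var, eps)
        return 0.5 * (np.log(var) + (target - pred_mean) ** 2 / var)

    def ensemble_uncertainty(means, variances):
        # means, variances: arrays of shape (n_members, H, W) from the ensemble.
        aleatoric = variances.mean(axis=0)   # average predicted data noise
        epistemic = means.var(axis=0)        # disagreement between ensemble members
        return aleatoric, epistemic, aleatoric + epistemic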

Kreitner L, Paetzold JC, Rauch N, Chen C, Hagag AM, Fayed AE, Sivaprasad S, Rausch S, Weichsel J, Menze BH, Harders M, Knier B, Rueckert D, Menten MJ et al., 2024, Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations., IEEE Trans Med Imaging, Vol: PP

Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.

Journal article

Mueller TT, Usynin D, Paetzold JC, Braren R, Rueckert D, Kaissis G et al., 2024, Differential privacy guarantees for analytics and machine learning on graphs: a survey of results, Journal of Privacy and Confidentiality, Vol: 14

We study differential privacy (DP) in the context of graph-structured data and discuss its formulations and applications to the publication of graphs and their associated statistics, graph generation methods, and machine learning on graph-based data, including graph neural networks (GNNs). Interpreting DP guarantees in the context of graph-structured data can be challenging, as individual data points are interconnected (often non-linearly or sparsely). This differentiates graph databases from tabular databases, which are usually used in DP, and complicates related concepts like the derivation of per-sample gradients in GNNs. The problem is exacerbated by an absence of a single, well-established formulation of DP in graph settings. A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise different formulations of DP on graphs, and discuss challenges and promising applications, including the GNN domain. We compare and separate works into methods that privately estimate graph data (either by statistical analysis or using GNNs), and methods that aim at generating new graph data. We conclude our work with a discussion of open questions and potential directions for further research in this area.

Journal article

Pan J, Hamdi M, Huang W, Hammernik K, Kuestner T, Rueckert D et al., 2024, Unrolled and rapid motion-compensated reconstruction for cardiac CINE MRI., Med Image Anal, Vol: 91

In recent years, Motion-Compensated MR reconstruction (MCMR) has emerged as a promising approach for cardiac MR (CMR) imaging reconstruction. MCMR estimates cardiac motion and incorporates this information in the reconstruction. However, two obstacles prevent the practical use of MCMR in clinical situations: First, inaccurate motion estimation often leads to inferior CMR reconstruction results. Second, the motion estimation frequently leads to a long processing time for the reconstruction. In this work, we propose a learning-based and unrolled MCMR framework that can perform precise and rapid CMR reconstruction. We achieve accurate reconstruction by developing a joint optimization between the motion estimation and reconstruction, in which a deep learning-based motion estimation framework is unrolled within an iterative optimization procedure. With progressive iterations, a mutually beneficial interaction can be established in which the reconstruction quality is improved with more accurate motion estimation. Further, we propose a groupwise motion estimation framework to speed up the MCMR process. A registration template based on the cardiac sequence average is introduced, while the motion estimation is conducted between the cardiac frames and the template. By applying this framework, cardiac sequence registration can be accomplished with linear time complexity. Experiments on 43 in-house acquired 2D CINE datasets indicate that the proposed unrolled MCMR framework can deliver artifact-free motion estimation and high-quality CMR reconstruction even for imaging acceleration rates up to 20x. We compare our approach with state-of-the-art reconstruction methods and it outperforms them quantitatively and qualitatively in all adapted metrics across all acceleration rates.

Journal article

Lagogiannis I, Meissen F, Kaissis G, Rueckert D et al., 2024, Unsupervised Pathology Detection: A Deep Dive Into the State of the Art., IEEE Trans Med Imaging, Vol: 43, Pages: 241-252

Deep unsupervised approaches are gathering increased attention for applications such as pathology detection and segmentation in medical images since they promise to alleviate the need for large labeled datasets and are more generalizable than their supervised counterparts in detecting any kind of rare pathology. As the Unsupervised Anomaly Detection (UAD) literature continuously grows and new paradigms emerge, it is vital to continuously evaluate and benchmark new methods in a common framework, in order to reassess the state-of-the-art (SOTA) and identify promising research directions. To this end, we evaluate a diverse selection of cutting-edge UAD methods on multiple medical datasets, comparing them against the established SOTA in UAD for brain MRI. Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set the new SOTA in a variety of modalities and datasets. Additionally, we show that such methods are capable of benefiting from recently developed self-supervised pre-training algorithms, further increasing their performance. Finally, we perform a series of experiments in order to gain further insights into some unique characteristics of selected models and datasets. Our code can be found under https://github.com/iolag/UPD_study/.

Journal article

Åkerlund CAI, Holst A, Bhattacharyay S, Stocchetti N, Steyerberg E, Smielewski P, Menon DK, Ercole A, Nelson DW, CENTER-TBI participants and investigators et al., 2024, Clinical descriptors of disease trajectories in patients with traumatic brain injury in the intensive care unit (CENTER-TBI): a multicentre observational cohort study., Lancet Neurol, Vol: 23, Pages: 71-80

BACKGROUND: Patients with traumatic brain injury are a heterogeneous population, and the most severely injured individuals are often treated in an intensive care unit (ICU). The primary injury at impact, and the harmful secondary events that can occur during the first week of the ICU stay, will affect outcome in this vulnerable group of patients. We aimed to identify clinical variables that might distinguish disease trajectories among patients with traumatic brain injury admitted to the ICU. METHODS: We used data from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) prospective observational cohort study. We included patients aged 18 years or older with traumatic brain injury who were admitted to the ICU at one of the 65 CENTER-TBI participating centres, which range from large academic hospitals to small rural hospitals. For every patient, we obtained pre-injury data and injury features, clinical characteristics on admission, demographics, physiological parameters, laboratory features, brain biomarkers (ubiquitin carboxy-terminal hydrolase L1 [UCH-L1], S100 calcium-binding protein B [S100B], tau, neurofilament light [NFL], glial fibrillary acidic protein [GFAP], and neuron-specific enolase [NSE]), and information about intracranial pressure lowering treatments during the first 7 days of ICU stay. To identify clinical variables that might distinguish disease trajectories, we applied a novel clustering method to these data, which was based on a mixture of probabilistic graph models with a Markov chain extension. The relation of clusters to the extended Glasgow Outcome Scale (GOS-E) was investigated. FINDINGS: Between Dec 19, 2014, and Dec 17, 2017, 4509 patients with traumatic brain injury were recruited into the CENTER-TBI core dataset, of whom 1728 were eligible for this analysis. Glucose variation (defined as the difference between daily maximum and minimum glucose concentrations) and brain biomarkers (S100B, NSE

Journal article

Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard N-E, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D et al., 2024, How AI May Transform Musculoskeletal Imaging., Radiology, Vol: 310

While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.

Journal article

Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard N-E, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D et al., 2024, Erratum for: How AI May Transform Musculoskeletal Imaging., Radiology, Vol: 310

Journal article

Tänzer M, Ferreira P, Scott A, Khalique Z, Dwornik M, Rajakulasingam R, de Silva R, Pennell D, Yang G, Rueckert D, Nielles-Vallespin S et al., 2024, Correction to: Faster Diffusion Cardiac MRI with Deep Learning-Based Breath Hold Reduction, Medical Image Understanding and Analysis, Publisher: Springer International Publishing, Pages: C1-C1, ISBN: 9783031120527

Book chapter

Pan J, Huang W, Rueckert D, Küstner T, Hammernik K et al., 2024, Reconstruction-driven motion estimation for motion-compensated MR CINE imaging, IEEE Transactions on Medical Imaging, Pages: 1-1, ISSN: 0278-0062

Journal article

Haft PT, Huang W, Cruz G, Rueckert D, Zimmer VA, Hammernik K et al., 2024, Neural Implicit k-space with Trainable Periodic Activation Functions for Cardiac MR Imaging, Bildverarbeitung für die Medizin 2024, Publisher: Springer Fachmedien Wiesbaden, Pages: 82-87, ISBN: 9783658440367

Book chapter

Tänzer M, Wang F, Qiao M, Bai W, Rueckert D, Yang G, Nielles-Vallespin S et al., 2024, T1/T2 Relaxation Temporal Modelling from Accelerated Acquisitions Using a Latent Transformer, Statistical Atlases and Computational Models of the Heart. Regular and CMRxRecon Challenge Papers, Publisher: Springer Nature Switzerland, Pages: 293-302, ISBN: 9783031524479

Book chapter

Meng Q, Bai W, O'Regan DP, Rueckert D et al., 2023, DeepMesh: mesh-based cardiac motion tracking using deep learning, IEEE Transactions on Medical Imaging, ISSN: 0278-0062

3D motion estimation from cine cardiac magnetic resonance (CMR) images is important for the assessment of cardiac function and the diagnosis of cardiovascular diseases. Current state-of-the-art methods focus on estimating dense pixel-/voxel-wise motion fields in image space, which ignores the fact that motion estimation is only relevant and useful within the anatomical objects of interest, e.g., the heart. In this work, we model the heart as a 3D mesh consisting of epi- and endocardial surfaces. We propose a novel learning framework, DeepMesh, which propagates a template heart mesh to a subject space and estimates the 3D motion of the heart mesh from CMR images for individual subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an individual subject is first reconstructed from the template mesh. Mesh-based 3D motion fields with respect to the end-diastolic frame are then estimated from 2D short- and long-axis CMR images. By developing a differentiable mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information from multiple anatomical views for 3D mesh reconstruction and mesh motion estimation. The proposed method estimates vertex-wise displacement and thus maintains vertex correspondences between time frames, which is important for the quantitative assessment of cardiac function across different subjects and populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank. We focus on 3D motion estimation of the left ventricle in this work. Experimental results show that the proposed method quantitatively and qualitatively outperforms other image-based and mesh-based cardiac motion tracking methods.

Journal article

Marcus A, Bentley P, Rueckert D, 2023, Concurrent ischemic lesion age estimation and segmentation of CT brain using a transformer-based network, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 3463-3473, ISSN: 0278-0062

The cornerstone of stroke care is expedient management that varies depending on the time since stroke onset. Consequently, clinical decision making is centered on accurate knowledge of timing and often requires a radiologist to interpret Computed Tomography (CT) of the brain to confirm the occurrence and age of an event. These tasks are particularly challenging due to the subtle expression of acute ischemic lesions and the dynamic nature of their appearance. Automation efforts have not yet applied deep learning to estimate lesion age and have treated these two tasks independently, thereby overlooking their inherent complementary relationship. To leverage this, we propose a novel end-to-end multi-task transformer-based network optimized for concurrent segmentation and age estimation of cerebral ischemic lesions. By utilizing gated positional self-attention and CT-specific data augmentation, the proposed method can capture long-range spatial dependencies while maintaining its ability to be trained from scratch under low-data regimes commonly found in medical imaging. Furthermore, to better combine multiple predictions, we incorporate uncertainty by utilizing quantile loss to facilitate estimating a probability density function of lesion age. The effectiveness of our model is then extensively evaluated on a clinical dataset consisting of 776 CT images from two medical centers. Experimental results demonstrate that our method obtains promising performance, with an area under the curve (AUC) of 0.933 for classifying lesion ages ≤4.5 hours compared to 0.858 using a conventional approach, and outperforms task-specific state-of-the-art algorithms.

Journal article
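
The quantile loss mentioned in the abstract above can be illustrated with a short PyTorch sketch (the quantile levels and tensor layout here are illustrative assumptions, not the paper's configuration):

    import torch

    def quantile_loss(pred, target, quantiles=(0.1, 0.5, 0.9)):
        # Pinball loss; pred carries one output channel per quantile level, so the
        # predicted quantiles together approximate a density over lesion age.
        losses = []
        for i, q in enumerate(quantiles):
            err = target - pred[:, i]
            losses.append(torch.maximum(q * err, (q - 1) * err).mean())
        return sum(losses) / len(quantiles)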

Hölzl FA, Rueckert D, Kaissis G, 2023, Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models, Pages: 11-22

Differentially Private Stochastic Gradient Descent (DP-SGD) limits the amount of private information deep learning models can memorize during training. This is achieved by clipping and adding noise to the model's gradients, and thus networks with more parameters require proportionally stronger perturbation. As a result, large models have difficulties learning useful information, rendering training with DP-SGD exceedingly difficult on more challenging training tasks. Recent research has focused on combating this challenge through training adaptations such as heavy data augmentation and large batch sizes. However, these techniques further increase the computational overhead of DP-SGD and reduce its practical applicability. In this work, we propose using the principle of sparse model design to solve precisely such complex tasks with fewer parameters, higher accuracy, and in less time, thus serving as a promising direction for DP-SGD. We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy. Using equivariant networks, we show that small and efficient architecture design can outperform current state-of-the-art with substantially lower computational requirements. On CIFAR-10, we achieve an increase of up to 9% in accuracy while reducing the computation time by more than 85%. Our results are a step towards efficient model architectures that make optimal use of their parameters and bridge the privacy-utility gap between private and non-private deep learning for computer vision.

Conference paper
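
For readers unfamiliar with the clipping-and-noising step that DP-SGD applies to per-sample gradients (the mechanism the abstract above refers to), a minimal sketch follows; the flattened gradient layout and the default parameters are assumptions, not the paper's setup:

    import torch

    def dp_sgd_gradient(per_sample_grads, clip_norm=1.0, noise_multiplier=1.0):
        # per_sample_grads: (batch, n_params) flattened per-example gradients.
        norms = per_sample_grads.norm(dim=1, keepdim=True)
        clipped = per_sample_grads * (clip_norm / (norms + 1e-6)).clamp(max=1.0)
        noise = noise_multiplier * clip_norm * torch.randn_like(per_sample_grads[0])
        return (clipped.sum(dim=0) + noise) / per_sample_grads.shape[0]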

Nasirigerdeh R, Rueckert D, Kaissis G, 2023, Utility-preserving Federated Learning, Pages: 55-65

We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training independent of the data distribution across the clients. We demonstrate that UPFL can fully take advantage of the momentum and weight decay techniques compared to centralized training, but it incurs substantial communication overhead. Ordinary federated learning, on the other hand, provides much higher communication efficiency, but it can partially benefit from the aforementioned techniques to improve utility. Given that, we propose a method called weighted gradient accumulation to gain more benefit from the momentum and weight decay akin to UPFL, while providing practical communication efficiency similar to ordinary federated learning.

Conference paper
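
As context for the aggregation step discussed above, the sketch below shows a generic server-side accumulation of client gradients weighted by local sample counts; it only illustrates the general idea and is not the specific weighted gradient accumulation scheme proposed in the paper:

    import torch

    def aggregate_client_gradients(client_grads, client_sizes):
        # client_grads: one list of gradient tensors per client;
        # client_sizes: number of local samples per client, used as weights.
        total = float(sum(client_sizes))
        aggregated = [torch.zeros_like(g) for g in client_grads[0]]
        for grads, n in zip(client_grads, client_sizes):
            for acc, g in zip(aggregated, grads):
                acc += (n / total) * g
        return aggregated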

Wiltgen T, McGinnis J, Schlaeger S, Voon C, Berthele A, Bischl D, Grundl L, Will N, Metz M, Schinz D, Sepp D, Prucker P, Schmitz-Koep B, Zimmer C, Menze B, Rueckert D, Hemmer B, Kirschke J, Mühlau M, Wiestler B et al., 2023, LST-AI: a Deep Learning Ensemble for Accurate MS Lesion Segmentation., medRxiv

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced a lesion segmentation tool, LST, engineered with a lesion growth algorithm (LST-LGA). While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. Here, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D-UNets. LST-AI specifically addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 MS pairs of T1w and FLAIR images, collected in-house from a 3T MRI scanner, and expert neuroradiologists manually segmented the utilized lesion maps for training. LST-AI additionally includes a lesion location annotation tool, labeling lesion location according to the 2017 McDonald criteria (periventricular, infratentorial, juxtacortical, subcortical). We conduct evaluations on 270 test cases -comprising both in-house (n=167) and publicly available data (n=103)-using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.5, outperforming LST-LGA, LST-LPA, SAMSEG, and the popular nnUNet framework, which all scored below 0.45. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63-surpassing all other competing models at the time of the challenge. With increasing lesion volume, the lesion detection rate rapidly increased with a detection ra

Journal article
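
A minimal sketch of the composite loss described in the abstract above (binary cross-entropy plus Tversky loss); the alpha/beta values and the mixing weight are illustrative assumptions, not the settings used in LST-AI:

    import torch

    def tversky_loss(prob, target, alpha=0.3, beta=0.7, eps=1e-6):
        # prob: sigmoid probabilities; target: binary lesion mask (float tensor).
        tp = (prob * target).sum()
        fp = (prob * (1 - target)).sum()
        fn = ((1 - prob) * target).sum()
        return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

    def composite_loss(prob, target, w=0.5):
        # Weighted sum of binary cross-entropy and Tversky loss.
        bce = torch.nn.functional.binary_cross_entropy(prob, target)
        return w * bce + (1 - w) * tversky_loss(prob, target)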

Graf R, Schmitt J, Schlaeger S, Möller HK, Sideri-Lampretsa V, Sekuboyina A, Krieg SM, Wiestler B, Menze B, Rueckert D, Kirschke JS et al., 2023, Denoising diffusion-based MRI to CT image translation enables automated spinal segmentation., Eur Radiol Exp, Vol: 7

BACKGROUND: Automated segmentation of spinal magnetic resonance imaging (MRI) plays a vital role both scientifically and clinically. However, accurately delineating posterior spine structures is challenging. METHODS: This retrospective study, approved by the ethical committee, involved translating T1-weighted and T2-weighted images into computed tomography (CT) images in a total of 263 pairs of CT/MR series. Landmark-based registration was performed to align image pairs. We compared two-dimensional (2D) paired methods (Pix2Pix, denoising diffusion implicit models (DDIM) image mode, DDIM noise mode) and unpaired methods (SynDiff, contrastive unpaired translation) for image-to-image translation, using peak signal-to-noise ratio as the quality measure. A publicly available segmentation network segmented the synthesized CT datasets, and Dice similarity coefficients (DSC) were evaluated on in-house test sets and the "MRSpineSeg Challenge" volumes. The 2D findings were extended to three-dimensional (3D) Pix2Pix and DDIM. RESULTS: 2D paired methods and SynDiff exhibited similar translation performance and DSC on paired data. DDIM image mode achieved the highest image quality. SynDiff, Pix2Pix, and DDIM image mode demonstrated similar DSC (0.77). For craniocaudal axis rotations, at least two landmarks per vertebra were required for registration. The 3D translation outperformed the 2D approach, resulting in improved DSC (0.80) and anatomically accurate segmentations with higher spatial resolution than that of the original MRI series. CONCLUSIONS: Two landmarks per vertebra registration enabled paired image-to-image translation from MRI to CT and outperformed all unpaired approaches. The 3D techniques provided anatomically correct segmentations, avoiding underprediction of small structures like the spinous process. RELEVANCE STATEMENT: This study addresses the unresolved issue of translating spinal MRI to CT, making CT-based tools usable for MRI data. It generates whole spine

Journal article
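
Peak signal-to-noise ratio, used above as the translation quality measure, can be computed as in the sketch below (the data-range handling is an assumption, not necessarily the study's exact definition):

    import numpy as np

    def psnr(pred, ref, data_range=None):
        # Peak signal-to-noise ratio in decibels between a synthesised and a reference image.
        pred, ref = pred.astype(np.float64), ref.astype(np.float64)
        if data_range is None:
            data_range = ref.max() - ref.min()
        mse = np.mean((pred - ref) ** 2)
        return 10.0 * np.log10(data_range ** 2 / mse)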

Qiao M, Wang S, Qiu H, Marvao AD, O'Regan D, Rueckert D, Bai W et al., 2023, CHeart: a conditional spatio-temporal generative model for cardiac anatomy, IEEE Transactions on Medical Imaging, ISSN: 0278-0062

Two key questions in cardiac image analysis are to assess the anatomy and motion of the heart from images; and to understand how they are associated with non-imaging clinical factors such as gender, age and diseases. While the first question can often be addressed by image segmentation and motion tracking algorithms, our capability to model and answer the second question is still limited. In this work, we propose a novel conditional generative model to describe the 4D spatio-temporal anatomy of the heart and its interaction with non-imaging clinical factors. The clinical factors are integrated as the conditions of the generative modelling, which allows us to investigate how these factors influence the cardiac anatomy. We evaluate the model performance in mainly two tasks, anatomical sequence completion and sequence generation. The model achieves high performance in anatomical sequence completion, comparable to or outperforming other state-of-the-art generative models. In terms of sequence generation, given clinical conditions, the model can generate realistic synthetic 4D sequential anatomies that share similar distributions with the real data. We will share the code and the trained generative model at https://github.com/MengyunQ/CHeart.

Journal article

Leingang O, Riedl S, Mai J, Reiter GS, Faustmann G, Fuchs P, Scholl HPN, Sivaprasad S, Rueckert D, Lotery A, Schmidt-Erfurth U, Bogunović H et al., 2023, Automated deep learning-based AMD detection and staging in real-world OCT datasets (PINNACLE study report 5), Scientific Reports, Vol: 13, ISSN: 2045-2322

Real-world retinal optical coherence tomography (OCT) scans are available in abundance in primary and secondary eye care centres. They contain a wealth of information to be analyzed in retrospective studies. The associated electronic health records alone are often not enough to generate a high-quality dataset for clinical, statistical, and machine learning analysis. We have developed a deep learning-based age-related macular degeneration (AMD) stage classifier, to efficiently identify the first onset of early/intermediate (iAMD), atrophic (GA), and neovascular (nAMD) stage of AMD in retrospective data. We trained a two-stage convolutional neural network to classify macula-centered 3D volumes from Topcon OCT images into 4 classes: Normal, iAMD, GA and nAMD. In the first stage, a 2D ResNet50 is trained to identify the disease categories on the individual OCT B-scans while in the second stage, four smaller models (ResNets) use the concatenated B-scan-wise output from the first stage to classify the entire OCT volume. Classification uncertainty estimates are generated with Monte-Carlo dropout at inference time. The model was trained on a real-world OCT dataset, 3765 scans of 1849 eyes, and extensively evaluated, where it reached an average ROC-AUC of 0.94 in a real-world test set.

Journal article
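
The Monte-Carlo dropout step mentioned above (dropout kept active at inference, with repeated stochastic forward passes averaged) can be sketched as follows; the model interface and the number of samples are assumptions, not the paper's configuration:

    import torch

    def mc_dropout_predict(model, volume, n_samples=20):
        # Keep only dropout layers in training mode so repeated forward passes differ.
        model.eval()
        for m in model.modules():
            if m.__class__.__name__.startswith("Dropout"):
                m.train()
        with torch.no_grad():
            probs = torch.stack([torch.softmax(model(volume), dim=1)
                                 for _ in range(n_samples)])
        return probs.mean(dim=0), probs.std(dim=0)   # mean prediction, uncertainty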

Buchner JA, Peeken JC, Etzel L, Ezhov I, Mayinger M, Christ SM, Brunner TB, Wittig A, Menze BH, Zimmer C, Meyer B, Guckenberger M, Andratschke N, El Shafie RA, Debus J, Rogers S, Riesterer O, Schulze K, Feldmann HJ, Blanck O, Zamboglou C, Ferentinos K, Bilger A, Grosu AL, Wolff R, Kirschke JS, Eitz KA, Combs SE, Bernhardt D, Rueckert D, Piraud M, Wiestler B, Kofler F et al., 2023, Identifying core MRI sequences for reliable automatic brain metastasis segmentation., Radiother Oncol, Vol: 188

BACKGROUND: Many automatic approaches to brain tumor segmentation employ multiple magnetic resonance imaging (MRI) sequences. The goal of this project was to compare different combinations of input sequences to determine which MRI sequences are needed for effective automated brain metastasis (BM) segmentation. METHODS: We analyzed preoperative imaging (T1-weighted sequence ± contrast-enhancement (T1/T1-CE), T2-weighted sequence (T2), and T2 fluid-attenuated inversion recovery (T2-FLAIR) sequence) from 339 patients with BMs from seven centers. A baseline 3D U-Net with all four sequences and six U-Nets with plausible sequence combinations (T1-CE, T1, T2-FLAIR, T1-CE + T2-FLAIR, T1-CE + T1 + T2-FLAIR, T1-CE + T1) were trained on 239 patients from two centers and subsequently tested on an external cohort of 100 patients from five centers. RESULTS: The model based on T1-CE alone achieved the best segmentation performance for BM segmentation with a median Dice similarity coefficient (DSC) of 0.96. Models trained without T1-CE performed worse (T1-only: DSC = 0.70 and T2-FLAIR-only: DSC = 0.73). For edema segmentation, models that included both T1-CE and T2-FLAIR performed best (DSC = 0.93), while the remaining four models without simultaneous inclusion of these both sequences reached a median DSC of 0.81-0.89. CONCLUSIONS: A T1-CE-only protocol suffices for the segmentation of BMs. The combination of T1-CE and T2-FLAIR is important for edema segmentation. Missing either T1-CE or T2-FLAIR decreases performance. These findings may improve imaging routines by omitting unnecessary sequences, thus allowing for faster procedures in daily clinical practice while enabling optimal neural network-based target definitions.

Journal article
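
The Dice similarity coefficient used throughout the comparison above is the standard overlap measure sketched below (a generic formulation, not code from the study):

    import numpy as np

    def dice(pred, truth, eps=1e-6):
        # Dice similarity coefficient between two binary segmentation masks.
        pred, truth = pred.astype(bool), truth.astype(bool)
        intersection = np.logical_and(pred, truth).sum()
        return (2.0 * intersection + eps) / (pred.sum() + truth.sum() + eps)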

Raab R, Küderle A, Zakreuskaya A, Stern AD, Klucken J, Kaissis G, Rueckert D, Boll S, Eils R, Wagener H, Eskofier BM et al., 2023, Federated electronic health records for the European Health Data Space., Lancet Digit Health, Vol: 5, Pages: e840-e847

The European Commission's draft for the European Health Data Space (EHDS) aims to empower citizens to access their personal health data and share it with physicians and other health-care providers. It further defines procedures for the secondary use of electronic health data for research and development. Although this planned legislation is undoubtedly a step in the right direction, implementation approaches could potentially result in centralised data silos that pose data privacy and security risks for individuals. To address this concern, we propose federated personal health data spaces, a novel architecture for storing, managing, and sharing personal electronic health records that puts citizens at the centre-both conceptually and technologically. The proposed architecture puts citizens in control by storing personal health data on a combination of personal devices rather than in centralised data silos. We describe how this federated architecture fits within the EHDS and can enable the same features as centralised systems while protecting the privacy of citizens. We further argue that increased privacy and control do not contradict the use of electronic health data for research and development. Instead, data sovereignty and transparency encourage active participation in studies and data sharing. This combination of privacy-by-design and transparent, privacy-preserving data sharing can enable health-care leaders to break the privacy-exploitation barrier, which currently limits the secondary use of health data in many cases.

Journal article

Hagag AM, Kaye R, Hoang V, Riedl S, Anders P, Stuart B, Traber G, Appenzeller-Herzog C, Schmidt-Erfurth U, Bogunovic H, Scholl HP, Prevost T, Fritsche L, Rueckert D, Sivaprasad S, Lotery AJ et al., 2023, Systematic review of prognostic factors associated with progression to late age-related macular degeneration: Pinnacle study report 2., Surv Ophthalmol

There is a need to identify accurately prognostic factors that determine the progression of intermediate to late-stage age-related macular degeneration (AMD). Currently, clinicians cannot provide individualised prognoses of disease progression. Moreover, enriching clinical trials with rapid progressors may facilitate delivery of shorter intervention trials aimed at delaying or preventing progression to late AMD. Thus, we performed a systematic review to outline and assess the accuracy of reporting prognostic factors for the progression of intermediate to late AMD. A meta-analysis was originally planned. Synonyms of AMD and disease progression were used to search Medline and EMBASE for articles investigating AMD progression published between 1991 and 2021. Initial search results included 3229 articles. Predetermined eligibility criteria were employed to systematically screen papers by two reviewers working independently and in duplicate. Quality appraisal and data extraction were performed by a team of reviewers. Only 6 studies met the eligibility criteria. Based on these articles, exploratory prognostic factors for progression of intermediate to late AMD included phenotypic features (e.g. location and size of drusen), age, smoking status, ocular and systemic co-morbidities, race, and genotype. Overall, study heterogeneity precluded reporting by forest plots and meta-analysis. The most commonly reported prognostic factors were baseline drusen volume/size, which was associated with progression to neovascular AMD, and outer retinal thinning linked to progression to geographic atrophy. In conclusion, poor methodological quality of included studies warrants cautious interpretation of our findings. Rigorous studies are warranted to provide robust evidence in the future.

Journal article

Marcus A, Bentley P, Rueckert D, 2023, Stroke Outcome and Evolution Prediction from CT Brain Using a Spatiotemporal Diffusion Autoencoder, Machine Learning in Clinical Neuroimaging. MLCN 2023. Lecture Notes in Computer Science, vol 14312.

Journal article

Wright R, Gomez A, Zimmer VA, Toussaint N, Khanal B, Matthew J, Skelton E, Kainz B, Rueckert D, V Hajnal J, Schnabel JA et al., 2023, Fast fetal head compounding from multi-view 3D ultrasound, Medical Image Analysis, Vol: 89, ISSN: 1361-8415

Journal article

Taylor TRP, Menten MJ, Rueckert D, Sivaprasad S, Lotery AJ et al., 2023, The role of the retinal vasculature in age-related macular degeneration: a spotlight on OCTA, Eye, ISSN: 0950-222X

Journal article

Cruz G, Hammernik K, Kuestner T, Velasco C, Hua A, Ismail TF, Rueckert D, Botnar RM, Prieto C et al., 2023, Single-heartbeat cardiac cine imaging via jointly regularized non-rigid motion corrected reconstruction, NMR in Biomedicine, Vol: 36, Pages: 1-16, ISSN: 0952-3480

PURPOSE: Develop a novel approach for 2D breath-hold cardiac cine from a single heartbeat, by combining cardiac motion corrected reconstructions and non-rigidly aligned patch-based regularization. METHODS: Conventional cardiac cine imaging is obtained via motion resolved reconstructions of data acquired over multiple heartbeats. Here, we achieve single-heartbeat cine imaging by incorporating non-rigid cardiac motion correction into the reconstruction of each cardiac phase, in conjunction with a motion-aligned patch-based regularization. The proposed Motion Corrected CINE (MC-CINE) incorporates all acquired data into the reconstruction of each (motion corrected) cardiac phase, resulting in a better posed problem than motion resolved approaches. MC-CINE was compared to iterative SENSE and XD-GRASP in fourteen healthy subjects in terms of image sharpness, reader scoring (1-5 range) and reader ranking (1-9 range) of image quality, and single-slice left ventricular assessment. RESULTS: MC-CINE was significantly superior to both iterative SENSE and XD-GRASP using 20, 2 and 1 heartbeat(s). Iterative SENSE, XD-GRASP and MC-CINE achieved sharpness of 74%, 74% and 82% using 20 heartbeats, and 53%, 66% and 82% with 1 heartbeat, respectively. Corresponding results for reader scores were 4.0, 4.7 and 4.9, with 20 heartbeats, and 1.1, 3.0 and 3.9 with 1 heartbeat. Corresponding results for reader rankings were 5.3, 7.3 and 8.6 with 20 heartbeats, and 1.0, 3.2 and 5.4 with 1 heartbeat. MC-CINE using a single heartbeat presented non-significant differences in image quality to iterative SENSE with 20 heartbeats. MC-CINE and XD-GRASP at one heartbeat both presented a non-significant negative bias of <2% in ejection fraction relative to the reference iterative SENSE. CONCLUSION: The proposed MC-CINE significantly improves image quality relative to iterative SENSE and XD-GRASP, enabling 2D cine from a single heartbeat.

Journal article

Al-Jibury E, King JWD, Guo Y, Lenhard B, Fisher AG, Merkenschlager M, Rueckert D et al., 2023, A deep learning method for replicate-based analysis of chromosome conformation contacts using Siamese neural networks, Nature Communications, Vol: 14, ISSN: 2041-1723

The organisation of the genome in nuclear space is an important frontier of biology. Chromosome conformation capture methods such as Hi-C and Micro-C produce genome-wide chromatin contact maps that provide rich data containing quantitative and qualitative information about genome architecture. Most conventional approaches to genome-wide chromosome conformation capture data are limited to the analysis of pre-defined features, and may therefore miss important biological information. One constraint is that biologically important features can be masked by high levels of technical noise in the data. Here we introduce a replicate-based method for deep learning from chromatin conformation contact maps. Using a Siamese network configuration our approach learns to distinguish technical noise from biological variation and outperforms image similarity metrics across a range of biological systems. The features extracted from Hi-C maps after perturbation of cohesin and CTCF reflect the distinct biological functions of cohesin and CTCF in the formation of domains and boundaries, respectively. The learnt distance metrics are biologically meaningful, as they mirror the density of cohesin and CTCF binding. These properties make our method a powerful tool for the exploration of chromosome conformation capture data, such as Hi-C, Capture Hi-C, and Micro-C.

Journal article
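
The Siamese configuration described above pairs a shared encoder with a distance between the resulting embeddings; a minimal PyTorch sketch is given below (layer sizes, embedding dimension, and the Euclidean distance are illustrative assumptions, not the published architecture):

    import torch
    import torch.nn as nn

    class SiameseDistance(nn.Module):
        # Shared encoder applied to two contact-map patches; the distance between
        # the embeddings acts as a learned similarity metric.
        def __init__(self, emb_dim=64):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, emb_dim),
            )

        def forward(self, map_a, map_b):
            z_a, z_b = self.encoder(map_a), self.encoder(map_b)
            return torch.norm(z_a - z_b, dim=1)

Training on replicate pairs versus perturbed pairs (for example with a contrastive loss) then teaches this distance to reflect biological variation rather than technical noise.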

