Imperial College London

Professor Daniel Rueckert

Faculty of Engineering, Department of Computing

Professor of Visual Information Processing
 
 
 

Contact

 

+44 (0)20 7594 8333 | d.rueckert | Website

 
 

Location

 

568 Huxley Building, South Kensington Campus



Publications


1021 results found

Shen C, Roth HR, Hayashi Y, Oda M, Sato G, Miyamoto T, Rueckert D, Mori K et al., 2024, Anatomical attention can help to segment the dilated pancreatic duct in abdominal CT., Int J Comput Assist Radiol Surg, Vol: 19, Pages: 655-664

PURPOSE: Pancreatic duct dilation is associated with an increased risk of pancreatic cancer, the most lethal malignancy with the lowest 5-year relative survival rate. Automatic segmentation of the dilated pancreatic duct from contrast-enhanced CT scans would facilitate early diagnosis. However, pancreatic duct segmentation poses challenges due to its small anatomical structure and poor contrast in abdominal CT. In this work, we investigate an anatomical attention strategy to address this issue. METHODS: Our proposed anatomical attention strategy consists of two steps: pancreas localization and pancreatic duct segmentation. The coarse pancreatic mask segmentation is used to guide the fully convolutional networks (FCNs) to concentrate on the pancreas' anatomy and disregard unnecessary features. We further apply a multi-scale aggregation scheme to leverage the information from different scales. Moreover, we integrate the tubular structure enhancement as an additional input channel of the FCN. RESULTS: We performed extensive experiments on 30 cases of contrast-enhanced abdominal CT volumes. To evaluate the pancreatic duct segmentation performance, we employed four measurements, including the Dice similarity coefficient (DSC), sensitivity, normalized surface distance, and 95th percentile Hausdorff distance. The average DSC reaches 55.7%, surpassing other pancreatic duct segmentation methods that use only single-phase CT scans. CONCLUSIONS: We proposed an anatomical attention-based strategy for dilated pancreatic duct segmentation. Our proposed strategy significantly outperforms earlier approaches. The attention mechanism helps to focus on the pancreas region, while the enhancement of the tubular structure enables FCNs to capture the vessel-like structure. The proposed technique might be applied to other tube-like structure segmentation tasks within targeted anatomies.
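
The multi-channel input idea from this abstract can be illustrated with a short sketch. This is not the authors' implementation: the FCN is reduced to a toy two-layer network, and the coarse pancreas mask and tubular enhancement map are assumed to be precomputed and simply stacked (and used to gate the CT intensities) as extra input channels.

```python
# Minimal sketch (not the authors' code): feeding a coarse pancreas mask and a
# tubular-structure enhancement map to an FCN as extra input channels.
import torch
import torch.nn as nn

class MultiChannelFCN(nn.Module):
    """Toy stand-in for the segmentation FCN; input has 3 channels:
    CT intensity, coarse pancreas mask, tubular enhancement."""
    def __init__(self, in_channels: int = 3, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(16, num_classes, kernel_size=1),
        )

    def forward(self, ct, pancreas_mask, tubular_map):
        # "Anatomical attention" here amounts to gating the CT intensities with
        # the coarse mask and letting the network refine the duct within it.
        x = torch.cat([ct * pancreas_mask, pancreas_mask, tubular_map], dim=1)
        return self.net(x)

ct = torch.randn(1, 1, 32, 64, 64)        # B x C x D x H x W (toy volume)
mask = torch.ones_like(ct)                # coarse pancreas mask (placeholder)
tub = torch.randn(1, 1, 32, 64, 64)       # tubular enhancement (placeholder)
logits = MultiChannelFCN()(ct, mask, tub) # -> 1 x 2 x 32 x 64 x 64
```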

Journal article

Meng Q, Bai W, O'Regan DP, Rueckert D et al., 2024, DeepMesh: mesh-based cardiac motion tracking using deep learning, IEEE Transactions on Medical Imaging, Vol: 43, Pages: 1489-1500, ISSN: 0278-0062

3D motion estimation from cine cardiac magnetic resonance (CMR) images is important for the assessment of cardiac function and the diagnosis of cardiovascular diseases. Current state-of-the-art methods focus on estimating dense pixel-/voxel-wise motion fields in image space, which ignores the fact that motion estimation is only relevant and useful within the anatomical objects of interest, e.g., the heart. In this work, we model the heart as a 3D mesh consisting of epi- and endocardial surfaces. We propose a novel learning framework, DeepMesh, which propagates a template heart mesh to a subject space and estimates the 3D motion of the heart mesh from CMR images for individual subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an individual subject is first reconstructed from the template mesh. Mesh-based 3D motion fields with respect to the end-diastolic frame are then estimated from 2D short- and long-axis CMR images. By developing a differentiable mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information from multiple anatomical views for 3D mesh reconstruction and mesh motion estimation. The proposed method estimates vertex-wise displacement and thus maintains vertex correspondences between time frames, which is important for the quantitative assessment of cardiac function across different subjects and populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank. We focus on 3D motion estimation of the left ventricle in this work. Experimental results show that the proposed method quantitatively and qualitatively outperforms other image-based and mesh-based cardiac motion tracking methods.

Journal article

Tayebi Arasteh S, Ziller A, Kuhl C, Makowski M, Nebelung S, Braren R, Rueckert D, Truhn D, Kaissis G et al., 2024, Preserving fairness and diagnostic accuracy in private large-scale AI models for medical imaging., Commun Med (Lond), Vol: 4

BACKGROUND: Artificial intelligence (AI) models are increasingly used in the medical domain. However, as medical data is highly sensitive, special precautions to ensure its protection are required. The gold standard for privacy preservation is the introduction of differential privacy (DP) to model training. Prior work indicates that DP has negative implications on model accuracy and fairness, which are unacceptable in medicine and represent a main barrier to the widespread use of privacy-preserving techniques. In this work, we evaluated the effect of privacy-preserving training of AI models regarding accuracy and fairness compared to non-private training. METHODS: We used two datasets: (1) A large dataset (N = 193,311) of high quality clinical chest radiographs, and (2) a dataset (N = 1625) of 3D abdominal computed tomography (CT) images, with the task of classifying the presence of pancreatic ductal adenocarcinoma (PDAC). Both were retrospectively collected and manually labeled by experienced radiologists. We then compared non-private deep convolutional neural networks (CNNs) and privacy-preserving (DP) models with respect to privacy-utility trade-offs measured as area under the receiver operating characteristic curve (AUROC), and privacy-fairness trade-offs, measured as Pearson's r or Statistical Parity Difference. RESULTS: We find that, while the privacy-preserving training yields lower accuracy, it largely does not amplify discrimination against age, sex or co-morbidity. However, we find an indication that difficult diagnoses and subgroups suffer stronger performance hits in private training. CONCLUSIONS: Our study shows that - under the challenging realistic circumstances of a real-life clinical dataset - the privacy-preserving training of diagnostic deep learning models is possible with excellent diagnostic accuracy and fairness.
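
One of the fairness measures named in this abstract, the Statistical Parity Difference, is straightforward to compute. The sketch below is illustrative only; the group coding and the toy predictions are assumptions, not data from the study.

```python
# Illustrative only: Statistical Parity Difference (SPD) for binary predictions
# and a binary protected attribute.
import numpy as np

def statistical_parity_difference(y_pred, group):
    """SPD = P(y_hat = 1 | group A) - P(y_hat = 1 | group B)."""
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    p_a = y_pred[group == 0].mean()
    p_b = y_pred[group == 1].mean()
    return p_a - p_b

# Hypothetical predictions and a binary protected attribute (e.g. sex).
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0])
sex   = np.array([0, 0, 0, 0, 1, 1, 1, 1])
print(statistical_parity_difference(preds, sex))  # an SPD of 0 would indicate parity
```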

Journal article

Wiltgen T, McGinnis J, Schlaeger S, Kofler F, Voon C, Berthele A, Bischl D, Grundl L, Will N, Metz M, Schinz D, Sepp D, Prucker P, Schmitz-Koep B, Zimmer C, Menze B, Rueckert D, Hemmer B, Kirschke J, Mühlau M, Wiestler B et al., 2024, LST-AI: a Deep Learning Ensemble for Accurate MS Lesion Segmentation., medRxiv

Automated segmentation of brain white matter lesions is crucial for both clinical assessment and scientific research in multiple sclerosis (MS). Over a decade ago, we introduced an engineered lesion segmentation tool, LST. While recent lesion segmentation approaches have leveraged artificial intelligence (AI), they often remain proprietary and difficult to adopt. As an open-source tool, we present LST-AI, an advanced deep learning-based extension of LST that consists of an ensemble of three 3D-UNets. LST-AI explicitly addresses the imbalance between white matter (WM) lesions and non-lesioned WM. It employs a composite loss function incorporating binary cross-entropy and Tversky loss to improve segmentation of the highly heterogeneous MS lesions. We train the network ensemble on 491 MS pairs of T1w and FLAIR images, collected in-house from a 3T MRI scanner, and expert neuroradiologists manually segmented the utilized lesion maps for training. LST-AI additionally includes a lesion location annotation tool, labeling lesion location according to the 2017 McDonald criteria (periventricular, infratentorial, juxtacortical, subcortical). We conduct evaluations on 103 test cases consisting of publicly available data using the Anima segmentation validation tools and compare LST-AI with several publicly available lesion segmentation models. Our empirical analysis shows that LST-AI achieves superior performance compared to existing methods. Its Dice and F1 scores exceeded 0.62, outperforming LST, SAMSEG (Sequence Adaptive Multimodal SEGmentation), and the popular nnUNet framework, which all scored below 0.56. Notably, LST-AI demonstrated exceptional performance on the MSSEG-1 challenge dataset, an international WM lesion segmentation challenge, with a Dice score of 0.65 and an F1 score of 0.63-surpassing all other competing models at the time of the challenge. With increasing lesion volume, the lesion detection rate rapidly increased with a detection rate of >75% for lesions
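
The composite loss described above (binary cross-entropy plus Tversky loss) can be sketched in a few lines. The weighting of the two terms and the Tversky alpha/beta values below are assumptions for illustration, not the values used by LST-AI.

```python
# Sketch of a composite BCE + Tversky loss of the kind described in the abstract.
import torch
import torch.nn.functional as F

def tversky_loss(probs, target, alpha=0.3, beta=0.7, eps=1e-6):
    # Tversky index weighs false positives (alpha) and false negatives (beta)
    # differently, which helps with the strong lesion/background imbalance.
    tp = (probs * target).sum()
    fp = (probs * (1 - target)).sum()
    fn = ((1 - probs) * target).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

def composite_loss(logits, target, w_bce=0.5, w_tversky=0.5):
    probs = torch.sigmoid(logits)
    bce = F.binary_cross_entropy_with_logits(logits, target)
    return w_bce * bce + w_tversky * tversky_loss(probs, target)

logits = torch.randn(2, 1, 64, 64)                   # toy lesion logits
target = (torch.rand(2, 1, 64, 64) > 0.95).float()   # sparse toy lesion mask
print(composite_loss(logits, target).item())
```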

Journal article

Qiao M, Wang S, Qiu H, Marvao AD, O'Regan D, Rueckert D, Bai W et al., 2024, CHeart: a conditional spatio-temporal generative model for cardiac anatomy, IEEE Transactions on Medical Imaging, Vol: 43, Pages: 1259-1269, ISSN: 0278-0062

Two key questions in cardiac image analysis are to assess the anatomy and motion of the heart from images; and to understand how they are associated with non-imaging clinical factors such as gender, age and diseases. While the first question can often be addressed by image segmentation and motion tracking algorithms, our capability to model and answer the second question is still limited. In this work, we propose a novel conditional generative model to describe the 4D spatio-temporal anatomy of the heart and its interaction with non-imaging clinical factors. The clinical factors are integrated as the conditions of the generative modelling, which allows us to investigate how these factors influence the cardiac anatomy. We evaluate the model performance in mainly two tasks, anatomical sequence completion and sequence generation. The model achieves high performance in anatomical sequence completion, comparable to or outperforming other state-of-the-art generative models. In terms of sequence generation, given clinical conditions, the model can generate realistic synthetic 4D sequential anatomies that share similar distributions with the real data. We will share the code and the trained generative model at https://github.com/MengyunQ/CHeart.

Journal article

Pan J, Huang W, Rueckert D, Kustner T, Hammernik K et al., 2024, Reconstruction-driven motion estimation for motion-compensated MR CINE imaging., IEEE Trans Med Imaging, Vol: PP

In cardiac CINE, motion-compensated MR reconstruction (MCMR) is an effective approach to address highly undersampled acquisitions by incorporating motion information between frames. In this work, we propose a novel perspective for addressing the MCMR problem and a more integrated and efficient solution for the MCMR field. Contrary to state-of-the-art (SOTA) MCMR methods which break the original problem into two sub-optimization problems, i.e. motion estimation and reconstruction, we formulate this problem as a single entity with one single optimization. Our approach is unique in that the motion estimation is directly driven by the ultimate goal, reconstruction, but not by the canonical motion-warping loss (similarity measurement between motion-warped images and target images). We align the objectives of motion estimation and reconstruction, eliminating the drawbacks of artifact-affected motion estimation and, therefore, error-propagated reconstruction. Further, we can deliver high-quality reconstruction and realistic motion without applying any regularization/smoothness loss terms, circumventing the non-trivial weighting factor tuning. We evaluate our method on two datasets: 1) an in-house acquired 2D CINE dataset for the retrospective study and 2) the public OCMR cardiac dataset for the prospective study. The conducted experiments indicate that the proposed MCMR framework can deliver artifact-free motion estimation and high-quality MR images even for imaging accelerations up to 20x, outperforming SOTA non-MCMR and MCMR methods in both qualitative and quantitative evaluation across all experiments.

Journal article

Spieker V, Eichhorn H, Hammernik K, Rueckert D, Preibisch C, Karampinos DC, Schnabel JA et al., 2024, Deep Learning for Retrospective Motion Correction in MRI: A Comprehensive Review., IEEE Trans Med Imaging, Vol: 43, Pages: 846-859

Motion represents one of the major challenges in magnetic resonance imaging (MRI). Since the MR signal is acquired in frequency space, any motion of the imaged object leads to complex artefacts in the reconstructed image in addition to other MR imaging artefacts. Deep learning has been frequently proposed for motion correction at several stages of the reconstruction process. The wide range of MR acquisition sequences, anatomies and pathologies of interest, and motion patterns (rigid vs. deformable and random vs. regular) makes a comprehensive solution unlikely. To facilitate the transfer of ideas between different applications, this review provides a detailed overview of proposed methods for learning-based motion correction in MRI together with their common challenges and potentials. This review identifies differences and synergies in underlying data usage, architectures, training and evaluation strategies. We critically discuss general trends and outline future directions, with the aim to enhance interaction between different application areas and research fields.

Journal article

Rueckert T, Rueckert D, Palm C, 2024, Methods and datasets for segmentation of minimally invasive surgical instruments in endoscopic images and videos: A review of the state of the art., Comput Biol Med, Vol: 169

In the field of computer- and robot-assisted minimally invasive surgery, enormous progress has been made in recent years based on the recognition of surgical instruments in endoscopic images and videos. In particular, the determination of the position and type of instruments is of great interest. Current work involves both spatial and temporal information, with the idea that predicting the movement of surgical tools over time may improve the quality of the final segmentations. The provision of publicly available datasets has recently encouraged the development of new methods, mainly based on deep learning. In this review, we identify and characterize datasets used for method development and evaluation and quantify their frequency of use in the literature. We further present an overview of the current state of research regarding the segmentation and tracking of minimally invasive surgical instruments in endoscopic images and videos. The paper focuses on methods that work purely visually, without markers of any kind attached to the instruments, considering both single-frame semantic and instance segmentation approaches, as well as those that incorporate temporal information. The publications analyzed were identified through the platforms Google Scholar, Web of Science, and PubMed. The search terms used were "instrument segmentation", "instrument tracking", "surgical tool segmentation", and "surgical tool tracking", resulting in a total of 741 articles published between 01/2015 and 07/2023, of which 123 were included using systematic selection criteria. A discussion of the reviewed literature is provided, highlighting existing shortcomings and emphasizing the available potential for future developments.

Journal article

Küstner T, Hammernik K, Rueckert D, Hepp T, Gatidis S et al., 2024, Predictive uncertainty in deep learning-based MR image reconstruction using deep ensembles: Evaluation on the fastMRI data set., Magn Reson Med

PURPOSE: To estimate pixel-wise predictive uncertainty for deep learning-based MR image reconstruction and to examine the impact of domain shifts and architecture robustness. METHODS: Uncertainty prediction could provide a measure for robustness of deep learning (DL)-based MR image reconstruction from undersampled data. DL methods bear the risk of inducing reconstruction errors like in-painting of unrealistic structures or missing pathologies. These errors may be obscured by visual realism of DL reconstruction and thus remain undiscovered. Furthermore, most methods are task-agnostic and not well calibrated to domain shifts. We propose a strategy that estimates aleatoric (data) and epistemic (model) uncertainty, which entails training a deep ensemble (epistemic) with nonnegative log-likelihood (aleatoric) loss in addition to the conventionally applied loss terms. The proposed procedure can be paired with any DL reconstruction, enabling investigations of their predictive uncertainties on a pixel level. Five different architectures were investigated on the fastMRI database. The impact on the examined uncertainty of in-distributional and out-of-distributional data with changes to undersampling pattern, imaging contrast, imaging orientation, anatomy, and pathology were explored. RESULTS: Predictive uncertainty could be captured and showed good correlation to normalized mean squared error. Uncertainty was primarily focused along the aliased anatomies and on hyperintense and hypointense regions. The proposed uncertainty measure was able to detect disease prevalence shifts. Distinct predictive uncertainty patterns were observed for changing network architectures. CONCLUSION: The proposed approach enables aleatoric and epistemic uncertainty prediction for DL-based MR reconstruction with an interpretable examination on a pixel level.
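
A minimal sketch of the uncertainty decomposition described here: each ensemble member predicts a per-pixel mean and log-variance, a Gaussian negative log-likelihood stands in for the heteroscedastic (aleatoric) loss the authors describe, and the spread of the means across members gives the epistemic part. The tiny network and all hyperparameters are placeholders, not the paper's architecture.

```python
# Sketch, not the paper's code: aleatoric variance from an NLL head, epistemic
# variance from disagreement across ensemble members.
import torch
import torch.nn as nn

class ReconWithVariance(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Conv2d(1, 2, kernel_size=3, padding=1)  # toy network

    def forward(self, x):
        mean, log_var = self.backbone(x).chunk(2, dim=1)
        return mean, log_var

def gaussian_nll(mean, log_var, target):
    # Heteroscedastic NLL: the model learns to inflate variance where it errs.
    return (0.5 * torch.exp(-log_var) * (target - mean) ** 2 + 0.5 * log_var).mean()

ensemble = [ReconWithVariance() for _ in range(5)]
x = torch.randn(1, 1, 64, 64)                 # toy zero-filled undersampled input
means, log_vars = zip(*[m(x) for m in ensemble])
epistemic = torch.stack(means).var(dim=0)     # spread across ensemble members
aleatoric = torch.stack(log_vars).exp().mean(dim=0)
loss = gaussian_nll(*ensemble[0](x), torch.randn_like(x))  # one member's NLL (toy target)
```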

Journal article

Kreitner L, Paetzold JC, Rauch N, Chen C, Hagag AM, Fayed AE, Sivaprasad S, Rausch S, Weichsel J, Menze BH, Harders M, Knier B, Rueckert D, Menten MJ et al., 2024, Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations., IEEE Trans Med Imaging, Vol: PP

Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.

Journal article

Tänzer M, Wang F, Qiao M, Bai W, Rueckert D, Yang G, Nielles-Vallespin S et al., 2024, T1/T2 Relaxation Temporal Modelling from Accelerated Acquisitions Using a Latent Transformer, Pages: 293-302, ISBN: 9783031524479

Quantitative cardiac magnetic resonance T1 and T2 mapping enable myocardial tissue characterisation but the lengthy scan times restrict their widespread clinical application. We propose a deep learning method that incorporates a time dependency Latent Transformer module to model relationships between parameterised time frames for improved reconstruction from undersampled data. The module, implemented as a multi-resolution sequence-to-sequence transformer, is integrated into an encoder-decoder architecture to leverage the inherent temporal correlations in relaxation processes. The presented results for accelerated T1 and T2 mapping show the model recovers maps with higher fidelity by explicit incorporation of time dynamics. This work demonstrates the importance of temporal modelling for artifact-free reconstruction in quantitative MRI.

Book chapter

Hinterwimmer F, Serena RS, Wilhelm N, Breden S, Consalvo S, Seidl F, Juestel D, Burgkart RHH, Woertler K, von Eisenhart-Rothe R, Neumann J, Rueckert D et al., 2024, Recommender-based bone tumour classification with radiographs—a link to the past, European Radiology, ISSN: 0938-7994

Objectives: To develop an algorithm to link undiagnosed patients to previous patient histories based on radiographs, and simultaneous classification of multiple bone tumours to enable early and specific diagnosis. Materials and methods: For this retrospective study, data from 2000 to 2021 were curated from our database by two orthopaedic surgeons, a radiologist and a data scientist. Patients with complete clinical and pre-therapy radiographic data were eligible. To ensure feasibility, the ten most frequent primary tumour entities, confirmed histologically or by tumour board decision, were included. We implemented a ResNet and transformer model to establish baseline results. Our method extracts image features using deep learning and then clusters the k most similar images to the target image using a hash-based nearest-neighbour recommender approach that performs simultaneous classification by majority voting. The results were evaluated with precision-at-k, accuracy, precision and recall. Discrete parameters were described by incidence and percentage ratios. For continuous parameters, based on a normality test, respective statistical measures were calculated. Results: Included were data from 809 patients (1792 radiographs; mean age 33.73 ± 18.65, range 3–89 years; 443 men), with Osteochondroma (28.31%) and Ewing sarcoma (1.11%) as the most and least common entities, respectively. The dataset was split into training (80%) and test subsets (20%). For k = 3, our model achieved the highest mean accuracy, precision and recall (92.86%, 92.86% and 34.08%), significantly outperforming state-of-the-art models (54.10%, 55.57%, 19.85% and 62.80%, 61.33%, 23.05%). Conclusion: Our novel approach surpasses current models in tumour classification and links to past patient data, leveraging expert insights. Clinical relevance statement: The proposed algorithm could serve as a vital support tool for clinicians and general practitioners with limited experience in bone tumou
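
The retrieve-then-vote step can be illustrated independently of the paper's hashing and feature extraction details. In the sketch below, plain cosine similarity replaces the hash-based index, and the feature vectors and labels are randomly generated stand-ins.

```python
# Illustrative sketch: find the k most similar past cases by feature similarity
# and classify the query by majority vote over their labels.
import numpy as np
from collections import Counter

def classify_by_retrieval(query_feat, db_feats, db_labels, k=3):
    db = db_feats / np.linalg.norm(db_feats, axis=1, keepdims=True)
    q = query_feat / np.linalg.norm(query_feat)
    sims = db @ q
    top_k = np.argsort(-sims)[:k]                   # k most similar past cases
    majority = Counter(db_labels[i] for i in top_k).most_common(1)[0][0]
    return majority, top_k                          # predicted entity + links to past cases

rng = np.random.default_rng(0)
db_feats = rng.normal(size=(100, 128))              # toy features from a CNN backbone
db_labels = rng.integers(0, 10, size=100)           # 10 tumour entities (toy labels)
label, neighbours = classify_by_retrieval(rng.normal(size=128), db_feats, db_labels)
```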

Journal article

Bak M, Madai VI, Celi LA, Kaissis GA, Cornet R, Maris M, Rueckert D, Buyx A, McLennan S et al., 2024, Federated learning is not a cure-all for data ethics, Nature Machine Intelligence

Although federated learning is often seen as a promising solution to allow AI innovation while addressing privacy concerns, we argue that this technology does not fix all underlying data ethics concerns. Benefiting from federated learning in digital health requires acknowledgement of its limitations.

Journal article

Rueckert T, Rieder M, Feussner H, Wilhelm D, Rueckert D, Palm C et al., 2024, Smoke Classification in Laparoscopic Cholecystectomy Videos Incorporating Spatio-temporal Information, Pages: 298-303, ISSN: 1431-472X

Heavy smoke development represents an important challenge for operating physicians during laparoscopic procedures and can potentially affect the success of an intervention due to reduced visibility and orientation. Reliable and accurate recognition of smoke is therefore a prerequisite for the use of downstream systems such as automated smoke evacuation systems. Current approaches distinguish between non-smoked and smoked frames but often ignore the temporal context inherent in endoscopic video data. In this work, we therefore present a method that utilizes the pixel-wise displacement from randomly sampled images to the preceding frames determined using the optical flow algorithm by providing the transformed magnitude of the displacement as an additional input to the network. Further, we incorporate the temporal context at evaluation time by applying an exponential moving average on the estimated class probabilities of the model output to obtain more stable and robust results over time. We evaluate our method on two convolutional-based and one state-of-the-art transformer architecture and show improvements in the classification results over a baseline approach, regardless of the network used.
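
The temporal smoothing step is easy to sketch: an exponential moving average over the per-frame smoke probabilities. The smoothing factor below is an assumption, not the value used in the paper.

```python
# Minimal sketch of EMA smoothing of per-frame class probabilities.
import numpy as np

def ema_smooth(probabilities, alpha=0.3):
    """Temporally smooth per-frame smoke probabilities with an EMA."""
    smoothed = [probabilities[0]]
    for p in probabilities[1:]:
        smoothed.append(alpha * p + (1 - alpha) * smoothed[-1])
    return np.array(smoothed)

frame_probs = np.array([0.1, 0.2, 0.9, 0.85, 0.2, 0.88, 0.92])  # raw model output
print(ema_smooth(frame_probs))  # fewer spurious flips between smoke / no-smoke
```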

Conference paper

Scholz D, Wiestler B, Rueckert D, Menten MJ et al., 2024, Metrics to Quantify Global Consistency in Synthetic Medical Images, Pages: 25-34, ISSN: 0302-9743

Image synthesis is increasingly being adopted in medical image processing, for example for data augmentation or inter-modality image translation. In these critical applications, the generated images must fulfill a high standard of biological correctness. A particular requirement for these images is global consistency, i.e. an image being overall coherent and structured so that all parts of the image fit together in a realistic and meaningful way. Yet, established image quality metrics do not explicitly quantify this property of synthetic images. In this work, we introduce two metrics that can measure the global consistency of synthetic images on a per-image basis. To measure the global consistency, we presume that a realistic image exhibits consistent properties, e.g., a person’s body fat in a whole-body MRI, throughout the depicted object or scene. Hence, we quantify global consistency by predicting and comparing explicit attributes of images on patches using supervised trained neural networks. Next, we adapt this strategy to an unlabeled setting by measuring the similarity of implicit image features predicted by a self-supervised trained network. Our results demonstrate that predicting explicit attributes of synthetic images on patches can distinguish globally consistent from inconsistent images. Implicit representations of images are less sensitive to assess global consistency but are still serviceable when labeled data is unavailable. Compared to established metrics, such as the FID, our method can explicitly measure global consistency on a per-image basis, enabling a dedicated analysis of the biological plausibility of single synthetic images.

Conference paper

Muffoletto M, Xu H, Xu Y, Williams SE, Williams MC, Kunze KP, Neji R, Niederer SA, Rueckert D, Young AA et al., 2024, Neural Implicit Functions for 3D Shape Reconstruction from Standard Cardiovascular Magnetic Resonance Views, Pages: 130-139, ISSN: 0302-9743

In cardiovascular magnetic resonance (CMR), typical acquisitions often involve a limited number of short and long axis slices. However, reconstructing the 3D chambers is crucial for accurately quantifying heart geometry and assessing cardiac function. Neural Implicit Representations (NIR) learn implicit functions for anatomical shapes from sparse measurements by leveraging a learned continuous shape prior, without the need for high-resolution ground truth data. In this study, we utilized coronary computed tomography (CCTA) images to simulate CMR sparse label maps of two types: standard (10 mm spaced short axis and 2 long axis slices) and 3-slice (single short and 2 long axis slices). Whole heart NIR reconstructions were compared to a Label Completion U-Net (LC-U-Net) network trained on the dense segmentations. The findings indicate that the LC-U-Net is not robust when tested with fewer slices than those used during training. In contrast, the NIR consistently achieved Dice scores above 0.9 for the left ventricle, left ventricle myocardium, and right ventricle labels, irrespective of changes in the training or test set. Predictions from standard views achieved average Dice scores across all labels of 0.84±0.03 and 0.88±0.03, when training on 3-slice and standard data respectively. In conclusion, this study presents promising results for 3D shape reconstruction invariant to slice position and orientation without requiring full resolution training data, offering a robust and accurate method for cardiac chamber reconstruction in CMR.

Conference paper

Haft PT, Huang W, Cruz G, Rueckert D, Zimmer VA, Hammernik K et al., 2024, Neural Implicit k-space with Trainable Periodic Activation Functions for Cardiac MR Imaging, Pages: 82-87, ISBN: 9783658440367

In MRI reconstruction, neural implicit k-space (NIK) representation maps spatial frequencies to k-space intensity values using an MLP with periodic activation functions. However, the choice of hyperparameters for periodic activation functions is challenging and influences training stability. In this work, we introduce and study the effectiveness of trainable (non-)periodic activation functions for NIK in the context of non-Cartesian Cardiac MRI. Evaluated on 42 radially sampled datasets from 6 subjects, NIKs with the proposed trainable activation functions outperform qualitatively and quantitatively other state-of-the-art reconstruction methods, including NIK with fixed periodic activation functions.
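
The core idea of a trainable periodic activation can be sketched as a sine nonlinearity whose frequency is a learnable parameter inside a small coordinate MLP. The layer sizes, the initial frequency, and the (kx, ky, t) input convention below are illustrative assumptions rather than the NIK configuration used in the paper.

```python
# Sketch of a sine activation with a trainable frequency inside a coordinate MLP.
import torch
import torch.nn as nn

class TrainableSine(nn.Module):
    def __init__(self, omega_0: float = 30.0):
        super().__init__()
        self.omega = nn.Parameter(torch.tensor(omega_0))  # frequency learned during training

    def forward(self, x):
        return torch.sin(self.omega * x)

# Toy NIK-style MLP: maps k-space coordinates (kx, ky, t) to a complex intensity
# represented as (real, imag).
nik = nn.Sequential(
    nn.Linear(3, 128), TrainableSine(),
    nn.Linear(128, 128), TrainableSine(),
    nn.Linear(128, 2),
)
coords = torch.rand(1024, 3) * 2 - 1   # normalised spatial-frequency / time samples
kspace = nik(coords)                   # predicted k-space values
```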

Book chapter

Mueller TT, Usynin D, Paetzold JC, Braren R, Rueckert D, Kaissis G et al., 2024, Differential Privacy Guarantees for Analytics and Machine Learning on Graphs: A Survey of Results, Journal of Privacy and Confidentiality, Vol: 14

We study differential privacy (DP) in the context of graph-structured data and discuss its formulations and applications to the publication of graphs and their associated statistics, graph generation methods, and machine learning on graph-based data, including graph neural networks (GNNs). Interpreting DP guarantees in the context of graph-structured data can be challenging, as individual data points are interconnected (often non-linearly or sparsely). This differentiates graph databases from tabular databases, which are usually used in DP, and complicates related concepts like the derivation of per-sample gradients in GNNs. The problem is exacerbated by an absence of a single, well-established formulation of DP in graph settings. A lack of prior systematisation work motivated us to study graph-based learning from a privacy perspective. In this work, we systematise different formulations of DP on graphs, and discuss challenges and promising applications, including the GNN domain. We compare and separate works into methods that privately estimate graph data (either by statistical analysis or using GNNs), and methods that aim at generating new graph data. We conclude our work with a discussion of open questions and potential directions for further research in this area.

Journal article

Zimmer VA, Hammernik K, Sideri-Lampretsa V, Huang W, Reithmeir A, Rueckert D, Schnabel JA et al., 2024, Towards Generalised Neural Implicit Representations for Image Registration, Pages: 45-55, ISSN: 0302-9743

Neural implicit representations (NIRs) make it possible to generate and parametrize the transformation for image registration in a continuous way. By design, these representations are image-pair-specific, meaning that for each signal a new multi-layer perceptron has to be trained. In this work, we investigate for the first time the potential of existing NIR generalisation methods for image registration and propose novel methods for the registration of a group of image pairs using NIRs. To exploit the generalisation potential of NIRs, we encode the fixed and moving image volumes to latent representations, which are then used to condition or modulate the NIR. Using ablation studies on a 3D benchmark dataset, we show that our methods are able to generalise to a set of image pairs with a performance comparable to pairwise registration using NIRs when trained on N=10 and N=120 datasets. Our results demonstrate the potential of generalised NIRs for 3D deformable image registration.

Conference paper

Lagogiannis I, Meissen F, Kaissis G, Rueckert D et al., 2024, Unsupervised Pathology Detection: A Deep Dive Into the State of the Art., IEEE Trans Med Imaging, Vol: 43, Pages: 241-252

Deep unsupervised approaches are gathering increased attention for applications such as pathology detection and segmentation in medical images since they promise to alleviate the need for large labeled datasets and are more generalizable than their supervised counterparts in detecting any kind of rare pathology. As the Unsupervised Anomaly Detection (UAD) literature continuously grows and new paradigms emerge, it is vital to continuously evaluate and benchmark new methods in a common framework, in order to reassess the state-of-the-art (SOTA) and identify promising research directions. To this end, we evaluate a diverse selection of cutting-edge UAD methods on multiple medical datasets, comparing them against the established SOTA in UAD for brain MRI. Our experiments demonstrate that newly developed feature-modeling methods from the industrial and medical literature achieve increased performance compared to previous work and set the new SOTA in a variety of modalities and datasets. Additionally, we show that such methods are capable of benefiting from recently developed self-supervised pre-training algorithms, further increasing their performance. Finally, we perform a series of experiments in order to gain further insights into some unique characteristics of selected models and datasets. Our code can be found under https://github.com/iolag/UPD_study/.

Journal article

Pan J, Hamdi M, Huang W, Hammernik K, Kuestner T, Rueckert D et al., 2024, Unrolled and rapid motion-compensated reconstruction for cardiac CINE MRI., Med Image Anal, Vol: 91

In recent years, Motion-Compensated MR reconstruction (MCMR) has emerged as a promising approach for cardiac MR (CMR) imaging reconstruction. MCMR estimates cardiac motion and incorporates this information in the reconstruction. However, two obstacles prevent the practical use of MCMR in clinical situations: First, inaccurate motion estimation often leads to inferior CMR reconstruction results. Second, the motion estimation frequently leads to a long processing time for the reconstruction. In this work, we propose a learning-based and unrolled MCMR framework that can perform precise and rapid CMR reconstruction. We achieve accurate reconstruction by developing a joint optimization between the motion estimation and reconstruction, in which a deep learning-based motion estimation framework is unrolled within an iterative optimization procedure. With progressive iterations, a mutually beneficial interaction can be established in which the reconstruction quality is improved with more accurate motion estimation. Further, we propose a groupwise motion estimation framework to speed up the MCMR process. A registration template based on the cardiac sequence average is introduced, while the motion estimation is conducted between the cardiac frames and the template. By applying this framework, cardiac sequence registration can be accomplished with linear time complexity. Experiments on 43 in-house acquired 2D CINE datasets indicate that the proposed unrolled MCMR framework can deliver artifact-free motion estimation and high-quality CMR reconstruction even for imaging acceleration rates up to 20x. We compare our approach with state-of-the-art reconstruction methods and it outperforms them quantitatively and qualitatively in all adapted metrics across all acceleration rates.
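
The groupwise registration idea can be sketched as follows: build a template as the temporal average of the CINE sequence and estimate motion only between each frame and that template, so the number of registrations grows linearly with the number of frames. The placeholder `motion_net` below stands in for the learned motion estimation network; it is not the authors' model.

```python
# Sketch of groupwise frame-to-template motion estimation (not the authors' code).
import torch
import torch.nn as nn

motion_net = nn.Conv2d(2, 2, kernel_size=3, padding=1)  # placeholder registration net

def groupwise_motion(frames):
    """frames: (T, 1, H, W) cardiac CINE sequence -> (T, 2, H, W) motion fields."""
    template = frames.mean(dim=0, keepdim=True)              # sequence-average template
    fields = []
    for t in range(frames.shape[0]):
        pair = torch.cat([frames[t:t + 1], template], dim=1)  # frame + template as input
        fields.append(motion_net(pair))                       # frame-to-template motion
    return torch.cat(fields, dim=0)

cine = torch.randn(25, 1, 128, 128)    # toy 25-frame CINE sequence
flows = groupwise_motion(cine)         # 25 x 2 x 128 x 128 displacement fields
```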

Journal article

Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard N-E, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D et al., 2024, Erratum for: How AI May Transform Musculoskeletal Imaging., Radiology, Vol: 310

Journal article

Åkerlund CAI, Holst A, Bhattacharyay S, Stocchetti N, Steyerberg E, Smielewski P, Menon DK, Ercole A, Nelson DW, CENTER-TBI participants and investigators et al., 2024, Clinical descriptors of disease trajectories in patients with traumatic brain injury in the intensive care unit (CENTER-TBI): a multicentre observational cohort study., Lancet Neurol, Vol: 23, Pages: 71-80

BACKGROUND: Patients with traumatic brain injury are a heterogeneous population, and the most severely injured individuals are often treated in an intensive care unit (ICU). The primary injury at impact, and the harmful secondary events that can occur during the first week of the ICU stay, will affect outcome in this vulnerable group of patients. We aimed to identify clinical variables that might distinguish disease trajectories among patients with traumatic brain injury admitted to the ICU. METHODS: We used data from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) prospective observational cohort study. We included patients aged 18 years or older with traumatic brain injury who were admitted to the ICU at one of the 65 CENTER-TBI participating centres, which range from large academic hospitals to small rural hospitals. For every patient, we obtained pre-injury data and injury features, clinical characteristics on admission, demographics, physiological parameters, laboratory features, brain biomarkers (ubiquitin carboxy-terminal hydrolase L1 [UCH-L1], S100 calcium-binding protein B [S100B], tau, neurofilament light [NFL], glial fibrillary acidic protein [GFAP], and neuron-specific enolase [NSE]), and information about intracranial pressure lowering treatments during the first 7 days of ICU stay. To identify clinical variables that might distinguish disease trajectories, we applied a novel clustering method to these data, which was based on a mixture of probabilistic graph models with a Markov chain extension. The relation of clusters to the extended Glasgow Outcome Scale (GOS-E) was investigated. FINDINGS: Between Dec 19, 2014, and Dec 17, 2017, 4509 patients with traumatic brain injury were recruited into the CENTER-TBI core dataset, of whom 1728 were eligible for this analysis. Glucose variation (defined as the difference between daily maximum and minimum glucose concentrations) and brain biomarkers (S100B, NSE

Journal article

Guermazi A, Omoumi P, Tordjman M, Fritz J, Kijowski R, Regnard N-E, Carrino J, Kahn CE, Knoll F, Rueckert D, Roemer FW, Hayashi D et al., 2024, How AI May Transform Musculoskeletal Imaging., Radiology, Vol: 310

While musculoskeletal imaging volumes are increasing, there is a relative shortage of subspecialized musculoskeletal radiologists to interpret the studies. Will artificial intelligence (AI) be the solution? For AI to be the solution, the wide implementation of AI-supported data acquisition methods in clinical practice requires establishing trusted and reliable results. This implementation will demand close collaboration between core AI researchers and clinical radiologists. Upon successful clinical implementation, a wide variety of AI-based tools can improve the musculoskeletal radiologist's workflow by triaging imaging examinations, helping with image interpretation, and decreasing the reporting time. Additional AI applications may also be helpful for business, education, and research purposes if successfully integrated into the daily practice of musculoskeletal radiology. The question is not whether AI will replace radiologists, but rather how musculoskeletal radiologists can take advantage of AI to enhance their expert capabilities.

Journal article

Hagag AM, Kaye R, Hoang V, Riedl S, Anders P, Stuart B, Traber G, Appenzeller-Herzog C, Schmidt-Erfurth U, Bogunovic H, Scholl HP, Prevost T, Fritsche L, Rueckert D, Sivaprasad S, Lotery AJ et al., 2024, Systematic review of prognostic factors associated with progression to late age-related macular degeneration: Pinnacle study report 2., Surv Ophthalmol, Vol: 69, Pages: 165-172

There is a need to identify accurately prognostic factors that determine the progression of intermediate to late-stage age-related macular degeneration (AMD). Currently, clinicians cannot provide individualised prognoses of disease progression. Moreover, enriching clinical trials with rapid progressors may facilitate delivery of shorter intervention trials aimed at delaying or preventing progression to late AMD. Thus, we performed a systematic review to outline and assess the accuracy of reporting prognostic factors for the progression of intermediate to late AMD. A meta-analysis was originally planned. Synonyms of AMD and disease progression were used to search Medline and EMBASE for articles investigating AMD progression published between 1991 and 2021. Initial search results included 3229 articles. Predetermined eligibility criteria were employed to systematically screen papers by two reviewers working independently and in duplicate. Quality appraisal and data extraction were performed by a team of reviewers. Only 6 studies met the eligibility criteria. Based on these articles, exploratory prognostic factors for progression of intermediate to late AMD included phenotypic features (e.g. location and size of drusen), age, smoking status, ocular and systemic co-morbidities, race, and genotype. Overall, study heterogeneity precluded reporting by forest plots and meta-analysis. The most commonly reported prognostic factors were baseline drusen volume/size, which was associated with progression to neovascular AMD, and outer retinal thinning linked to progression to geographic atrophy. In conclusion, poor methodological quality of included studies warrants cautious interpretation of our findings. Rigorous studies are warranted to provide robust evidence in the future.

Journal article

Tänzer M, Ferreira P, Scott A, Khalique Z, Dwornik M, Rajakulasingam R, de Silva R, Pennell D, Yang G, Rueckert D, Nielles-Vallespin S et al., 2024, Correction to: Faster Diffusion Cardiac MRI with Deep Learning-Based Breath Hold Reduction, Medical Image Understanding and Analysis, Publisher: Springer International Publishing, Pages: C1-C1, ISBN: 9783031120527

Book chapter

Martens E, Haase H-U, Mastella G, Henkel A, Spinner C, Hahn F, Zou C, Fava Sanches A, Allescher J, Heid D, Strauss E, Maier M-M, Lachmann M, Schmidt G, Westphal D, Haufe T, Federle D, Rueckert D, Boeker M, Becker M, Laugwitz K-L, Steger A, Müller A et al., 2024, Smart hospital: achieving interoperability and raw data collection from medical devices in clinical routine., Front Digit Health, Vol: 6

INTRODUCTION: Today, modern technology is used to diagnose and treat cardiovascular disease. These medical devices provide exact measures and raw data such as imaging data or biosignals. So far, the broad integration of these health data into hospital information technology structures, especially in Germany, is lacking, and if data integration takes place, only non-evaluable findings are usually integrated into the hospital information technology structures. A comprehensive integration of raw data and structured medical information has not yet been established. The aim of this project was to design and implement an interoperable database (cardio-vascular-information-system, CVIS) for the automated integration of all medical device data (parameters and raw data) in cardio-vascular medicine. METHODS: The CVIS serves as a data integration and preparation system at the interface between the various devices and the hospital IT infrastructure. In our project, we were able to establish a database with integration of proprietary device interfaces, which could be integrated into the electronic health record (EHR) with various HL7 and web interfaces. RESULTS: In the period between 1.7.2020 and 30.6.2022, the data integrated into this database were evaluated. During this time, 114,858 patients were automatically included in the database and medical data of 50,295 of them were entered. For technical examinations, more than 4.5 million readings (an average of 28.5 per examination) and 684,696 image data and raw signals (28,935 ECG files, 655,761 structured reports, 91,113 x-ray objects, 559,648 ultrasound objects in 54 different examination types, 5,000 endoscopy objects) were integrated into the database. Over 10.2 million bidirectional HL7 messages (approximately 14,000/day) were successfully processed. 98,458 documents were transferred to the central document management system, 55,154 materials (average 7.77 per order) were recorded and stored in the database, 21,196 diagnoses a

Journal article

Marcus A, Bentley P, Rueckert D, 2023, Concurrent ischemic lesion age estimation and segmentation of CT brain using a transformer-based network, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 3463-3473, ISSN: 0278-0062

The cornerstone of stroke care is expedient management that varies depending on the time since stroke onset. Consequently, clinical decision making is centered on accurate knowledge of timing and often requires a radiologist to interpret Computed Tomography (CT) of the brain to confirm the occurrence and age of an event. These tasks are particularly challenging due to the subtle expression of acute ischemic lesions and the dynamic nature of their appearance. Automation efforts have not yet applied deep learning to estimate lesion age and treated these two tasks independently, so, have overlooked their inherent complementary relationship. To leverage this, we propose a novel end-to-end multi-task transformer-based network optimized for concurrent segmentation and age estimation of cerebral ischemic lesions. By utilizing gated positional self-attention and CT-specific data augmentation, the proposed method can capture long-range spatial dependencies while maintaining its ability to be trained from scratch under low-data regimes commonly found in medical imaging. Furthermore, to better combine multiple predictions, we incorporate uncertainty by utilizing quantile loss to facilitate estimating a probability density function of lesion age. The effectiveness of our model is then extensively evaluated on a clinical dataset consisting of 776 CT images from two medical centers. Experimental results demonstrate that our method obtains promising performance, with an area under the curve (AUC) of 0.933 for classifying lesion ages ≤4.5 hours compared to 0.858 using a conventional approach, and outperforms task-specific state-of-the-art algorithms.
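
The quantile loss mentioned above (often called the pinball loss) is what lets the network output a distribution over lesion age rather than a single estimate. The quantile levels and toy values below are assumptions for illustration, not the paper's configuration.

```python
# Sketch of a quantile (pinball) loss over predicted lesion-age quantiles.
import torch

def quantile_loss(pred, target, quantiles=(0.1, 0.5, 0.9)):
    """pred: (B, Q) predicted age quantiles; target: (B,) observed age in hours."""
    losses = []
    for i, q in enumerate(quantiles):
        err = target - pred[:, i]
        # Asymmetric penalty: over- and under-estimation are weighted by q and 1-q.
        losses.append(torch.max(q * err, (q - 1) * err).mean())
    return sum(losses) / len(quantiles)

pred = torch.tensor([[2.0, 4.0, 7.0]])   # hypothetical 10/50/90% age quantiles
target = torch.tensor([4.5])             # true lesion age (hours, toy value)
print(quantile_loss(pred, target).item())
```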

Journal article

Hölzl FA, Rueckert D, Kaissis G, 2023, Equivariant Differentially Private Deep Learning: Why DP-SGD Needs Sparser Models, Pages: 11-22

Differentially Private Stochastic Gradient Descent (DP-SGD) limits the amount of private information deep learning models can memorize during training. This is achieved by clipping and adding noise to the model's gradients, and thus networks with more parameters require proportionally stronger perturbation. As a result, large models have difficulties learning useful information, rendering training with DP-SGD exceedingly difficult on more challenging training tasks. Recent research has focused on combating this challenge through training adaptations such as heavy data augmentation and large batch sizes. However, these techniques further increase the computational overhead of DP-SGD and reduce its practical applicability. In this work, we propose using the principle of sparse model design to solve precisely such complex tasks with fewer parameters, higher accuracy, and in less time, thus serving as a promising direction for DP-SGD. We achieve such sparsity by design by introducing equivariant convolutional networks for model training with Differential Privacy. Using equivariant networks, we show that small and efficient architecture design can outperform the current state of the art with substantially lower computational requirements. On CIFAR-10, we achieve an increase of up to 9% in accuracy while reducing the computation time by more than 85%. Our results are a step towards efficient model architectures that make optimal use of their parameters and bridge the privacy-utility gap between private and non-private deep learning for computer vision.
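
For context, the clip-and-noise step at the core of DP-SGD is sketched below; it makes explicit why the perturbation grows with the number of parameters. The clipping norm and noise multiplier are illustrative, and a real implementation would rely on a DP library rather than this toy function.

```python
# Sketch of the per-sample clip-and-noise step of DP-SGD (illustrative values).
import torch

def dp_sgd_step(per_sample_grads, clip_norm=1.0, noise_multiplier=1.1):
    """per_sample_grads: (B, P) flattened per-example gradients."""
    norms = per_sample_grads.norm(dim=1, keepdim=True)
    clipped = per_sample_grads * torch.clamp(clip_norm / (norms + 1e-12), max=1.0)
    summed = clipped.sum(dim=0)
    noise = torch.randn_like(summed) * noise_multiplier * clip_norm
    # Noise is added to the clipped sum; with more parameters P the relative
    # per-parameter perturbation grows, which is why sparser models help.
    return (summed + noise) / per_sample_grads.shape[0]

grads = torch.randn(32, 10_000)   # toy batch of per-sample gradients
update = dp_sgd_step(grads)
```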

Conference paper

Nasirigerdeh R, Rueckert D, Kaissis G, 2023, Utility-preserving Federated Learning, Pages: 55-65

We investigate the concept of utility-preserving federated learning (UPFL) in the context of deep neural networks. We theoretically prove and experimentally validate that UPFL achieves the same accuracy as centralized training independent of the data distribution across the clients. We demonstrate that UPFL can fully take advantage of the momentum and weight decay techniques compared to centralized training, but it incurs substantial communication overhead. Ordinary federated learning, on the other hand, provides much higher communication efficiency, but it can partially benefit from the aforementioned techniques to improve utility. Given that, we propose a method called weighted gradient accumulation to gain more benefit from the momentum and weight decay akin to UPFL, while providing practical communication efficiency similar to ordinary federated learning.
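
A rough sketch of sample-count-weighted gradient aggregation, in the spirit of the weighted gradient accumulation described above; the paper's exact accumulation scheme may differ, so treat the weighting below as an assumption.

```python
# Sketch: aggregate client gradients weighted by their sample counts before the
# server applies momentum and weight decay once on the combined gradient.
import torch

def aggregate_gradients(client_grads, client_sizes):
    """client_grads: list of flattened gradients; client_sizes: samples per client."""
    total = sum(client_sizes)
    weighted = [g * (n / total) for g, n in zip(client_grads, client_sizes)]
    return torch.stack(weighted).sum(dim=0)

grads = [torch.randn(1000) for _ in range(3)]    # toy gradients from 3 clients
sizes = [200, 50, 750]                           # unbalanced data distribution
global_grad = aggregate_gradients(grads, sizes)  # applied once on the server
```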

Conference paper

