Imperial College London

Professor Daniel Rueckert

Faculty of Engineering, Department of Computing

Head of Department of Computing

Contact

+44 (0)20 7594 8333 | d.rueckert | Website

Location

568, Huxley Building, South Kensington Campus

Publications

698 results found

Bruun M, Frederiksen KS, Rhodius-Meester HFM, Baroni M, Gjerum L, Koikkalainen J, Urhemaa T, Tolonen A, van Gils M, Tong T, Guerrero R, Rueckert D, Dyremose N, Andersen BB, Simonsen AH, Lemstra A, Hallikainen M, Kurl S, Herukka S-K, Remes AM, Waldemar G, Soininen H, Mecocci P, van der Flier WM, Lötjönen J, Hasselbalch SG et al., 2019, Impact of a Clinical Decision Support Tool on Dementia Diagnostics in Memory Clinics: The PredictND Validation Study, Curr Alzheimer Res, Vol: 16, Pages: 91-101

BACKGROUND: Determining the underlying etiology of dementia can be challenging. Computer-based Clinical Decision Support Systems (CDSS) have the potential to provide an objective comparison of data and assist clinicians. OBJECTIVES: To assess the diagnostic impact of a CDSS, the PredictND tool, for differential diagnosis of dementia in memory clinics. METHODS: In this prospective multicenter study, we recruited 779 patients with either subjective cognitive decline (n=252), mild cognitive impairment (n=219) or any type of dementia (n=274) and followed them for a minimum of 12 months. Based on all available patient baseline data (demographics, neuropsychological tests, cerebrospinal fluid biomarkers, and MRI visual and computed ratings), the PredictND tool provides a comprehensive overview and analysis of the data with a likelihood index for five diagnostic groups: Alzheimer's disease, vascular dementia, dementia with Lewy bodies, frontotemporal dementia and subjective cognitive decline. At baseline, a clinician defined an etiological diagnosis and confidence in the diagnosis, first without and subsequently with the PredictND tool. The follow-up diagnosis was used as the reference diagnosis. RESULTS: In total, 747 patients completed the follow-up visits (53% female, 69±10 years). The etiological diagnosis changed in 13% of all cases when using the PredictND tool, but the diagnostic accuracy did not change significantly. Confidence in the diagnosis, measured by a visual analogue scale (VAS, 0-100%), increased (ΔVAS=3.0%, p<0.0001), especially in correctly changed diagnoses (ΔVAS=7.2%, p=0.0011). CONCLUSION: Adding the PredictND tool to the diagnostic evaluation affected the diagnosis and increased clinicians' confidence in the diagnosis, indicating that CDSSs could aid clinicians in the differential diagnosis of dementia.

Journal article

van Essen TA, den Boogert HF, Cnossen MC, de Ruiter GCW, Haitsma I, Polinder S, Steyerberg EW, Menon D, Maas AIR, Lingsma HF, Peul WC, Cecilia A, Hadie A, Vanni A, Judith A, Krisztina A, Norberto A, Nada A, Lasse A, Azasevac A, Audny A, Anna A, Hilko A, Gerard A, Kaspars A, Philippe A, Luisa AM, Camelia B, Rafael B, Ronald B, Pal B, Ursula B, Romuald B, Ronny B, Francisco Javier B, Bo-Michael B, Antonio B, Remy B, Habib B, Thierry B, Maurizio B, Luigi B, Christopher B, Federico B, Harald B, Erta B, Morten B, Hugo DB, Pierre B, Peter B, Alexandra B, Vibeke B, Joanne B, Camilla B, Andras B, Monika B, Emiliana C, Rosa CM, Peter C, Lozano Guillermo C, Marco C, Elsa C, Carpenter K, Ana M C-L, Francesco C, Giorgio C, Arturo C, Giuseppe C, Maryse C, Mark C, Jonathan C, Lizzie C-K, Johnny C, Jamie CD, Marta C, Amra C, Nicola C, Endre C, Marek C, Claire D-F, Francois D, Pierre D, Helen D, Veronique DK, Francesco DC, Bart D, Godard DRCW, Dula D, Ding S, Diederik D, Abhishek D, Emma D, Jens D, Guy-Loup D, George E, Heiko E, Ari E, Patrick E, Erzsebet E, Martin F, Valery FL, Feng J, Kelly F, Francesca F, Gilles F, Ulderico F, Shirin F, Alex F, Pablo G, Damien G, Dashiell G, Gao G, Karin G, Pradeep G, Alexandre G, Lelde G, Benoit G, Ben G, Jagos G, Pedro GA, Francesca G, Russell GL, Deepak G, Juanita HA, Iain H, Jed HA, Raimund H, Eirik H, Daniel H, Astrid H, Stefan H, Lindsay H, Jilske H, Peter HJ, Kristine HA, Bram J, Stefan J, Mike J, Bojan J, Jiang J-Y, Kelly J, Konstantinos K, Mladen K, Ari K, Maija K, Thomas K, Riku K, Angelos KG, Balint K, Erwin K, Ksenija K, Daniel K, Lars-Owe K, Noemi K, Alfonso L, Linda L, Steven L, Fiona L, Christian L, Rolf L, Valerie L, Jin L, Leon L, Roger L, Hester L, Dirk L, Angels L, Andrew MIR, Stephen M, Marc M, Marek M, Sebastian M, Alex M, Geoffrey M, Didier M, Francisco ML, Costanza M, Armando M, Hugues M, Alessandro M, Julia M, Charles M, Catherine M, Bela M, David M, Tomas M, Cristina M-K, Davide M, Visakh M, Lynnette M, Holger M, Nandeshet al., 2018, Variation in neurosurgical management of traumatic brain injury: A survey in 68 centers participating in the CENTER-TBI study, Acta Neurochirurgica, Vol: 161, Pages: 435-449, ISSN: 0001-6268

Background: Neurosurgical management of traumatic brain injury (TBI) is challenging, with only low-quality evidence. We aimed to explore differences in neurosurgical strategies for TBI across Europe. Methods: A survey was sent to 68 centers participating in the Collaborative European Neurotrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. The questionnaire contained 21 questions, including the decision when to operate (or not) on traumatic acute subdural hematoma (ASDH) and intracerebral hematoma (ICH), and when to perform a decompressive craniectomy (DC) in raised intracranial pressure (ICP). Results: The survey was completed by 68 centers (100%). On average, 10 neurosurgeons work in each trauma center. In all centers, a neurosurgeon was available within 30 min. Forty percent of responders reported a thickness or volume threshold for evacuation of an ASDH. Most responders (78%) decide on a primary DC in evacuating an ASDH during the operation, when swelling is present. For ICH, 3% would perform an evacuation directly to prevent secondary deterioration and 66% only in case of clinical deterioration. Most respondents (91%) reported to consider a DC for refractory high ICP. The reported cut-off ICP for DC in refractory high ICP, however, differed: 60% uses 25 mmHg, 18% 30 mmHg, and 17% 20 mmHg. Treatment strategies varied substantially between regions, specifically for the threshold for ASDH surgery and DC for refractory raised ICP. Also within center variation was present: 31% reported variation within the hospital for inserting an ICP monitor and 43% for evacuating mass lesions. Conclusion: Despite a homogeneous organization, considerable practice variation exists of neurosurgical strategies for TBI in Europe. These results provide an incentive for comparative effectiveness research to determine elements of effective neurosurgical care.

Journal article

Balaban G, Halliday BP, Costa CM, Bai W, Porter B, Rinaldi CA, Plank G, Rueckert D, Prasad SK, Bishop MJ et al., 2018, Fibrosis Microstructure Modulates Reentry in Non-ischemic Dilated Cardiomyopathy: Insights From Imaged Guided 2D Computational Modeling, Frontiers in Physiology, Vol: 9, ISSN: 1664-042X

Aims: Patients who present with non-ischemic dilated cardiomyopathy (NIDCM) and enhancement on late gadolinium magnetic resonance imaging (LGE-CMR), are at high risk of sudden cardiac death (SCD). Further risk stratification of these patients based on LGE-CMR may be improved through better understanding of fibrosis microstructure. Our aim is to examine variations in fibrosis microstructure based on LGE imaging, and quantify the effect on reentry inducibility and mechanism. Furthermore, we examine the relationship between transmural activation time differences and reentry. Methods and Results: 2D computational models were created from a single short-axis LGE-CMR image, with 401 variations in fibrosis type (interstitial, replacement) and density, as well as presence or absence of reduced conductivity (RC). Transmural activation times (TAT) were measured, as well as reentry incidence and mechanism. Reentries were inducible above specific density thresholds (0.8, 0.6 for interstitial, replacement fibrosis). RC reduced these thresholds (0.3, 0.4 for interstitial, replacement fibrosis) and increased reentry incidence (48 no RC vs. 133 with RC). Reentries were classified as rotor, micro-reentry, or macro-reentry and depended on fibrosis micro-structure. Differences in TAT at coupling intervals 210 and 500 ms predicted reentry in the models (sensitivity 89%, specificity 93%). A sensitivity analysis of TAT and reentry incidence showed that these quantities were robust to small changes in the pacing location. Conclusion: Computational models of fibrosis micro-structure underlying areas of LGE in NIDCM provide insight into the mechanisms and inducibility of reentry, and their dependence upon the type and density of fibrosis. Transmural activation times, measured at the central extent of the scar, can potentially differentiate microstructures which support reentry.

Journal article

Duan J, Schlemper J, Bai W, Dawes TJW, Bello G, Biffi C, Doumou G, De Marvao A, O’Regan DP, Rueckert D et al., 2018, Combining Deep Learning and Shape Priors for Bi-Ventricular Segmentation of Volumetric Cardiac Magnetic Resonance Images, MICCAI ShapeMI Workshop, Pages: 258-267, ISSN: 0302-9743

© 2018, Springer Nature Switzerland AG. In this paper, we combine a network-based method with image registration to develop a shape-based bi-ventricular segmentation tool for short-axis cardiac magnetic resonance (CMR) volumetric images. The method first employs a fully convolutional network (FCN) to learn the segmentation task from manually labelled ground truth CMR volumes. However, due to the presence of image artefacts in the training dataset, the resulting FCN segmentation results are often imperfect. As such, we propose a second step to refine the FCN segmentation. This step involves performing a non-rigid registration with multiple high-resolution bi-ventricular atlases, allowing the explicit shape priors to be inferred. We validate the proposed approach on 1831 healthy subjects and 200 subjects with pulmonary hypertension. Numerical experiments on the two datasets demonstrate that our approach is capable of producing accurate, high-resolution and anatomically smooth bi-ventricular models, despite the artefacts in the input CMR volumes.
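
As an illustration of the refinement idea described above, the sketch below shows a simple majority-vote fusion of atlas label maps in numpy. It assumes the high-resolution atlas segmentations have already been non-rigidly registered to the subject; `fuse_atlas_labels` is a hypothetical helper and does not reproduce the paper's registration or shape-inference machinery.

```python
import numpy as np

def fuse_atlas_labels(warped_atlas_labels, num_classes):
    """Majority-vote fusion of atlas segmentations already warped to the subject.

    warped_atlas_labels: list of integer label volumes (same shape), one per atlas.
    Returns a single fused label volume.
    """
    votes = np.zeros(warped_atlas_labels[0].shape + (num_classes,), dtype=np.int32)
    for labels in warped_atlas_labels:
        for c in range(num_classes):
            votes[..., c] += (labels == c)
    return np.argmax(votes, axis=-1)

# Toy example: three 4x4x4 "atlases" voting on a binary label map.
rng = np.random.default_rng(0)
atlases = [rng.integers(0, 2, size=(4, 4, 4)) for _ in range(3)]
fused = fuse_atlas_labels(atlases, num_classes=2)
print(fused.shape)  # (4, 4, 4)
```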

Conference paper

Dawes T, Simoes Monteiro de Marvao A, Shi W, Rueckert D, Cook S, O'Regan D et al., 2018, Identifying the optimal regional predictor of right ventricular global function: a high resolution 3D cardiac magnetic resonance study, Anaesthesia, Vol: 74, Pages: 312-320, ISSN: 0003-2409

Right ventricular (RV) function has prognostic value in acute, chronic and peri‐operative disease, although the complex RV contractile pattern makes rapid assessment difficult. Several two‐dimensional (2D) regional measures estimate RV function, however the optimal measure is not known. High‐resolution three‐dimensional (3D) cardiac magnetic resonance cine imaging was acquired in 300 healthy volunteers and a computational model of RV motion created. Points where regional function was significantly associated with global function were identified and a 2D, optimised single‐point marker (SPM‐O) of global function developed. This marker was prospectively compared with tricuspid annular plane systolic excursion (TAPSE), septum‐freewall displacement (SFD) and their fractional change (TAPSE‐F, SFD‐F) in a test cohort of 300 patients in the prediction of RV ejection fraction. RV ejection fraction was significantly associated with systolic function in a contiguous 7.3 cm2 patch of the basal RV freewall combining transverse (38%), longitudinal (35%) and circumferential (27%) contraction and coinciding with the four‐chamber view. In the test cohort, all single‐point surrogates correlated with RV ejection fraction (p < 0.010), but correlation (R) was higher for SPM‐O (R = 0.44, p < 0.001) than TAPSE (R = 0.24, p < 0.001) and SFD (R = 0.22, p < 0.001), and non‐significantly higher than TAPSE‐F (R = 0.40, p < 0.001) and SFD‐F (R = 0.43, p < 0.001). SPM‐O explained more of the observed variance in RV ejection fraction (19%) and predicted it more accurately than any other 2D marker (median error 2.8 ml vs 3.6 ml, p < 0.001). We conclude that systolic motion of the basal RV freewall predicts global function more accurately than other 2D estimators. However, no markers summarise 3D contractile patterns, limiting their predictive accuracy.

Journal article

Bozek J, Makropoulos A, Schuh A, Fitzgibbon S, Wright R, Glasser MF, Coalson TS, O'Muircheartaigh J, Hutter J, Price AN, Cordero-Grande L, Teixeira RPAG, Hughes E, Tusor N, Baruteau KP, Rutherford MA, Edwards AD, Hajnal JV, Smith SM, Rueckert D, Jenkinson M, Robinson EC et al., 2018, Construction of a neonatal cortical surface atlas using Multimodal Surface Matching in the Developing Human Connectome Project, NeuroImage, Vol: 179, Pages: 11-29, ISSN: 1053-8119

We propose a method for constructing a spatio-temporal cortical surface atlas of neonatal brains aged between 36 and 44 weeks of post-menstrual age (PMA) at the time of scan. The data were acquired as part of the Developing Human Connectome Project (dHCP), and the constructed surface atlases are publicly available. The method is based on a spherical registration approach: Multimodal Surface Matching (MSM), using cortical folding for driving the alignment. Templates have been generated for the anatomical cortical surface and for the cortical feature maps: sulcal depth, curvature, thickness, T1w/T2w myelin maps and cortical regions. To achieve this, cortical surfaces from 270 infants were first projected onto the sphere. Templates were then generated in two stages: first, a reference space was initialised via affine alignment to a group average adult template. Following this, templates were iteratively refined through repeated alignment of individuals to the template space until the variability of the average feature sets converged. Finally, bias towards the adult reference was removed by applying the inverse of the average affine transformations on the template and de-drifting the template. We used temporal adaptive kernel regression to produce age-dependant atlases for 9 weeks (36-44 weeks PMA). The generated templates capture expected patterns of cortical development including an increase in gyrification as well as an increase in thickness and T1w/T2w myelination with increasing age.
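
The age-dependent templates rest on kernel regression over post-menstrual age. Below is a minimal numpy sketch of that idea in its simplest fixed-bandwidth Gaussian form; the adaptive bandwidth selection and the spherical surface machinery of the actual pipeline are not reproduced, and `age_template` is a hypothetical helper.

```python
import numpy as np

def age_template(features, ages, target_age, sigma=1.0):
    """Weighted average of per-subject surface features for one template age.

    features: (n_subjects, n_vertices) array of a cortical feature (e.g. sulcal depth)
              already resampled to a common spherical mesh.
    ages:     (n_subjects,) post-menstrual ages in weeks.
    A Gaussian kernel in age gives nearby subjects more weight (kernel regression
    in its simplest, fixed-bandwidth form).
    """
    w = np.exp(-0.5 * ((ages - target_age) / sigma) ** 2)
    w /= w.sum()
    return (w[:, None] * features).sum(axis=0)

# Toy example: 5 subjects, 10 vertices, template at 40 weeks PMA.
rng = np.random.default_rng(1)
feats = rng.normal(size=(5, 10))
ages = np.array([36.0, 38.0, 40.0, 42.0, 44.0])
print(age_template(feats, ages, target_age=40.0).shape)  # (10,)
```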

Journal article

Schlemper J, Yang G, Ferreira P, Scott A, McGill LA, Khalique Z, Gorodezky M, Roehl M, Keegan J, Pennell D, Firmin D, Rueckert D et al., 2018, Stochastic deep compressive sensing for the reconstruction of diffusion tensor cardiac MRI, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 11070 LNCS, Pages: 295-303, ISSN: 0302-9743

© Springer Nature Switzerland AG 2018. Understanding the structure of the heart at the microscopic scale of cardiomyocytes and their aggregates provides new insights into the mechanisms of heart disease and enables the investigation of effective therapeutics. Diffusion Tensor Cardiac Magnetic Resonance (DT-CMR) is a unique non-invasive technique that can resolve the microscopic structure, organisation, and integrity of the myocardium without the need for exogenous contrast agents. However, this technique suffers from relatively low signal-to-noise ratio (SNR) and frequent signal loss due to respiratory and cardiac motion. Current DT-CMR techniques rely on acquiring and averaging multiple signal acquisitions to improve the SNR. Moreover, in order to mitigate the influence of respiratory movement, patients are required to perform many breath holds which results in prolonged acquisition durations (e.g., ~ 30 min using the existing technology). In this study, we propose a novel cascaded Convolutional Neural Networks (CNN) based compressive sensing (CS) technique and explore its applicability to improve DT-CMR acquisitions. Our simulation based studies have achieved high reconstruction fidelity and good agreement between DT-CMR parameters obtained with the proposed reconstruction and fully sampled ground truth. When compared to other state-of-the-art methods, our proposed deep cascaded CNN method and its stochastic variation demonstrated significant improvements. To the best of our knowledge, this is the first study using deep CNN based CS for the DT-CMR reconstruction. In addition, with relatively straightforward modifications to the acquisition scheme, our method can easily be translated into a method for online, at-the-scanner reconstruction enabling the deployment of accelerated DT-CMR in various clinical applications.
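
A core ingredient of cascaded CNN reconstruction is a data-consistency step between network stages, which re-imposes the measured k-space samples on the intermediate estimate. The numpy sketch below illustrates that step only, under the assumption of a single-coil Cartesian acquisition; it is not the paper's cascaded architecture, and `data_consistency` is a hypothetical helper.

```python
import numpy as np

def data_consistency(x_rec, k_sampled, mask):
    """Enforce agreement with the measured k-space samples.

    x_rec:     current image estimate from a CNN stage (2D complex array).
    k_sampled: measured, undersampled k-space (zeros where not sampled).
    mask:      boolean sampling mask in k-space.
    At sampled locations the measured data replace the network's prediction;
    elsewhere the prediction is kept. Cascades alternate CNN de-aliasing with
    this step.
    """
    k_rec = np.fft.fft2(x_rec)
    k_out = np.where(mask, k_sampled, k_rec)
    return np.fft.ifft2(k_out)

# Toy example with a random complex "image" and ~33% sampling.
rng = np.random.default_rng(2)
img = rng.normal(size=(64, 64)) + 1j * rng.normal(size=(64, 64))
mask = rng.random((64, 64)) < 0.33
k_meas = np.fft.fft2(img) * mask
print(np.allclose(np.fft.fft2(data_consistency(img * 0.5, k_meas, mask))[mask],
                  np.fft.fft2(img)[mask]))  # True: sampled lines are restored
```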

Journal article

Alansary A, Le Folgoc L, Vaillant G, Oktay O, Li Y, Bai W, Passerat-Palmbach J, Guerrero R, Kamnitsas K, Hou B, McDonagh S, Glocker B, Kainz B, Rueckert D et al., 2018, Automatic view planning with multi-scale deep reinforcement learning agents, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, ISSN: 0302-9743

We propose a fully automatic method to find standardized view planes in 3D image acquisitions. Standard view images are important in clinical practice as they provide a means to perform biometric measurements from similar anatomical regions. These views are often constrained to the native orientation of a 3D image acquisition. Navigating through target anatomy to find the required view plane is tedious and operator-dependent. For this task, we employ a multi-scale reinforcement learning (RL) agent framework and extensively evaluate several Deep Q-Network (DQN) based strategies. RL enables a natural learning paradigm by interaction with the environment, which can be used to mimic experienced operators. We evaluate our results using the distance between the anatomical landmarks and detected planes, and the angles between their normal vector and target. The proposed algorithm is assessed on the mid-sagittal and anterior-posterior commissure planes of brain MRI, and the 4-chamber long-axis plane commonly used in cardiac MRI, achieving accuracy of 1.53 mm, 1.98 mm and 4.84 mm, respectively.
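
To make the reinforcement-learning formulation concrete, the toy sketch below runs a tabular Q-learning agent that nudges a discrete plane offset towards a target slice. It only illustrates the Q-update used by DQN-style agents; the actual method operates on multi-scale image observations with a deep Q-network, none of which is reproduced here, and the environment is entirely fabricated.

```python
import numpy as np

# Toy agent that nudges a plane offset towards a target slice index.
# A tabular Q-function over discrete offsets illustrates the same update rule
# that a DQN approximates with a neural network.
n_states, actions = 11, (-1, +1)       # offsets 0..10, move plane down/up
target_state = 5                       # the "standard view" position
Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.5, 0.9, 0.2
rng = np.random.default_rng(3)

for episode in range(200):
    s = rng.integers(n_states)
    for _ in range(20):
        a = rng.integers(len(actions)) if rng.random() < eps else int(np.argmax(Q[s]))
        s_next = int(np.clip(s + actions[a], 0, n_states - 1))
        r = 1.0 if s_next == target_state else -0.1    # reward for reaching the view
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next
        if r > 0:
            break

print(np.argmax(Q, axis=1))  # learned policy: move towards offset 5 from either side
```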

Conference paper

Hou B, Miolane N, Khanal B, Lee M, Alansary A, McDonagh SG, Hajnal JV, Rueckert D, Glocker B, Kainz B et al., 2018, Computing CNN loss and gradients for pose estimation with Riemannian geometry, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, ISSN: 0302-9743

Pose estimation, i.e. predicting a 3D rigid transformation with respect to a fixed co-ordinate frame in SE(3), is an omnipresent problem in medical image analysis. Deep learning methods often parameterise poses with a representation that separates rotation and translation. As commonly available frameworks do not provide means to calculate loss on a manifold, regression is usually performed using the L2-norm independently on the rotation’s and the translation’s parameterisations. This is a metric for linear spaces that does not take into account the Lie group structure of SE(3). In this paper, we propose a general Riemannian formulation of the pose estimation problem, and train CNNs directly on SE(3) equipped with a left-invariant Riemannian metric. The loss between the ground truth and predicted pose (elements of the manifold) is calculated as the Riemannian geodesic distance, which couples together the translation and rotation components. Network weights are updated by back-propagating the gradient with respect to the predicted pose on the tangent space of the manifold SE(3). We thoroughly evaluate the effectiveness of our loss function by comparing its performance with popular and most commonly used existing methods, on tasks such as image-based localisation and intensity-based 2D/3D registration. We also show that hyper-parameters, used in our loss function to weight the contribution between rotations and translations, can be intrinsically calculated from the dataset to achieve greater performance margins.
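
A minimal sketch of the geodesic-distance idea is shown below: the relative transform between prediction and ground truth is mapped to the Lie algebra with a matrix logarithm, and its norm gives a single scalar that couples rotation and translation. It assumes an identity weighting of the se(3) components and omits the manifold back-propagation, so it is not the paper's implementation; `se3_geodesic_distance` is a hypothetical helper.

```python
import numpy as np
from scipy.linalg import logm, expm

def se3_geodesic_distance(T_pred, T_true):
    """Left-invariant geodesic-style distance between two rigid transforms.

    T_pred, T_true: 4x4 homogeneous matrices in SE(3).
    The relative transform is mapped to the Lie algebra with the matrix
    logarithm; its norm couples rotation and translation in one scalar,
    unlike an independent L2 loss on the two parameterisations.
    """
    rel = np.linalg.inv(T_true) @ T_pred
    xi = logm(rel)                          # element of se(3), a 4x4 matrix
    return np.linalg.norm(xi.real, ord='fro')

# Toy example: small rotation about z plus a translation.
theta = 0.1
twist = np.array([[0, -theta, 0, 0.2],
                  [theta, 0, 0, 0.0],
                  [0, 0, 0, 0.1],
                  [0, 0, 0, 0]])
T = expm(twist)                             # exp map: se(3) -> SE(3)
print(se3_geodesic_distance(T, np.eye(4)))  # ~ norm of the generating twist
```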

Conference paper

Li Y, Khanal B, Hou B, Alansary A, Cerrolaza J, Sinclair M, Matthew J, Gupta C, Knight C, Kainz B, Rueckert D et al., Standard plane detection in 3D fetal ultrasound using an iterative transformation network, 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Verlag, ISSN: 0302-9743

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83 mm/12.7 degrees and 3.80 mm/12.6 degrees for the transventricular and transcerebellar planes respectively and takes 0.46 s per plane.
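
The inference procedure is essentially a fixed-point iteration on plane parameters. The sketch below shows that loop with a stand-in function in place of the trained CNN (`fake_cnn` simply moves a fraction of the way towards a hidden target), purely to illustrate the convergence behaviour described above; both helpers are hypothetical.

```python
import numpy as np

def iterative_plane_search(predict_update, plane0, max_iters=50, tol=1e-3):
    """Generic ITN-style inference loop.

    predict_update(plane) stands in for the trained CNN: given the image of the
    current plane it returns a transformation update (here, a parameter delta).
    The loop applies updates until they become negligible.
    """
    plane = np.asarray(plane0, dtype=float)
    for _ in range(max_iters):
        delta = predict_update(plane)
        plane = plane + delta
        if np.linalg.norm(delta) < tol:
            break
    return plane

# Stand-in "network": moves 30% of the way towards a hidden standard plane.
target = np.array([10.0, -4.0, 25.0])                 # e.g. two angles + an offset
fake_cnn = lambda p: 0.3 * (target - p)
print(iterative_plane_search(fake_cnn, plane0=np.zeros(3)))  # converges near target
```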

Conference paper

Li Y, Alansary A, Cerrolaza J, Khanal B, Sinclair M, Matthew J, Gupta C, Knight C, Kainz B, Rueckert D et al., Fast multiple landmark localisation using a patch-based iterative network, 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Verlag, ISSN: 0302-9743

We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than a dense sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. PIN achieves quantitatively an average landmark localisation error of 5.59 mm and a runtime of 0.44 s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth.

Conference paper

Cerrolaza JJ, Li Y, Biffi C, Gomez A, Sinclair M, Matthew J, Knight C, Kainz B, Rueckert D et al., 2018, 3D fetal skull reconstruction from 2DUS via deep conditional generative networks, International Conference on Medical Image Computing and Computer-Assisted Intervention, Pages: 383-391, ISSN: 0302-9743

© Springer Nature Switzerland AG 2018. 2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics to characterize the true 3D anatomy of the fetus, the adoption of 3DUS is still very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fetal skull from 2DUS standard planes of the head routinely acquired during the fetal screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.

Conference paper

Seitzer M, Yang G, Schlemper J, Oktay O, Würfl T, Christlein V, Wong T, Mohiaddin R, Firmin D, Keegan J, Rueckert D, Maier A et al., 2018, Adversarial and perceptual refinement for compressed sensing MRI reconstruction, 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2018), Pages: 232-240, ISSN: 0302-9743

© Springer Nature Switzerland AG 2018. Deep learning approaches have shown promising performance for compressed sensing-based Magnetic Resonance Imaging. While deep neural networks trained with mean squared error (MSE) loss functions can achieve high peak signal to noise ratio, the reconstructed images are often blurry and lack sharp details, especially for higher undersampling rates. Recently, adversarial and perceptual loss functions have been shown to achieve more visually appealing results. However, it remains an open question how to (1) optimally combine these loss functions with the MSE loss function and (2) evaluate such a perceptual enhancement. In this work, we propose a hybrid method, in which a visual refinement component is learnt on top of an MSE loss-based reconstruction network. In addition, we introduce a semantic interpretability score, measuring the visibility of the region of interest in both ground truth and reconstructed images, which allows us to objectively quantify the usefulness of the image quality for image post-processing and analysis. Applied on a large cardiac MRI dataset simulated with 8-fold undersampling, we demonstrate significant improvements (p<0.01) over the state-of-the-art in both a human observer study and the semantic interpretability score.
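
A hedged PyTorch sketch of the kind of hybrid objective described above is given below: an MSE term, an adversarial term from a discriminator, and a perceptual term computed in some feature space. The loss weights, tensor shapes and the `refinement_loss` helper are placeholders, not the values or architecture used in the paper.

```python
import torch
import torch.nn.functional as F

def refinement_loss(x_rec, x_gt, disc_logits_fake, feat_rec, feat_gt,
                    w_adv=0.01, w_perc=0.1):
    """Hybrid loss for a visual refinement network (a sketch, weights are placeholders).

    x_rec / x_gt:        refined reconstruction and fully sampled ground truth.
    disc_logits_fake:    discriminator output for the refined image (adversarial term).
    feat_rec / feat_gt:  feature-space activations of both images from some fixed
                         network (perceptual term).
    """
    mse = F.mse_loss(x_rec, x_gt)
    adv = F.binary_cross_entropy_with_logits(
        disc_logits_fake, torch.ones_like(disc_logits_fake))  # try to fool the discriminator
    perc = F.mse_loss(feat_rec, feat_gt)
    return mse + w_adv * adv + w_perc * perc

# Toy tensors standing in for network outputs.
x_rec, x_gt = torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64)
d_fake = torch.randn(2, 1)
f_rec, f_gt = torch.rand(2, 128), torch.rand(2, 128)
print(refinement_loss(x_rec, x_gt, d_fake, f_rec, f_gt).item())
```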

Conference paper

Qin C, Bai W, Schlemper J, Petersen SE, Piechnik SK, Neubauer S, Rueckert D et al., 2018, Joint motion estimation and segmentation from undersampled cardiac MR image, International Conference On Medical Image Computing & Computer Assisted Intervention, Pages: 55-63, ISSN: 0302-9743

© 2018, Springer Nature Switzerland AG. Accelerating the acquisition of magnetic resonance imaging (MRI) is a challenging problem, and many works have been proposed to reconstruct images from undersampled k-space data. However, if the main purpose is to extract certain quantitative measures from the images, perfect reconstructions may not always be necessary as long as the images enable the means of extracting the clinically relevant measures. In this paper, we work on jointly predicting cardiac motion estimation and segmentation directly from undersampled data, which are two important steps in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In particular, a unified model consisting of both motion estimation branch and segmentation branch is learned by optimising the two tasks simultaneously. Additional corresponding fully-sampled images are incorporated into the network as a parallel sub-network to enhance and guide the learning during the training process. Experimental results using cardiac MR images from 220 subjects show that the proposed model is robust to undersampled data and is capable of predicting results that are close to that from fully-sampled ones, while bypassing the usual image reconstruction stage.

Conference paper

Qin C, Bai W, Schlemper J, Petersen SE, Piechnik SK, Neubauer S, Rueckert D et al., 2018, Joint learning of motion estimation and segmentation for cardiac MR image sequences, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 11071 LNCS, Pages: 472-480, ISSN: 0302-9743

Cardiac motion estimation and segmentation play important roles in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In this paper, we propose a novel deep learning method for joint estimation of motion and segmentation from cardiac MR image sequences. The proposed network consists of two branches: a cardiac motion estimation branch which is built on a novel unsupervised Siamese style recurrent spatial transformer network, and a cardiac segmentation branch that is based on a fully convolutional network. In particular, a joint multi-scale feature encoder is learned by optimizing the segmentation branch and the motion estimation branch simultaneously. This enables the weakly-supervised segmentation by taking advantage of features that are unsupervisedly learned in the motion estimation branch from a large amount of unannotated data. Experimental results using cardiac MRI images from 220 subjects show that the joint learning of both tasks is complementary and the proposed models outperform the competing methods significantly in terms of accuracy and speed.

Journal article

Arslan S, Ktena SI, Glocker B, Rueckert D et al., 2018, Graph saliency maps through spectral convolutional networks: application to sex classification with brain connectivity, International Workshop on Graphs in Biomedical Image Analysis, Publisher: Springer Verlag, ISSN: 0302-9743

Graph convolutional networks (GCNs) allow to apply traditional convolution operations in non-Euclidean domains, where data are commonly modelled as irregular graphs. Medical imaging and, in particular, neuroscience studies often rely on such graph representations, with brain connectivity networks being a characteristic example, while ultimately seeking the locus of phenotypic or disease-related differences in the brain. These regions of interest (ROIs) are, then, considered to be closely associated with function and/or behaviour. Driven by this, we explore GCNs for the task of ROI identification and propose a visual attribution method based on class activation mapping. By undertaking a sex classification task as proof of concept, we show that this method can be used to identify salient nodes (brain regions) without prior node labels. Based on experiments conducted on neuroimaging data of more than 5000 participants from UK Biobank, we demonstrate the robustness of the proposed method in highlighting reproducible regions across individuals. We further evaluate the neurobiological relevance of the identified regions based on evidence from large-scale UK Biobank studies.
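
The visual attribution idea parallels class activation mapping on images: node saliency is the class-weighted sum of the node's channel activations from the last graph convolutional layer. The numpy sketch below illustrates this in its simplest (global-average-pooling, linear-classifier) form; it is not the spectral GCN used in the paper, and `graph_cam` is a hypothetical helper.

```python
import numpy as np

def graph_cam(node_features, class_weights, target_class):
    """Class-activation-style saliency for graph nodes (a simplified sketch).

    node_features: (n_nodes, n_channels) activations from the last graph
                   convolutional layer, before global pooling.
    class_weights: (n_classes, n_channels) weights of the final linear classifier.
    The saliency of a node is the class-weighted sum of its channel activations,
    mirroring class activation mapping on image feature maps.
    """
    saliency = node_features @ class_weights[target_class]
    return np.maximum(saliency, 0)   # keep positive evidence only, as in CAM

# Toy example: 5 brain regions, 8 channels, 2 classes (e.g. female/male).
rng = np.random.default_rng(4)
feats = rng.random((5, 8))
w = rng.normal(size=(2, 8))
print(graph_cam(feats, w, target_class=1))  # one saliency value per node/region
```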

Conference paper

Schlemper J, Castro DC, Bai W, Qin C, Oktay O, Duan J, Price AN, Hajnal J, Rueckert D et al., 2018, Bayesian deep learning for accelerated MR image reconstruction, International Workshop on Machine Learning for Medical Image Reconstruction, Publisher: Springer, Cham, Pages: 64-71, ISSN: 0302-9743

Recently, many deep learning (DL) based MR image reconstruction methods have been proposed with promising results. However, only a handful of work has been focussing on characterising the behaviour of deep networks, such as investigating when the networks may fail to reconstruct. In this work, we explore the applicability of Bayesian DL techniques to model the uncertainty associated with DL-based reconstructions. In particular, we apply MC-dropout and heteroscedastic loss to the reconstruction networks to model epistemic and aleatoric uncertainty. We show that the proposed Bayesian methods achieve competitive performance when the test images are relatively far from the training data distribution and outperforms when the baseline method is over-parametrised. In addition, we qualitatively show that there seems to be a correlation between the magnitude of the produced uncertainty maps and the error maps, demonstrating the potential utility of the Bayesian DL methods for assessing the reliability of the reconstructed images.
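
The two uncertainty mechanisms mentioned above can be sketched compactly in PyTorch: a heteroscedastic Gaussian negative log-likelihood in which the network also predicts a log-variance (aleatoric), and MC-dropout inference in which dropout is kept active at test time and several stochastic passes are averaged (epistemic). The toy model and helper names below are stand-ins, not the reconstruction network of the paper.

```python
import torch
import torch.nn as nn

# Heteroscedastic loss: the network predicts a mean and a log-variance per output,
# so it can attenuate the loss where the data are inherently noisy (aleatoric).
def heteroscedastic_loss(mean, log_var, target):
    return (0.5 * torch.exp(-log_var) * (mean - target) ** 2 + 0.5 * log_var).mean()

# MC-dropout: keep dropout active at test time and average several stochastic
# passes; the spread across passes approximates epistemic uncertainty.
def mc_dropout_predict(model, x, n_samples=20):
    model.train()                      # keeps dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(0), preds.std(0)

# Toy model standing in for a reconstruction network.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Dropout(0.5), nn.Linear(64, 16))
x = torch.rand(4, 16)
mean, epistemic_std = mc_dropout_predict(model, x)
print(mean.shape, epistemic_std.shape)
print(heteroscedastic_loss(mean, torch.zeros_like(mean), torch.rand_like(mean)).item())
```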

Conference paper

Schlemper J, Oktay O, Bai W, Castro DC, Duan J, Qin C, Hajnal JV, Rueckert D et al., 2018, Cardiac MR segmentation from undersampled k-space using deep latent representation learning, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer, Cham, Pages: 259-267, ISSN: 0302-9743

© Springer Nature Switzerland AG 2018. Reconstructing magnetic resonance imaging (MRI) from undersampled k-space enables the accelerated acquisition of MRI but is a challenging problem. However, in many diagnostic scenarios, perfect reconstructions are not necessary as long as the images allow clinical practitioners to extract clinically relevant parameters. In this work, we present a novel deep learning framework for reconstructing such clinical parameters directly from undersampled data, expanding on the idea of application-driven MRI. We propose two deep architectures, an end-to-end synthesis network and a latent feature interpolation network, to predict cardiac segmentation maps from extremely undersampled dynamic MRI data, bypassing the usual image reconstruction stage altogether. We perform a large-scale simulation study using UK Biobank data containing nearly 1000 test subjects and show that with the proposed approaches, an accurate estimate of clinical parameters such as ejection fraction can be obtained from fewer than 10 k-space lines per time-frame.

Conference paper

Biffi C, Oktay O, Tarroni G, Bai W, De Marvao A, Doumou G, Rajchl M, Bedair R, Prasad S, Cook S, O’Regan D, Rueckert D et al., 2018, Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer, Pages: 464-471, ISSN: 0302-9743

Alterations in the geometry and function of the heart define well-established causes of cardiovascular disease. However, current approaches to the diagnosis of cardiovascular diseases often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity in quantifying complex structural and functional phenotypes. Deep learning approaches have recently achieved success for tasks such as classification or segmentation of medical images, but lack interpretability in the feature extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for automatic classification of images from patients with cardiac diseases associated with structural remodeling. The model leverages interpretable task-specific anatomic patterns learned from 3D segmentations. It further allows to visualise and quantify the learned pathology-specific remodeling patterns in the original input space of the images. This approach yields high accuracy in the categorization of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-centre dataset (100%) as well on the ACDC MICCAI 2017 dataset (90%). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians to improve diagnostic accuracy and enhance patient risk-stratification.

Conference paper

Bai W, Suzuki H, Qin C, Tarroni G, Oktay O, Matthews PM, Rueckert D et al., 2018, Recurrent neural networks for aortic image sequence segmentation with sparse annotations, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer Nature Switzerland AG, Pages: 586-594, ISSN: 0302-9743

Segmentation of image sequences is an important task in medical image analysis, which enables clinicians to assess the anatomy and function of moving organs. However, direct application of a segmentation algorithm to each time frame of a sequence may ignore the temporal continuity inherent in the sequence. In this work, we propose an image sequence segmentation algorithm by combining a fully convolutional network with a recurrent neural network, which incorporates both spatial and temporal information into the segmentation task. A key challenge in training this network is that the available manual annotations are temporally sparse, which forbids end-to-end training. We address this challenge by performing non-rigid label propagation on the annotations and introducing an exponentially weighted loss function for training. Experiments on aortic MR image sequences demonstrate that the proposed method significantly improves both accuracy and temporal smoothness of segmentation, compared to a baseline method that utilises spatial information only. It achieves an average Dice metric of 0.960 for the ascending aorta and 0.953 for the descending aorta.

Conference paper

Meng Q, Baumgartner C, Sinclair M, Housden J, Rajchl M, Gomez A, Hou B, Toussaint N, Zimmer V, Tan J, Matthew J, Rueckert D, Schnabel J, Kainz B et al., 2018, Automatic shadow detection in 2D ultrasound images, International Workshop on Preterm, Perinatal and Paediatric Image Analysis, Pages: 66-75, ISSN: 0302-9743

© Springer Nature Switzerland AG 2018. Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method: (1) initializes potential shadow areas based on a classification task. (2) extends potential shadow areas using a GAN model. (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.

Conference paper

Valindria V, Lavdas I, Cerrolaza J, Aboagye EO, Rockall A, Rueckert D, Glocker B et al., 2018, Small organ segmentation in whole-body MRI using a two-stage FCN and weighting schemes, International Workshop on Machine Learning in Medical Imaging (MLMI) 2018, Publisher: Springer Verlag, Pages: 346-354, ISSN: 0302-9743

Accurate and robust segmentation of small organs in whole-body MRI is difficult due to anatomical variation and class imbalance. Recent deep network based approaches have demonstrated promising performance on abdominal multi-organ segmentations. However, the performance on small organs is still suboptimal as these occupy only small regions of the whole-body volumes with unclear boundaries and variable shapes. A coarse-to-fine, hierarchical strategy is a common approach to alleviate this problem, however, this might miss useful contextual information. We propose a two-stage approach with weighting schemes based on auto-context and spatial atlas priors. Our experiments show that the proposed approach can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.

Conference paper

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2018, Automated cardiovascular magnetic resonance image analysis with fully convolutional networks, Journal of Cardiovascular Magnetic Resonance, Vol: 20, ISSN: 1097-6647

Background: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Methods: Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). Results: By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement was 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 gram for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-ax
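
The Dice metric quoted above measures volumetric overlap between the automated and manual label maps; a minimal numpy version is sketched below for reference. The label conventions and toy data are illustrative only.

```python
import numpy as np

def dice(seg_a, seg_b, label):
    """Dice overlap of one structure between two label maps (e.g. automated vs manual)."""
    a = (seg_a == label)
    b = (seg_b == label)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom > 0 else 1.0

# Toy example: two 3D label maps with labels 0 (background), 1 (LV cavity), 2 (myocardium).
rng = np.random.default_rng(5)
auto = rng.integers(0, 3, size=(8, 8, 8))
manual = auto.copy()
manual[:2] = rng.integers(0, 3, size=(2, 8, 8))   # perturb a few slices
print(dice(auto, manual, label=1))
```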

Journal article

Robinson R, Oktay O, Bai W, Valindria V, Sanghvi MM, Aung N, Paiva JM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Kainz B, Piechnik SK, Neubauer S, Petersen SE, Page C, Rueckert D, Glocker B et al., 2018, Real-time prediction of segmentation quality, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, ISSN: 0302-9743

Recent advances in deep learning based image segmentation methods have enabled real-time performance with human-level accuracy. However, occasionally even the best method fails due to low image quality, artifacts or unexpected behaviour of black box algorithms. Being able to predict segmentation quality in the absence of ground truth is of paramount importance in clinical practice, but also in large-scale studies to avoid the inclusion of invalid data in subsequent analysis. In this work, we propose two approaches of real-time automated quality control for cardiovascular MR segmentations using deep learning. First, we train a neural network on 12,880 samples to predict Dice Similarity Coefficients (DSC) on a per-case basis. We report a mean average error (MAE) of 0.03 on 1,610 test samples and 97% binary classification accuracy for separating low and high quality segmentations. Secondly, in the scenario where no manually annotated data is available, we train a network to predict DSC scores from estimated quality obtained via a reverse testing strategy. We report an MAE = 0.14 and 91% binary classification accuracy for this case. Predictions are obtained in real-time which, when combined with real-time segmentation methods, enables instant feedback on whether an acquired scan is analysable while the patient is still in the scanner. This further enables new applications of optimising image acquisition towards best possible analysis results.

Conference paper

Bruun M, Rhodius-Meester HFM, Koikkalainen J, Baroni M, Gjerum L, Lemstra AW, Barkhof F, Remes AM, Urhemaa T, Tolonen A, Rueckert D, van Gils M, Frederiksen KS, Waldemar G, Scheltens P, Mecocci P, Soininen H, Lötjönen J, Hasselbalch SG, van der Flier WM et al., 2018, Evaluating combinations of diagnostic tests to discriminate different dementia types, Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring, Vol: 10, Pages: 509-518, ISSN: 2352-8729

Introduction: We studied, using a data-driven approach, how different combinations of diagnostic tests contribute to the differential diagnosis of dementia. Methods: In this multicenter study, we included 356 patients with Alzheimer's disease, 87 frontotemporal dementia, 61 dementia with Lewy bodies, 38 vascular dementia, and 302 controls. We used a classifier to assess accuracy for individual performance and combinations of cognitive tests, cerebrospinal fluid biomarkers, and automated magnetic resonance imaging features for pairwise differentiation between dementia types. Results: Cognitive tests had good performance in separating any type of dementia from controls. Cerebrospinal fluid optimally contributed to identifying Alzheimer's disease, whereas magnetic resonance imaging features aided in separating vascular dementia, dementia with Lewy bodies, and frontotemporal dementia. Combining diagnostic tests increased the accuracy, with balanced accuracies ranging from 78% to 97%. Discussion: Different diagnostic tests have their distinct roles in differential diagnostics of dementias. Our results indicate that combining different diagnostic tests may increase the accuracy further.

Journal article

Chen L, Carlton Jones AL, Mair G, Patel R, Gontsarova A, Ganesalingam J, Math N, Dawson A, Basaam A, Cohen D, Mehta A, Wardlaw J, Rueckert D, Bentley P et al., 2018, Rapid automated quantification of cerebral leukoaraiosis on CT: a multicentre validation study, Radiology, Vol: 288, Pages: 573-581, ISSN: 0033-8419

Purpose - To validate a fully-automated, machine-learning method (random forest) for segmenting cerebral white matter lesions (WML) on computerized tomography (CT). Materials and Methods – A retrospective sample of 1082 acute ischemic stroke cases was obtained, comprising unselected patients: 1) treated with thrombolysis; or 2) undergoing contemporaneous MR imaging and CT; and 3) a subset of IST-3 trial participants. Automated (‘Auto’) WML images were validated relative to experts’ manual tracings on CT, and co-registered FLAIR-MRI; and ratings using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between Auto and expert ratings.Results - Auto WML volumes correlated strongly with expert-delineated WML volumes on MR imaging and on CT (r2=0.85, 0.71 respectively; p<0.001). Spatial-similarity of Auto-maps, relative to MRI-WML, was not significantly different to that of expert CT-WML tracings. Individual expert CT-WML volumes correlated well with each other (r2=0.85), but varied widely (range: 91% of mean estimate; median 11 cc; range: 0.2 – 68 cc). Agreements between Auto and consensus-expert ratings were superior or similar to agreements between individual pairs of experts (kappa: 0.60, 0.64 vs. 0.51, 0.67 for two score systems; p<0.01 for first comparison). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (p>0.05). Auto preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total Auto processing time averaged 109s (range: 79 - 140 s). Conclusion - An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts.
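
As a rough illustration of the voxel-wise random-forest formulation, the scikit-learn sketch below classifies synthetic voxel feature vectors into lesion versus normal tissue. The feature set, labels and thresholds are fabricated stand-ins; the preprocessing and training data of the validated method are not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Voxel-wise random forest sketch: each voxel becomes one sample whose features
# might include its CT intensity plus simple neighbourhood/location descriptors
# (illustrative choices only, not the paper's feature set).
rng = np.random.default_rng(6)
n_voxels = 5000
X = np.column_stack([
    rng.normal(30, 8, n_voxels),    # e.g. Hounsfield intensity
    rng.normal(0, 1, n_voxels),     # e.g. local mean / texture feature
    rng.random(n_voxels),           # e.g. normalised distance to the ventricles
])
y = (X[:, 0] > 35).astype(int)      # toy "WML vs normal tissue" labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:4000], y[:4000])
prob_wml = clf.predict_proba(X[4000:])[:, 1]   # per-voxel lesion probability
print(prob_wml.shape, clf.score(X[4000:], y[4000:]))
```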

Journal article

Hou B, Khanal B, Alansary A, McDonagh S, Davidson A, Rutherford M, Hajnal J, Rueckert D, Glocker B, Kainz B et al., 2018, 3D reconstruction in canonical co-ordinate space from arbitrarily oriented 2D images, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 1737-1750, ISSN: 0278-0062

Limited capture range, and the requirement to provide high quality initialization for optimization-based 2D/3D image registration methods, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment, no spatial transform initialization is required. To find image transformations we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.

Journal article

Bhuva A, Treibel TA, De Marvao A, Biffi C, Dawes T, Doumou G, Bai W, Oktay O, Jones S, Davies R, Chaturvedi N, Rueckert D, Hughes A, Moon JC, Manisty CH et al., 2018, Septal hypertrophy in aortic stenosis and its regression after valve replacement is more plastic in males than females: insights from 3D machine learning approach, European-Society-of-Cardiology Congress, Publisher: OXFORD UNIV PRESS, Pages: 1132-1132, ISSN: 0195-668X

Conference paper

Parisot S, Ktena SI, Ferrante E, Lee M, Guerrero R, Glocker B, Rueckert D et al., 2018, Disease prediction using graph convolutional networks: application to Autism Spectrum Disorder and Alzheimer's disease, Medical Image Analysis, Vol: 48, Pages: 117-130, ISSN: 1361-8415

Graphs are widely used as a natural framework that captures interactions between individual elements represented as nodes in a graph. In medical applications, specifically, nodes can represent individuals within a potentially large population (patients or healthy controls) accompanied by a set of features, while the graph edges incorporate associations between subjects in an intuitive manner. This representation allows to incorporate the wealth of imaging and non-imaging information as well as individual subject features simultaneously in disease classification tasks. Previous graph-based approaches for supervised or unsupervised learning in the context of disease prediction solely focus on pairwise similarities between subjects, disregarding individual characteristics and features, or rather rely on subject-specific imaging feature vectors and fail to model interactions between them. In this paper, we present a thorough evaluation of a generic framework that leverages both imaging and non-imaging information and can be used for brain analysis in large populations. This framework exploits Graph Convolutional Networks (GCNs) and involves representing populations as a sparse graph, where its nodes are associated with imaging-based feature vectors, while phenotypic information is integrated as edge weights. The extensive evaluation explores the effect of each individual component of this framework on disease prediction performance and further compares it to different baselines. The framework performance is tested on two large datasets with diverse underlying data, ABIDE and ADNI, for the prediction of Autism Spectrum Disorder and conversion to Alzheimer's disease, respectively. Our analysis shows that our novel framework can improve over state-of-the-art results on both databases, with 70.4% classification accuracy for ABIDE and 80.0% for ADNI.
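
The population-graph construction can be sketched as follows: nodes carry imaging feature vectors, and edge weights combine imaging-feature similarity with agreement on phenotypic (non-imaging) variables, followed by sparsification. The numpy sketch below is a simplified stand-in for the graph used in the paper, with hypothetical choices for the similarity measure and the sparsification rule; `population_graph` is not the authors' code.

```python
import numpy as np

def population_graph(imaging_features, phenotypes, k=10):
    """Build a sparse population graph (a simplified sketch of the idea).

    imaging_features: (n_subjects, d) imaging-derived feature vectors (node features).
    phenotypes:       (n_subjects, p) non-imaging data (e.g. sex, site, age bin).
    Edge weight = imaging-feature similarity, scaled by how many phenotypic
    entries two subjects share; only the k strongest edges per node are kept.
    """
    n = len(imaging_features)
    f = imaging_features / (np.linalg.norm(imaging_features, axis=1, keepdims=True) + 1e-8)
    sim = f @ f.T                                            # cosine similarity
    pheno_agree = (phenotypes[:, None, :] == phenotypes[None, :, :]).sum(-1)
    W = sim * pheno_agree
    np.fill_diagonal(W, 0)
    # sparsify: keep the k largest weights per row, then symmetrise
    keep = np.argsort(W, axis=1)[:, -k:]
    A = np.zeros_like(W)
    rows = np.arange(n)[:, None]
    A[rows, keep] = W[rows, keep]
    return np.maximum(A, A.T)

# Toy example: 50 subjects, 20 imaging features, 2 phenotypic variables.
rng = np.random.default_rng(7)
A = population_graph(rng.normal(size=(50, 20)), rng.integers(0, 2, size=(50, 2)), k=5)
print(A.shape, (A > 0).sum())
```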

Journal article

Ledig C, Schuh A, Guerrero, Heckemann RA, Rueckert D et al., 2018, Structural brain imaging in Alzheimer’s disease and mild cognitive impairment: biomarker analysis and shared morphometry database, Scientific Reports, Vol: 8, ISSN: 2045-2322

Magnetic resonance (MR) imaging is a powerful technique for non-invasive in-vivo imaging of the human brain. We employed a recently validated method for robust cross-sectional and longitudinal segmentation of MR brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Specifically, we segmented 5074 MR brain images into 138 anatomical regions and extracted time-point specific structural volumes and volume change during follow-up intervals of 12 or 24 months. We assessed the extracted biomarkers by determining their power to predict diagnostic classification and by comparing atrophy rates to published meta-studies. The approach enables comprehensive analysis of structural changes within the whole brain. The discriminative power of individual biomarkers (volumes/atrophy rates) is on par with results published by other groups. We publish all quality-checked brain masks, structural segmentations, and extracted biomarkers along with this article. We further share the methodology for brain extraction (pincram) and segmentation (MALPEM, MALPEM4D) as open source projects with the community. The identified biomarkers hold great potential for deeper analysis, and the validated methodology can readily be applied to other imaging cohorts.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
