# Professor Daniel Rueckert

Faculty of Engineering, Department of Computing


### Contact

+44 (0)20 7594 8333
d.rueckert


### Location

568 Huxley Building, South Kensington Campus


## Publications


740 results found

Schlemper J, Oktay O, Bai W, Castro DC, Duan J, Qin C, Hajnal JV, Rueckert D et al., 2018, Cardiac MR segmentation from undersampled k-space using deep latent representation learning, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer, Cham, Pages: 259-267, ISSN: 0302-9743

Reconstructing magnetic resonance imaging (MRI) from undersampled k-space enables the accelerated acquisition of MRI but is a challenging problem. However, in many diagnostic scenarios, perfect reconstructions are not necessary as long as the images allow clinical practitioners to extract clinically relevant parameters. In this work, we present a novel deep learning framework for reconstructing such clinical parameters directly from undersampled data, expanding on the idea of application-driven MRI. We propose two deep architectures, an end-to-end synthesis network and a latent feature interpolation network, to predict cardiac segmentation maps from extremely undersampled dynamic MRI data, bypassing the usual image reconstruction stage altogether. We perform a large-scale simulation study using UK Biobank data containing nearly 1000 test subjects and show that with the proposed approaches, an accurate estimate of clinical parameters such as ejection fraction can be obtained from fewer than 10 k-space lines per time-frame.

Conference paper

Qin C, Bai W, Schlemper J, Petersen SE, Piechnik SK, Neubauer S, Rueckert D et al., 2018, Joint learning of motion estimation and segmentation for cardiac MR image sequences, International Conference on Medical Image Computing and Computer-Assisted Intervention, Publisher: Springer Verlag, Pages: 472-480, ISSN: 0302-9743

Cardiac motion estimation and segmentation play important roles in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In this paper, we propose a novel deep learning method for joint estimation of motion and segmentation from cardiac MR image sequences. The proposed network consists of two branches: a cardiac motion estimation branch which is built on a novel unsupervised Siamese style recurrent spatial transformer network, and a cardiac segmentation branch that is based on a fully convolutional network. In particular, a joint multi-scale feature encoder is learned by optimizing the segmentation branch and the motion estimation branch simultaneously. This enables weakly-supervised segmentation by taking advantage of features learned without supervision in the motion estimation branch from a large amount of unannotated data. Experimental results using cardiac MR images from 220 subjects show that the joint learning of both tasks is complementary and the proposed models outperform the competing methods significantly in terms of accuracy and speed.

Conference paper

Schlemper J, Castro DC, Bai W, Qin C, Oktay O, Duan J, Price AN, Hajnal J, Rueckert D et al., 2018, Bayesian deep learning for accelerated MR image reconstruction, International Workshop on Machine Learning for Medical Image Reconstruction, Publisher: Springer, Cham, Pages: 64-71, ISSN: 0302-9743

Recently, many deep learning (DL) based MR image reconstruction methods have been proposed with promising results. However, only a handful of works have focused on characterising the behaviour of deep networks, such as investigating when the networks may fail to reconstruct. In this work, we explore the applicability of Bayesian DL techniques to model the uncertainty associated with DL-based reconstructions. In particular, we apply MC-dropout and heteroscedastic loss to the reconstruction networks to model epistemic and aleatoric uncertainty. We show that the proposed Bayesian methods achieve competitive performance when the test images are relatively far from the training data distribution, and outperform the baseline method when it is over-parametrised. In addition, we qualitatively show that there appears to be a correlation between the magnitude of the produced uncertainty maps and the error maps, demonstrating the potential utility of the Bayesian DL methods for assessing the reliability of the reconstructed images.

Conference paper
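
The MC-dropout idea mentioned in the abstract above, keeping dropout active at test time and sampling several stochastic forward passes, can be sketched in a few lines of NumPy. The toy one-layer network below is a hypothetical stand-in, not the paper's reconstruction model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-layer "network" with fixed random weights, standing in for
# a trained reconstruction model (hypothetical, for illustration only).
W = rng.normal(size=(8, 8))

def stochastic_forward(x, p_drop=0.5):
    """One forward pass with dropout kept ON at test time (MC-dropout)."""
    mask = rng.random(W.shape[0]) >= p_drop            # randomly drop units
    return np.maximum(W @ x, 0.0) * mask / (1.0 - p_drop)

def mc_dropout_predict(x, T=500):
    """Mean over T stochastic passes gives the prediction; the per-output
    variance across passes approximates epistemic uncertainty."""
    samples = np.stack([stochastic_forward(x) for _ in range(T)])
    return samples.mean(axis=0), samples.var(axis=0)

mean, var = mc_dropout_predict(rng.normal(size=8))
```

The variance map plays the role of the uncertainty maps the abstract compares against error maps.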

Biffi C, Oktay O, Tarroni G, Bai W, De Marvao A, Doumou G, Rajchl M, Bedair R, Prasad S, Cook S, O’Regan D, Rueckert D et al., 2018, Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer, Pages: 464-471, ISSN: 0302-9743

Alterations in the geometry and function of the heart define well-established causes of cardiovascular disease. However, current approaches to the diagnosis of cardiovascular diseases often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity in quantifying complex structural and functional phenotypes. Deep learning approaches have recently achieved success for tasks such as classification or segmentation of medical images, but lack interpretability in the feature extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for automatic classification of images from patients with cardiac diseases associated with structural remodeling. The model leverages interpretable task-specific anatomic patterns learned from 3D segmentations. It further allows the learned pathology-specific remodeling patterns to be visualised and quantified in the original input space of the images. This approach yields high accuracy in the categorization of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-centre dataset (100%) as well as on the ACDC MICCAI 2017 dataset (90%). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians to improve diagnostic accuracy and enhance patient risk-stratification.

Conference paper

Bai W, Suzuki H, Qin C, Tarroni G, Oktay O, Matthews PM, Rueckert D et al., 2018, Recurrent neural networks for aortic image sequence segmentation with sparse annotations, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer Nature Switzerland AG, Pages: 586-594, ISSN: 0302-9743

Segmentation of image sequences is an important task in medical image analysis, which enables clinicians to assess the anatomy and function of moving organs. However, direct application of a segmentation algorithm to each time frame of a sequence may ignore the temporal continuity inherent in the sequence. In this work, we propose an image sequence segmentation algorithm by combining a fully convolutional network with a recurrent neural network, which incorporates both spatial and temporal information into the segmentation task. A key challenge in training this network is that the available manual annotations are temporally sparse, which forbids end-to-end training. We address this challenge by performing non-rigid label propagation on the annotations and introducing an exponentially weighted loss function for training. Experiments on aortic MR image sequences demonstrate that the proposed method significantly improves both accuracy and temporal smoothness of segmentation, compared to a baseline method that utilises spatial information only. It achieves an average Dice metric of 0.960 for the ascending aorta and 0.953 for the descending aorta.

Conference paper
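
The exponentially weighted loss described above can be illustrated with a small sketch: frames whose labels were propagated far from a manual annotation receive exponentially smaller weight. The decay form and the `tau` parameter are assumptions for illustration, not the paper's exact formulation:

```python
import numpy as np

def exponential_frame_weights(n_frames, annotated, tau=2.0):
    """Down-weight frames by temporal distance to the nearest manually
    annotated frame, so propagated labels far from an annotation count
    less in the training loss (decay form and tau are illustrative)."""
    t = np.arange(n_frames)
    dist = np.abs(t[:, None] - np.asarray(annotated)[None, :]).min(axis=1)
    return np.exp(-dist / tau)

# Sequence of 10 frames with manual annotations only at frames 0 and 9
w = exponential_frame_weights(10, annotated=[0, 9])
```

Annotated frames keep full weight 1.0, and the weight decays towards the middle of the sequence where propagated labels are least reliable.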

Qin C, Bai W, Schlemper J, Petersen SE, Piechnik SK, Neubauer S, Rueckert D et al., 2018, Joint motion estimation and segmentation from undersampled cardiac MR image, Machine Learning for Medical Image Reconstruction Workshop, Pages: 55-63, ISSN: 0302-9743

Accelerating the acquisition of magnetic resonance imaging (MRI) is a challenging problem, and many works have been proposed to reconstruct images from undersampled k-space data. However, if the main purpose is to extract certain quantitative measures from the images, perfect reconstructions may not always be necessary as long as the images enable the means of extracting the clinically relevant measures. In this paper, we work on jointly predicting cardiac motion estimation and segmentation directly from undersampled data, which are two important steps in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In particular, a unified model consisting of both a motion estimation branch and a segmentation branch is learned by optimising the two tasks simultaneously. Additional corresponding fully-sampled images are incorporated into the network as a parallel sub-network to enhance and guide the learning during the training process. Experimental results using cardiac MR images from 220 subjects show that the proposed model is robust to undersampled data and is capable of predicting results that are close to those from fully-sampled data, while bypassing the usual image reconstruction stage.

Conference paper

Meng Q, Baumgartner C, Sinclair M, Housden J, Rajchl M, Gomez A, Hou B, Toussaint N, Zimmer V, Tan J, Matthew J, Rueckert D, Schnabel J, Kainz B et al., 2018, Automatic shadow detection in 2D ultrasound images, International Workshop on Preterm, Perinatal and Paediatric Image Analysis, Pages: 66-75, ISSN: 0302-9743

Conference paper

Valindria V, Lavdas I, Cerrolaza J, Aboagye EO, Rockall A, Rueckert D, Glocker B et al., 2018, Small organ segmentation in whole-body MRI using a two-stage FCN and weighting schemes, International Workshop on Machine Learning in Medical Imaging (MLMI) 2018, Publisher: Springer Verlag, Pages: 346-354, ISSN: 0302-9743

Accurate and robust segmentation of small organs in whole-body MRI is difficult due to anatomical variation and class imbalance. Recent deep network based approaches have demonstrated promising performance on abdominal multi-organ segmentation. However, the performance on small organs is still suboptimal as these occupy only small regions of the whole-body volumes with unclear boundaries and variable shapes. A coarse-to-fine, hierarchical strategy is a common approach to alleviate this problem; however, it might miss useful contextual information. We propose a two-stage approach with weighting schemes based on auto-context and spatial atlas priors. Our experiments show that the proposed approach can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.

Conference paper

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2018, Automated cardiovascular magnetic resonance image analysis with fully convolutional networks, Journal of Cardiovascular Magnetic Resonance, Vol: 20, ISSN: 1097-6647

Background: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time-consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Methods: Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). Results: By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement was 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-ax

Journal article
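
The Dice metric reported in the abstract above is straightforward to compute for a pair of binary masks:

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Identical masks score 1.0; a mask that captures one of two
# foreground pixels scores 2 * 1 / (2 + 1) = 2/3.
score = dice(np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0]))
```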

Robinson R, Oktay O, Bai W, Valindria V, Sanghvi MM, Aung N, Paiva JM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Kainz B, Piechnik SK, Neubauer S, Petersen SE, Page C, Rueckert D, Glocker B et al., 2018, Real-time prediction of segmentation quality, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 578-585, ISSN: 0302-9743

Recent advances in deep learning based image segmentation methods have enabled real-time performance with human-level accuracy. However, occasionally even the best method fails due to low image quality, artifacts or unexpected behaviour of black box algorithms. Being able to predict segmentation quality in the absence of ground truth is of paramount importance in clinical practice, but also in large-scale studies to avoid the inclusion of invalid data in subsequent analysis. In this work, we propose two approaches of real-time automated quality control for cardiovascular MR segmentations using deep learning. First, we train a neural network on 12,880 samples to predict Dice Similarity Coefficients (DSC) on a per-case basis. We report a mean average error (MAE) of 0.03 on 1,610 test samples and 97% binary classification accuracy for separating low and high quality segmentations. Secondly, in the scenario where no manually annotated data is available, we train a network to predict DSC scores from estimated quality obtained via a reverse testing strategy. We report an MAE of 0.14 and 91% binary classification accuracy for this case. Predictions are obtained in real-time which, when combined with real-time segmentation methods, enables instant feedback on whether an acquired scan is analysable while the patient is still in the scanner. This further enables new applications of optimising image acquisition towards best possible analysis results.

Conference paper

Duan J, Schlemper J, Bai W, Dawes TJW, Bello G, Doumou G, De Marvao A, O'Regan DP, Rueckert D et al., 2018, Deep Nested Level Sets: Fully Automated Segmentation of Cardiac MR Images in Patients with Pulmonary Hypertension, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Pages: 595-603, ISSN: 0302-9743

Conference paper

Bruun M, Rhodius-Meester HFM, Koikkalainen J, Baroni M, Gjerum L, Lemstra AW, Barkhof F, Remes AM, Urhemaa T, Tolonen A, Rueckert D, van Gils M, Frederiksen KS, Waldemar G, Scheltens P, Mecocci P, Soininen H, Lötjönen J, Hasselbalch SG, van der Flier WM et al., 2018, Evaluating combinations of diagnostic tests to discriminate different dementia types, Alzheimer's and Dementia: Diagnosis, Assessment and Disease Monitoring, Vol: 10, Pages: 509-518, ISSN: 2352-8729

Introduction: We studied, using a data-driven approach, how different combinations of diagnostic tests contribute to the differential diagnosis of dementia. Methods: In this multicenter study, we included 356 patients with Alzheimer's disease, 87 frontotemporal dementia, 61 dementia with Lewy bodies, 38 vascular dementia, and 302 controls. We used a classifier to assess accuracy for individual performance and combinations of cognitive tests, cerebrospinal fluid biomarkers, and automated magnetic resonance imaging features for pairwise differentiation between dementia types. Results: Cognitive tests had good performance in separating any type of dementia from controls. Cerebrospinal fluid optimally contributed to identifying Alzheimer's disease, whereas magnetic resonance imaging features aided in separating vascular dementia, dementia with Lewy bodies, and frontotemporal dementia. Combining diagnostic tests increased the accuracy, with balanced accuracies ranging from 78% to 97%. Discussion: Different diagnostic tests have their distinct roles in differential diagnostics of dementias. Our results indicate that combining different diagnostic tests may increase the accuracy further.

Journal article

Bhuva A, Treibel TA, De Marvao A, Biffi C, Dawes T, Doumou G, Bai W, Oktay O, Jones S, Davies R, Chaturvedi N, Rueckert D, Hughes A, Moon JC, Manisty CH et al., 2018, Septal hypertrophy in aortic stenosis and its regression after valve replacement is more plastic in males than females: insights from 3D machine learning approach, European-Society-of-Cardiology Congress, Publisher: OXFORD UNIV PRESS, Pages: 1132-1132, ISSN: 0195-668X

Conference paper

Chen L, Carlton Jones AL, Mair G, Patel R, Gontsarova A, Ganesalingam J, Math N, Dawson A, Basaam A, Cohen D, Mehta A, Wardlaw J, Rueckert D, Bentley P et al., 2018, Rapid automated quantification of cerebral leukoaraiosis on CT: a multicentre validation study, Radiology, Vol: 288, Pages: 573-581, ISSN: 0033-8419

Purpose - To validate a fully-automated, machine-learning method (random forest) for segmenting cerebral white matter lesions (WML) on computerized tomography (CT). Materials and Methods – A retrospective sample of 1082 acute ischemic stroke cases was obtained, comprising unselected patients: 1) treated with thrombolysis; or 2) undergoing contemporaneous MR imaging and CT; and 3) a subset of IST-3 trial participants. Automated (‘Auto’) WML images were validated relative to experts’ manual tracings on CT, and co-registered FLAIR-MRI; and ratings using two conventional ordinal scales. Analyses included correlations between CT and MR imaging volumes, and agreements between Auto and expert ratings.Results - Auto WML volumes correlated strongly with expert-delineated WML volumes on MR imaging and on CT (r2=0.85, 0.71 respectively; p<0.001). Spatial-similarity of Auto-maps, relative to MRI-WML, was not significantly different to that of expert CT-WML tracings. Individual expert CT-WML volumes correlated well with each other (r2=0.85), but varied widely (range: 91% of mean estimate; median 11 cc; range: 0.2 – 68 cc). Agreements between Auto and consensus-expert ratings were superior or similar to agreements between individual pairs of experts (kappa: 0.60, 0.64 vs. 0.51, 0.67 for two score systems; p<0.01 for first comparison). Accuracy was unaffected by established infarction, acute ischemic changes, or atrophy (p>0.05). Auto preprocessing failure rate was 4%; rating errors occurred in a further 4%. Total Auto processing time averaged 109s (range: 79 - 140 s). Conclusion - An automated method for quantifying CT cerebral white matter lesions achieves a similar accuracy to experts in unselected and multicenter cohorts.

Journal article

Hou B, Khanal B, Alansary A, McDonagh S, Davidson A, Rutherford M, Hajnal J, Rueckert D, Glocker B, Kainz B et al., 2018, 3D reconstruction in canonical co-ordinate space from arbitrarily oriented 2D images, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 1737-1750, ISSN: 0278-0062

Limited capture range, and the requirement to provide high quality initialization for optimization-based 2D/3D image registration methods, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment, no spatial transform initialization is required. To find image transformations we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.

Journal article
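
The 3D rigid transformations regressed by such a network can be assembled from three Euler angles and a translation. The Z-Y-X rotation order below is an assumed convention for illustration, not necessarily the paper's parameterisation:

```python
import numpy as np

def rigid_from_params(rx, ry, rz, tx, ty, tz):
    """4x4 homogeneous matrix for a 6-DoF rigid transform (Euler-angle
    rotation followed by translation)."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx      # proper rotation, determinant +1
    T[:3, 3] = [tx, ty, tz]
    return T

T = rigid_from_params(0.1, -0.2, 0.05, 2.0, -1.0, 3.0)
```

A regression CNN of this kind would output the six scalar parameters, and the matrix would then map a 2D slice into the canonical atlas space.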

Parisot S, Ktena SI, Ferrante E, Lee M, Guerrero R, Glocker B, Rueckert D et al., 2018, Disease prediction using graph convolutional networks: application to Autism Spectrum Disorder and Alzheimer's disease, Medical Image Analysis, Vol: 48, Pages: 117-130, ISSN: 1361-8415

Graphs are widely used as a natural framework that captures interactions between individual elements represented as nodes in a graph. In medical applications, specifically, nodes can represent individuals within a potentially large population (patients or healthy controls) accompanied by a set of features, while the graph edges incorporate associations between subjects in an intuitive manner. This representation allows the wealth of imaging and non-imaging information, as well as individual subject features, to be incorporated simultaneously in disease classification tasks. Previous graph-based approaches for supervised or unsupervised learning in the context of disease prediction solely focus on pairwise similarities between subjects, disregarding individual characteristics and features, or rather rely on subject-specific imaging feature vectors and fail to model interactions between them. In this paper, we present a thorough evaluation of a generic framework that leverages both imaging and non-imaging information and can be used for brain analysis in large populations. This framework exploits Graph Convolutional Networks (GCNs) and involves representing populations as a sparse graph, where its nodes are associated with imaging-based feature vectors, while phenotypic information is integrated as edge weights. The extensive evaluation explores the effect of each individual component of this framework on disease prediction performance and further compares it to different baselines. The framework performance is tested on two large datasets with diverse underlying data, ABIDE and ADNI, for the prediction of Autism Spectrum Disorder and conversion to Alzheimer's disease, respectively. Our analysis shows that our novel framework can improve over state-of-the-art results on both databases, with 70.4% classification accuracy for ABIDE and 80.0% for ADNI.

Journal article
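
The core GCN operation on such a population graph, a symmetrically normalised adjacency with self-loops propagating node features, can be sketched as follows. The adjacency pattern and feature sizes below are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical population of 5 subjects: one imaging feature vector each
X = rng.normal(size=(5, 4))
# Assumed phenotypic adjacency: subjects 0-2 and 3-4 share e.g. sex/site
A = np.array([[0, 1, 1, 0, 0],
              [1, 0, 1, 0, 0],
              [1, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 0, 1, 0]], dtype=float)

def gcn_layer(A, X, W):
    """One graph-convolution step: add self-loops, symmetrically
    normalise the adjacency, propagate features, apply ReLU."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.maximum(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X @ W, 0.0)

H = gcn_layer(A, X, rng.normal(size=(4, 3)))   # (5, 3) node embeddings
```

Each subject's embedding thus mixes its own imaging features with those of phenotypically similar neighbours, which is the mechanism the framework exploits for disease prediction.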

Ledig C, Schuh A, Guerrero R, Heckemann RA, Rueckert D et al., 2018, Structural brain imaging in Alzheimer’s disease and mild cognitive impairment: biomarker analysis and shared morphometry database, Scientific Reports, Vol: 8, ISSN: 2045-2322

Magnetic resonance (MR) imaging is a powerful technique for non-invasive in-vivo imaging of the human brain. We employed a recently validated method for robust cross-sectional and longitudinal segmentation of MR brain images from the Alzheimer’s Disease Neuroimaging Initiative (ADNI) cohort. Specifically, we segmented 5074 MR brain images into 138 anatomical regions and extracted time-point specific structural volumes and volume change during follow-up intervals of 12 or 24 months. We assessed the extracted biomarkers by determining their power to predict diagnostic classification and by comparing atrophy rates to published meta-studies. The approach enables comprehensive analysis of structural changes within the whole brain. The discriminative power of individual biomarkers (volumes/atrophy rates) is on par with results published by other groups. We publish all quality-checked brain masks, structural segmentations, and extracted biomarkers along with this article. We further share the methodology for brain extraction (pincram) and segmentation (MALPEM, MALPEM4D) as open source projects with the community. The identified biomarkers hold great potential for deeper analysis, and the validated methodology can readily be applied to other imaging cohorts.

Journal article
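
The extracted atrophy-rate biomarkers reduce to a simple annualised percent-change formula. The definition below is an illustrative form; the study's exact biomarker definitions may differ:

```python
def annualised_atrophy_rate(v_baseline, v_followup, interval_months):
    """Percent volume change per year between two time points, one
    plausible (illustrative) definition of an atrophy-rate biomarker."""
    return 100.0 * (v_followup - v_baseline) / v_baseline * (12.0 / interval_months)

# e.g. a structure shrinking from 3.50 mL to 3.33 mL over 12 months
rate = annualised_atrophy_rate(3.50, 3.33, 12)
```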

Cerrolaza JJ, Li Y, Biffi C, Gomez A, Matthew J, Sinclair M, Gupta C, Knight CL, Rueckert D et al., 2018, Fetal Skull Reconstruction via Deep Convolutional Autoencoders, Conf Proc IEEE Eng Med Biol Soc, Vol: 2018, Pages: 887-890, ISSN: 1557-170X

Ultrasound (US) imaging is arguably the most commonly used modality for fetal screening. Recently, 3DUS has been progressively adopted in modern obstetric practice, showing promising diagnosis capabilities, and alleviating many of the inherent limitations of traditional 2DUS, such as subjectivity and operator dependence. However, the involuntary movements of the fetus, and the difficulty for the operator to inspect the entire volume in real-time, can hinder the acquisition of the entire region of interest. In this paper, we present two deep convolutional architectures for the reconstruction of the fetal skull in partially occluded 3DUS volumes: a TL deep convolutional network (TL-Net), and a conditional variational autoencoder (CVAE). The performance of the two networks was evaluated for occlusion rates up to 50%, both showing accurate results even when only 60% of the skull is included in the US volume (Dice coeff. 0.84 ± 0.04 for CVAE and 0.83 ± 0.03 for TL-Net). The reconstruction networks proposed here have the potential to optimize image acquisition protocols in obstetric sonography, reducing the acquisition time and providing comprehensive anatomical information even from partially occluded images.

Journal article

Kamnitsas K, Castro DC, Folgoc LL, Walker I, Tanno R, Rueckert D, Glocker B, Criminisi A, Nori AV et al., 2018, Semi-Supervised Learning via Compact Latent Space Clustering, International Conference on Machine Learning, Publisher: PMLR, Pages: 2464-2473

We present a novel loss function for semi-supervised learning of neural networks with a simple and effective regularization term based on compact clustering of the latent feature space. The key idea is to dynamically create a graph over both labeled and unlabeled training samples using Label Propagation (LP) to capture the underlying structure in the feature space and model its high and low density areas. The regularization attracts similar samples to form compact clusters and repulses dissimilar ones without applying strong forces to unconfident samples. Label confidence is directly obtained via LP, in contrast to using predictions from an imperfect classifier as in previous work. We evaluate our approach on three benchmarks and compare to state-of-the-art with promising results. Our method can be easily applied to any existing network architecture, enabling an effective use of unlabeled data for a wide range of applications.

Conference paper
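
Label propagation of the kind used above to obtain label confidences can be sketched on a toy similarity graph. This is Zhou-style propagation with a clamped label matrix; the graph and parameters are illustrative, not the paper's dynamically built latent-space graph:

```python
import numpy as np

def label_propagation(W, labels, n_labeled, alpha=0.9, n_iter=200):
    """Label propagation on a similarity graph W: the first n_labeled
    nodes inject one-hot labels, and every node receives soft labels
    (and hence confidences) from its neighbours."""
    n, c = len(W), labels.max() + 1
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))               # symmetric normalisation
    Y = np.zeros((n, c))
    Y[np.arange(n_labeled), labels[:n_labeled]] = 1.0
    F = Y.copy()
    for _ in range(n_iter):
        F = alpha * S @ F + (1 - alpha) * Y       # diffuse + re-inject labels
    return F / F.sum(axis=1, keepdims=True)       # row-normalised confidences

# Toy graph: nodes 0/1 are labeled (classes 0/1); node 2 is strongly
# tied to node 0 and node 3 to node 1, with weak cross edges.
W = np.array([[0.0, 0.1, 1.0, 0.0],
              [0.1, 0.0, 0.0, 1.0],
              [1.0, 0.0, 0.0, 0.1],
              [0.0, 1.0, 0.1, 0.0]])
F = label_propagation(W, np.array([0, 1]), n_labeled=2)
```

The unlabeled nodes end up with soft labels matching their strongly connected labeled neighbours, and the row values serve as the per-sample confidences the regularizer relies on.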

Valindria VV, Lavdas I, Bai W, Kamnitsas K, Aboagye EO, Rockall AG, Rueckert D, Glocker B et al., 2018, Domain adaptation for MRI organ segmentation using reverse classification accuracy, International Conference on Medical Imaging with Deep Learning (MIDL)

The variations in multi-center data in medical imaging studies have brought the necessity of domain adaptation. Despite the advancement of machine learning in automatic segmentation, performance often degrades when algorithms are applied on new data acquired from different scanners or sequences than the training data. Manual annotation is costly and time consuming if it has to be carried out for every new target domain. In this work, we investigate automatic selection of suitable subjects to be annotated for supervised domain adaptation using the concept of reverse classification accuracy (RCA). RCA predicts the performance of a trained model on data from the new domain, and different strategies of selecting subjects to be included in the adaptation via transfer learning are evaluated. We perform experiments on a two-center MR database for the task of organ segmentation. We show that subject selection via RCA can reduce the burden of annotation of new data for the target domain.

Conference paper

Rajchl M, Pawlowski N, Rueckert D, Matthews PM, Glocker B et al., 2018, NeuroNet: fast and robust reproduction of multiple brain image segmentation pipelines, International Conference on Medical Imaging with Deep Learning (MIDL), Publisher: MIDL

NeuroNet is a deep convolutional neural network mimicking multiple popular and state-of-the-art brain segmentation tools including FSL, SPM, and MALPEM. The network is trained on 5,000 T1-weighted brain MRI scans from the UK Biobank Imaging Study that have been automatically segmented into brain tissue and cortical and sub-cortical structures using the standard neuroimaging pipelines. Training a single model from these complementary and partially overlapping label maps yields a new powerful "all-in-one", multi-output segmentation tool. The processing time for a single subject is reduced by an order of magnitude compared to running each individual software package. We demonstrate very good reproducibility of the original outputs while increasing robustness to variations in the input data. We believe NeuroNet could be an important tool in large-scale population imaging studies and serve as a new standard in neuroscience by reducing the risk of introducing bias when choosing a specific software package.

Conference paper

Dawes TJW, Serrani M, Bai W, Cai J, Suzuki H, de Marvao A, Quinlan M, Tokarczuk P, Ostrowski P, Matthews P, Rueckert D, Cook S, Costantino ML, O'Regan D et al., 2018, Myocardial trabeculae improve left ventricular function: a combined UK Biobank and computational analysis, GAT Annual Scientific Meeting 2018, Publisher: Association of Anaesthetists of Great Britain and Ireland

Conference paper

Koch LM, Rajchl M, Bai W, Baumgartner CF, Tong T, Passerat-Palmbach J, Aljabar P, Rueckert D et al., 2018, Multi-atlas segmentation using partially annotated data: methods and annotation strategies, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 1683-1696, ISSN: 0162-8828

Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.

Journal article
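The partial-annotation idea above can be illustrated with a much-simplified label-fusion sketch. This is not the paper's method: it omits the pairwise MRF terms and registration entirely, and simply lets atlas voxels without a label (here marked with an assumed `-1` sentinel) cast no vote. The function name `fuse_partial_atlases` is hypothetical.

```python
import numpy as np

def fuse_partial_atlases(atlas_labels, n_classes, unlabelled=-1):
    """Majority-vote label fusion tolerating partially annotated atlases.

    atlas_labels: (n_atlases, H, W) integer label maps; voxels equal to
    `unlabelled` carry no vote (the atlas was not annotated there).
    Returns an (H, W) fused label map.
    """
    votes = np.zeros((n_classes,) + atlas_labels.shape[1:])
    for lab in atlas_labels:
        valid = lab != unlabelled          # only annotated voxels vote
        for c in range(n_classes):
            votes[c] += valid & (lab == c)
    return votes.argmax(axis=0)
```

In the paper's graph formulation this unary voting would be one component of an MRF energy, with pairwise terms enforcing spatial smoothness on the target image.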

Bruun M, Rhodius-Meester H, Baroni M, Gjerum L, Urhemaa T, Tolonen A, Rueckert D, van Gils M, Lemstra E, Barkhof F, Remes A, Frederiksen KS, Waldemar G, Scheltens P, Soininen H, Mecocci P, Koikkalainen J, Lotjonen J, Hasselbalch S, van der Flier W et al., 2018, Biomarkers in differential diagnosis of dementia using a data-driven approach, 4th Congress of the European-Academy-of-Neurology (EAN), Publisher: WILEY, Pages: 278-278, ISSN: 1351-5101

Conference paper

Sinclair M, Baumgartner CF, Matthew J, Bai W, Martinez JC, Li Y, Smith S, Knight CL, Kainz B, Hajnal J, King AP, Rueckert D et al., 2018, Human-level Performance On Automatic Head Biometrics In Fetal Ultrasound Using Fully Convolutional Neural Networks, International Engineering in Medicine and Biology Conference

Measurement of head biometrics from fetal ultrasonography images is of keyimportance in monitoring the healthy development of fetuses. However, theaccurate measurement of relevant anatomical structures is subject to largeinter-observer variability in the clinic. To address this issue, an automatedmethod utilizing Fully Convolutional Networks (FCN) is proposed to determinemeasurements of fetal head circumference (HC) and biparietal diameter (BPD). AnFCN was trained on approximately 2000 2D ultrasound images of the head withannotations provided by 45 different sonographers during routine screeningexaminations to perform semantic segmentation of the head. An ellipse is fittedto the resulting segmentation contours to mimic the annotation typicallyproduced by a sonographer. The model's performance was compared withinter-observer variability, where two experts manually annotated 100 testimages. Mean absolute model-expert error was slightly better thaninter-observer error for HC (1.99mm vs 2.16mm), and comparable for BPD (0.61mmvs 0.59mm), as well as Dice coefficient (0.980 vs 0.980). Our resultsdemonstrate that the model performs at a level similar to a human expert, andlearns to produce accurate predictions from a large dataset annotated by manysonographers. Additionally, measurements are generated in near real-time at15fps on a GPU, which could speed up clinical workflow for both skilled andtrainee sonographers.

Conference paper
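Deriving HC and BPD from an ellipse fitted to a head segmentation can be sketched as follows. This is an illustrative stand-in, not the paper's pipeline: it fits the moment-equivalent ellipse of a binary mask rather than fitting to the segmentation contour, takes BPD as the ellipse minor axis (a simplification of the clinical measurement), and reports pixel units. The function name `head_biometrics` is hypothetical.

```python
import numpy as np

def head_biometrics(mask):
    """Estimate head circumference (HC) and biparietal diameter (BPD)
    from a binary head segmentation via its moment-equivalent ellipse.

    For a uniform filled ellipse, each covariance eigenvalue equals the
    squared semi-axis over 4, so 2*sqrt(eigenvalue) recovers the semi-axes.
    HC uses Ramanujan's second perimeter approximation.
    """
    ys, xs = np.nonzero(mask)
    pts = np.stack([xs, ys], axis=1).astype(float)
    cov = np.cov(pts, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)       # ascending order
    b, a = 2.0 * np.sqrt(eigvals)           # semi-minor, semi-major axes
    h = ((a - b) / (a + b)) ** 2
    hc = np.pi * (a + b) * (1.0 + 3.0 * h / (10.0 + np.sqrt(4.0 - 3.0 * h)))
    return hc, 2.0 * b                      # circumference, minor-axis diameter
```

In practice the pixel measurements would be scaled by the probe's physical resolution to obtain millimetres.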

Cerrolaza JJ, Sinclair M, Li Y, Gomez A, Ferrante E, Matthew J, Gupta C, Knight CL, Rueckert D et al., 2018, Deep learning with ultrasound physics for fetal skull segmentation, 15th IEEE International Symposium on Biomedical Imaging (ISBI), Publisher: Institute of Electrical and Electronics Engineers, Pages: 564-567, ISSN: 1945-7928

2D ultrasound (US) is still the preferred imaging method for fetal screening. However, 2D biometrics are significantly affected by the inter/intra-observer variability and operator dependence of a traditionally manual procedure. 3DUS is an alternative emerging modality with the potential to alleviate many of these problems. This paper presents a new automatic framework for skull segmentation in fetal 3DUS. We propose a two-stage convolutional neural network (CNN) able to incorporate additional contextual and structural information into the segmentation process. In the first stage of the CNN, a partial reconstruction of the skull is obtained, segmenting only those regions visible in the original US volume. From this initial segmentation, two additional channels of information are computed inspired by the underlying physics of US image acquisition: an angle incidence map and a shadow casting map. These additional information channels are combined in the second stage of the CNN to provide a complete segmentation of the skull, able to compensate for the fading and shadowing artefacts observed in the original US image. The performance of the new segmentation architecture was evaluated on a dataset of 66 cases, obtaining an average Dice coefficient of 0.83 ± 0.06. Finally, we also evaluated the clinical potential of the new 3DUS-based analysis framework for the assessment of cranial deformation, significantly outperforming traditional 2D biometrics (100% vs. 50% specificity, respectively).

Conference paper
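A hypothetical sketch of the shadow-casting channel described above: the paper derives it from the physics of US acquisition along the probe's scan lines, whereas this toy version assumes straight vertical scan lines and marks everything distal to segmented bone in the first-stage (partial) skull segmentation as shadowed. The function name `shadow_casting_map` is an assumption.

```python
import numpy as np

def shadow_casting_map(partial_seg):
    """Mark pixels lying distal to segmented bone along each (assumed
    vertical) scan line, so a later stage can account for signal loss
    behind the skull.
    """
    seg = partial_seg.astype(bool)
    # Cumulative OR down each column: once bone is seen, all deeper
    # pixels on that scan line are flagged.
    below = np.maximum.accumulate(seg, axis=0)
    return below & ~seg   # shadow starts distal to the bone itself
```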

Oksuz I, Ruijsink B, Puyol-Anton E, Sinclair M, Rueckert D, Schnabel JA, King AP et al., 2018, Automatic left ventricular outflow tract classification for accurate cardiac MR planning, 15th IEEE International Symposium on Biomedical Imaging (ISBI), Publisher: Institute of Electrical and Electronics Engineers, Pages: 462-465, ISSN: 1945-7928

Cardiac MR planning is important to ensure high quality image data and to enable accurate quantification of cardiac function. One result of inaccurate planning is an 'off-axis' orientation of the 4-chamber view, often recognized by the presence of the left ventricular outflow tract (LVOT). This can lead to difficulties in the assessment of atrial volumes and septal wall motion, either manually by experts or by automated image analysis algorithms. For large datasets such as the UK Biobank, manual labelling is tedious, and automated analysis pipelines that include automatic image quality assessment need to be developed. In this paper, we propose a method to automatically detect the presence of the LVOT in cardiac MRI, which can aid in identifying poorly planned 4-chamber images. Our method is based on Convolutional Neural Networks (CNNs) and is able to detect the LVOT in 4-chamber images in less than 1ms. We test our algorithm on a subset of the UK Biobank dataset (246 cardiac MR images) and achieve an average accuracy of 83%. We compare our approach to a range of state-of-the-art classification methods.

Conference paper

Oktay O, Schlemper J, Folgoc LL, Lee MCH, Heinrich MP, Misawa K, Mori K, McDonagh SG, Hammerla NY, Kainz B, Glocker B, Rueckert D et al., 2018, Attention U-Net: Learning Where to Look for the Pancreas, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Schlemper J, Oktay O, Chen L, Matthew J, Knight CL, Kainz B, Glocker B, Rueckert D et al., 2018, Attention-Gated Networks for Improving Ultrasound Scan Plane Detection, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Chen L, Bentley P, Mori K, Misawa K, Fujiwara M, Rueckert D et al., 2018, DRINet for medical image segmentation, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 2453-2462, ISSN: 0278-0062

Convolutional neural networks (CNNs) have revolutionized medical image analysis over the past few years. The U-Net architecture is one of the most well-known CNN architectures for semantic segmentation and has achieved remarkable successes in many different medical image segmentation applications. The U-Net architecture consists of standard convolution layers, pooling layers, and upsampling layers. These convolution layers learn representative features of input images and construct segmentations based on the features. However, the features learned by standard convolution layers are not distinctive when the differences among different categories are subtle in terms of intensity, location, shape, and size. In this paper, we propose a novel CNN architecture, called Dense-Res-Inception Net (DRINet), which addresses this challenging problem. The proposed DRINet consists of three blocks, namely a convolutional block with dense connections, a deconvolutional block with residual Inception modules, and an unpooling block. Our proposed architecture outperforms the U-Net in three different challenging applications, namely multi-class segmentation of cerebrospinal fluid (CSF) on brain CT images, multi-organ segmentation on abdominal CT images, and multi-class brain tumour segmentation on MR images.

Journal article
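Dense connections, in which each layer receives the concatenation of the block input and every preceding layer's output, are the "Dense" ingredient of DRINet's convolutional block. A minimal sketch of just this connectivity pattern, not the actual DRINet layers: the `layers` argument here is an arbitrary list of callables standing in for convolutions, and the channel axis is assumed to be axis 0.

```python
import numpy as np

def dense_block(x, layers):
    """Dense connectivity: each layer sees the concatenation of the block
    input and all preceding layers' outputs along the channel axis, and
    the block returns the concatenation of everything produced."""
    features = [x]
    for layer in layers:
        out = layer(np.concatenate(features, axis=0))
        features.append(out)
    return np.concatenate(features, axis=0)
```

The channel count thus grows by each layer's output width (the "growth rate"), which is what lets later layers reuse early, fine-grained features.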

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
