Imperial College London

Dr Ben Glocker

Faculty of Engineering, Department of Computing

Professor in Machine Learning for Imaging
 
 
 

Contact

 

+44 (0)20 7594 8334 · b.glocker

 
 

Location

 

377 Huxley Building, South Kensington Campus



 

Publications


352 results found

Arslan S, Ktena SI, Glocker B, Rueckert D et al., 2018, Graph saliency maps through spectral convolutional networks: application to sex classification with brain connectivity, International Workshop on Graphs in Biomedical Image Analysis, Publisher: Springer Verlag, ISSN: 0302-9743

Graph convolutional networks (GCNs) allow traditional convolution operations to be applied in non-Euclidean domains, where data are commonly modelled as irregular graphs. Medical imaging and, in particular, neuroscience studies often rely on such graph representations, with brain connectivity networks being a characteristic example, while ultimately seeking the locus of phenotypic or disease-related differences in the brain. These regions of interest (ROIs) are, then, considered to be closely associated with function and/or behaviour. Driven by this, we explore GCNs for the task of ROI identification and propose a visual attribution method based on class activation mapping. By undertaking a sex classification task as proof of concept, we show that this method can be used to identify salient nodes (brain regions) without prior node labels. Based on experiments conducted on neuroimaging data of more than 5000 participants from UK Biobank, we demonstrate the robustness of the proposed method in highlighting reproducible regions across individuals. We further evaluate the neurobiological relevance of the identified regions based on evidence from large-scale UK Biobank studies.
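The class-activation idea behind the paper's saliency maps can be reduced to a few lines: score each node by its contribution to the class logit of a trained GCN. This is an illustrative numpy sketch, not the authors' implementation; the names (`H`, `w_class`) and shapes are hypothetical.

```python
import numpy as np

def node_saliency(H, w_class):
    """Class-activation-style saliency for graph nodes.

    H       : (n_nodes, d) final-layer node embeddings from a trained GCN
    w_class : (d,) classifier weights for the class of interest
    Returns one saliency score per node, rescaled to [0, 1].
    """
    s = H @ w_class                 # each node's contribution to the class logit
    s = np.maximum(s, 0.0)          # keep positive evidence only, as in CAM
    rng = s.max() - s.min()
    return (s - s.min()) / rng if rng > 0 else np.zeros_like(s)
```

High-scoring nodes would correspond to the salient brain regions the paper reports, without requiring any node-level labels.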

Conference paper

Valindria V, Lavdas I, Cerrolaza J, Aboagye EO, Rockall A, Rueckert D, Glocker B et al., 2018, Small organ segmentation in whole-body MRI using a two-stage FCN and weighting schemes, International Workshop on Machine Learning in Medical Imaging (MLMI) 2018, Publisher: Springer Verlag, Pages: 346-354, ISSN: 0302-9743

Accurate and robust segmentation of small organs in whole-body MRI is difficult due to anatomical variation and class imbalance. Recent deep network based approaches have demonstrated promising performance on abdominal multi-organ segmentation. However, the performance on small organs is still suboptimal as these occupy only small regions of the whole-body volumes with unclear boundaries and variable shapes. A coarse-to-fine, hierarchical strategy is a common approach to alleviate this problem; however, it might miss useful contextual information. We propose a two-stage approach with weighting schemes based on auto-context and spatial atlas priors. Our experiments show that the proposed approach can boost the segmentation accuracy of multiple small organs in whole-body MRI scans.

Conference paper

Ferrante E, Oktay O, Glocker B, Milone DH et al., 2018, On the adaptability of unsupervised CNN-based deformable image registration to unseen image domains, International Workshop on Machine Learning in Medical Imaging (MLMI), Publisher: Springer Verlag, Pages: 294-302, ISSN: 0302-9743

Deformable image registration is a fundamental problem in medical image analysis. During the last years, several methods based on deep convolutional neural networks (CNN) proved to be highly accurate to perform this task. These models achieved state-of-the-art accuracy while drastically reducing the required computational time, but mainly focusing on images of specific organs and modalities. To date, no work has reported on how these models adapt across different domains. In this work, we ask the question: can we use CNN-based registration models to spatially align images coming from a domain different from the one(s) used at training time? We explore the adaptability of CNN-based image registration to different organs/modalities. We employ a fully convolutional architecture trained following an unsupervised approach. We consider a simple transfer learning strategy to study the generalisation of such a model to unseen target domains, and devise a one-shot learning scheme taking advantage of the unsupervised nature of the proposed method. Evaluation on two publicly available datasets of X-Ray lung images and cardiac cine magnetic resonance sequences is provided. Our experiments suggest that models learned in different domains can be transferred at the expense of a decrease in performance, and that one-shot learning in the context of unsupervised CNN-based registration is a valid alternative to achieve consistent registration performance when only a pair of images from the target domain is available.
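Unsupervised registration networks of the kind evaluated here are typically trained on an image-similarity loss between the fixed image and the warped moving image; one-shot adaptation fine-tunes on that same loss for a single image pair. A common choice is (negative) normalised cross-correlation, sketched below in numpy; the exact loss used by the paper is not specified here, so treat this as a generic illustration.

```python
import numpy as np

def ncc(fixed, warped, eps=1e-8):
    """Normalised cross-correlation between a fixed image and a warped
    moving image; 1.0 means a perfect match. Its negative serves as a
    typical unsupervised registration loss."""
    f = fixed.ravel() - fixed.mean()
    m = warped.ravel() - warped.mean()
    return float((f @ m) / (np.linalg.norm(f) * np.linalg.norm(m) + eps))
```

Because no ground-truth deformations are needed, the same objective can drive both initial training and one-shot fine-tuning on a new domain.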

Conference paper

Robinson R, Oktay O, Bai W, Valindria V, Sanghvi MM, Aung N, Paiva JM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Kainz B, Piechnik SK, Neubauer S, Petersen SE, Page C, Rueckert D, Glocker B et al., 2018, Real-time prediction of segmentation quality, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 578-585, ISSN: 0302-9743

Recent advances in deep learning based image segmentation methods have enabled real-time performance with human-level accuracy. However, occasionally even the best method fails due to low image quality, artifacts or unexpected behaviour of black box algorithms. Being able to predict segmentation quality in the absence of ground truth is of paramount importance in clinical practice, but also in large-scale studies to avoid the inclusion of invalid data in subsequent analysis. In this work, we propose two approaches of real-time automated quality control for cardiovascular MR segmentations using deep learning. First, we train a neural network on 12,880 samples to predict Dice Similarity Coefficients (DSC) on a per-case basis. We report a mean average error (MAE) of 0.03 on 1,610 test samples and 97% binary classification accuracy for separating low and high quality segmentations. Secondly, in the scenario where no manually annotated data is available, we train a network to predict DSC scores from estimated quality obtained via a reverse testing strategy. We report an MAE = 0.14 and 91% binary classification accuracy for this case. Predictions are obtained in real-time which, when combined with real-time segmentation methods, enables instant feedback on whether an acquired scan is analysable while the patient is still in the scanner. This further enables new applications of optimising image acquisition towards best possible analysis results.
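The quantity the paper's network learns to predict is the Dice Similarity Coefficient; for reference, a minimal numpy implementation for binary masks (the multi-class case averages this over labels):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|), ranging from 0 (disjoint) to 1 (identical)."""
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + eps)
```

At test time the network estimates this score without access to a ground-truth mask, which is what makes real-time quality control possible.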

Conference paper

Hou B, Khanal B, Alansary A, McDonagh S, Davidson A, Rutherford M, Hajnal J, Rueckert D, Glocker B, Kainz B et al., 2018, 3D reconstruction in canonical co-ordinate space from arbitrarily oriented 2D images, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 1737-1750, ISSN: 0278-0062

Limited capture range, and the requirement to provide high quality initialization for optimization-based 2D/3D image registration methods, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment, no spatial transform initialization is required. To find image transformations we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.

Journal article

Parisot S, Ktena SI, Ferrante E, Lee M, Guerrero R, Glocker B, Rueckert D et al., 2018, Disease prediction using graph convolutional networks: application to Autism Spectrum Disorder and Alzheimer's disease, Medical Image Analysis, Vol: 48, Pages: 117-130, ISSN: 1361-8415

Graphs are widely used as a natural framework that captures interactions between individual elements represented as nodes in a graph. In medical applications, specifically, nodes can represent individuals within a potentially large population (patients or healthy controls) accompanied by a set of features, while the graph edges incorporate associations between subjects in an intuitive manner. This representation allows the wealth of imaging and non-imaging information, as well as individual subject features, to be incorporated simultaneously in disease classification tasks. Previous graph-based approaches for supervised or unsupervised learning in the context of disease prediction solely focus on pairwise similarities between subjects, disregarding individual characteristics and features, or rather rely on subject-specific imaging feature vectors and fail to model interactions between them. In this paper, we present a thorough evaluation of a generic framework that leverages both imaging and non-imaging information and can be used for brain analysis in large populations. This framework exploits Graph Convolutional Networks (GCNs) and involves representing populations as a sparse graph, where its nodes are associated with imaging-based feature vectors, while phenotypic information is integrated as edge weights. The extensive evaluation explores the effect of each individual component of this framework on disease prediction performance and further compares it to different baselines. The framework performance is tested on two large datasets with diverse underlying data, ABIDE and ADNI, for the prediction of Autism Spectrum Disorder and conversion to Alzheimer's disease, respectively. Our analysis shows that our novel framework can improve over state-of-the-art results on both databases, with 70.4% classification accuracy for ABIDE and 80.0% for ADNI.
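The key construction — a sparse population graph whose edges combine imaging-feature similarity with phenotypic agreement — can be sketched in numpy. The particular similarity (feature correlation) and the gating rule below are illustrative choices, not necessarily those of the paper:

```python
import numpy as np

def population_adjacency(X, phenotypes, threshold=0.5):
    """Sparse population graph: nodes are subjects with imaging features
    X (n, d); an edge weight combines feature similarity with agreement
    on categorical phenotypes (e.g. sex, acquisition site).

    phenotypes : (n, p) array of categorical codes.
    """
    n = X.shape[0]
    Xc = (X - X.mean(1, keepdims=True)) / (X.std(1, keepdims=True) + 1e-8)
    sim = (Xc @ Xc.T) / X.shape[1]            # feature correlation in [-1, 1]
    agree = np.zeros((n, n))
    for k in range(phenotypes.shape[1]):
        agree += (phenotypes[:, k][:, None] == phenotypes[:, k][None, :])
    W = sim * agree                           # phenotypes gate imaging similarity
    W[W < threshold] = 0.0                    # sparsify weak connections
    np.fill_diagonal(W, 0.0)
    return W
```

A GCN then propagates each subject's imaging features along these phenotype-weighted edges when predicting the disease label.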

Journal article

Korkinof D, Rijken T, O'Neill M, Yearsley J, Harvey H, Glocker B et al., 2018, High-resolution mammogram synthesis using progressive generative adversarial networks

The ability to generate synthetic medical images is useful for data augmentation, domain transfer, and out-of-distribution detection. However, generating realistic, high-resolution medical images is challenging, particularly for Full Field Digital Mammograms (FFDM), due to the textural heterogeneity, fine structural details and specific tissue properties. In this paper, we explore the use of progressively trained generative adversarial networks (GANs) to synthesize mammograms, overcoming the underlying instabilities when training such adversarial models. This work is the first to show that generation of realistic synthetic medical images is feasible at up to 1280x1024 pixels, the highest resolution achieved for medical image synthesis, enabling visualizations within standard mammographic hanging protocols. We hope this work can serve as a useful guide and facilitate further research on GANs in the medical imaging domain.

Working paper

Valindria VV, Lavdas I, Bai W, Kamnitsas K, Aboagye EO, Rockall AG, Rueckert D, Glocker B et al., 2018, Domain adaptation for MRI organ segmentation using reverse classification accuracy, International Conference on Medical Imaging with Deep Learning (MIDL)

The variations in multi-center data in medical imaging studies have made domain adaptation a necessity. Despite the advancement of machine learning in automatic segmentation, performance often degrades when algorithms are applied on new data acquired from different scanners or sequences than the training data. Manual annotation is costly and time consuming if it has to be carried out for every new target domain. In this work, we investigate automatic selection of suitable subjects to be annotated for supervised domain adaptation using the concept of reverse classification accuracy (RCA). RCA predicts the performance of a trained model on data from the new domain, and different strategies of selecting subjects to be included in the adaptation via transfer learning are evaluated. We perform experiments on a two-center MR database for the task of organ segmentation. We show that subject selection via RCA can reduce the burden of annotation of new data for the target domain.
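The RCA idea — estimate quality on an unlabelled image by training a "reverse" model on its prediction and scoring that model on labelled reference data — can be caricatured with a one-parameter segmenter. The real method retrains a full segmenter; this toy threshold version only illustrates the control flow, and all names are hypothetical:

```python
import numpy as np

def dice(a, b):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / max(a.sum() + b.sum(), 1)

def rca_proxy(image, pred_mask, references):
    """Reverse classification accuracy, radically simplified: fit a
    threshold segmenter to the *predicted* mask of an unlabelled image,
    run it on reference images that do have ground truth, and use the
    best Dice as a proxy for the unknown quality of `pred_mask`.

    references : list of (ref_image, ref_gt_mask) pairs.
    """
    fg = image[pred_mask.astype(bool)]
    bg = image[~pred_mask.astype(bool)]
    t = 0.5 * (fg.mean() + bg.mean())   # the "reverse model" is one threshold
    return max(dice(ref_img > t, ref_gt) for ref_img, ref_gt in references)
```

If the prediction on the new image is good, the reverse model it induces should also segment the reference images well, so a high proxy score indicates a trustworthy segmentation.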

Conference paper

Rajchl M, Pawlowski N, Rueckert D, Matthews PM, Glocker B et al., 2018, NeuroNet: fast and robust reproduction of multiple brain image segmentation pipelines, International Conference on Medical Imaging with Deep Learning (MIDL), Publisher: MIDL

NeuroNet is a deep convolutional neural network mimicking multiple popular and state-of-the-art brain segmentation tools including FSL, SPM, and MALPEM. The network is trained on 5,000 T1-weighted brain MRI scans from the UK Biobank Imaging Study that have been automatically segmented into brain tissue and cortical and sub-cortical structures using the standard neuroimaging pipelines. Training a single model from these complementary and partially overlapping label maps yields a new powerful "all-in-one", multi-output segmentation tool. The processing time for a single subject is reduced by an order of magnitude compared to running each individual software package. We demonstrate very good reproducibility of the original outputs while increasing robustness to variations in the input data. We believe NeuroNet could be an important tool in large-scale population imaging studies and serve as a new standard in neuroscience by reducing the risk of introducing bias when choosing a specific software package.

Conference paper

Chen X, Pawlowski N, Rajchl M, Glocker B, Konukoglu E et al., 2018, Deep generative models in the real-world: an open challenge from medical imaging

Recent advances in deep learning led to novel generative modeling techniques that achieve unprecedented quality in generated samples and performance in learning complex distributions in imaging data. These new models in medical image computing have important applications that form clinically relevant and very challenging unsupervised learning problems. In this paper, we explore the feasibility of using state-of-the-art auto-encoder-based deep generative models, such as variational and adversarial auto-encoders, for one such task: abnormality detection in medical imaging. We utilize typical, publicly available datasets with brain scans from healthy subjects and patients with stroke lesions and brain tumors. We use the data from healthy subjects to train different auto-encoder based models to learn the distribution of healthy images and detect pathologies as outliers. Models that can better learn the data distribution should be able to detect outliers more accurately. We evaluate the detection performance of deep generative models and compare them with non-deep learning based approaches to provide a benchmark of the current state of research. We conclude that abnormality detection is a challenging task for deep generative models and that substantial room for improvement exists. In order to facilitate further research, we aim to make carefully pre-processed imaging data available to the research community.
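The underlying recipe — fit a model of healthy data only, then flag inputs it reconstructs poorly as outliers — is easiest to see with a linear "auto-encoder" (PCA). This stand-in is far simpler than the variational and adversarial auto-encoders the paper evaluates, but the scoring logic is the same:

```python
import numpy as np

def fit_linear_ae(X_healthy, k=2):
    """A linear 'auto-encoder' (PCA) fitted on healthy data only."""
    mu = X_healthy.mean(0)
    U, S, Vt = np.linalg.svd(X_healthy - mu, full_matrices=False)
    return mu, Vt[:k]                 # top-k principal axes = encoder/decoder

def anomaly_score(x, mu, comps):
    """Reconstruction error: large for inputs far from the healthy manifold."""
    z = (x - mu) @ comps.T            # encode
    recon = mu + z @ comps            # decode
    return float(np.linalg.norm(x - recon))
```

A pathological scan, being off the learned healthy manifold, should reconstruct badly and thus receive a high anomaly score; the paper's finding is that deep versions of this scheme are still far from reliable.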

Working paper

Oktay O, Schlemper J, Folgoc LL, Lee MCH, Heinrich MP, Misawa K, Mori K, McDonagh SG, Hammerla NY, Kainz B, Glocker B, Rueckert D et al., 2018, Attention U-Net: Learning Where to Look for the Pancreas, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Schlemper J, Oktay O, Chen L, Matthew J, Knight CL, Kainz B, Glocker B, Rueckert D et al., 2018, Attention-Gated Networks for Improving Ultrasound Scan Plane Detection, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Valindria V, Pawlowski N, Rajchl M, Lavdas I, Aboagye EO, Rockall A, Rueckert D, Glocker B et al., 2018, Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI, IEEE Winter Conference on Applications of Computer Vision, Publisher: IEEE

Convolutional neural networks have been widely used in medical image segmentation. The amount of training data strongly determines the overall performance. Most approaches are applied for a single imaging modality, e.g., brain MRI. In practice, it is often difficult to acquire sufficient training data of a certain imaging modality. The same anatomical structures, however, may be visible in different modalities such as major organs on abdominal CT and MRI. In this work, we investigate the effectiveness of learning from multiple modalities to improve the segmentation accuracy on each individual modality. We study the feasibility of using a dual-stream encoder-decoder architecture to learn modality-independent, and thus, generalisable and robust features. All of our MRI and CT data are unpaired, which means they are obtained from different subjects and not registered to each other. Experiments show that multi-modal learning can improve overall accuracy over modality-specific training. Results demonstrate that information across modalities can in particular improve performance on varying structures such as the spleen.

Conference paper

Kamnitsas K, Bai W, Ferrante E, McDonagh SG, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D, Glocker B et al., 2018, Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation, MICCAI BrainLes Workshop

Conference paper

Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook S, de Marvao A, Dawes T, O'Regan D, Kainz B, Glocker B, Rueckert D et al., 2018, Anatomically Constrained Neural Networks (ACNN): application to cardiac image enhancement and segmentation, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 384-395, ISSN: 0278-0062

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning based techniques. However, in most recent and promising techniques such as CNN based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learnt deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
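The ACNN training strategy amounts to adding a penalty that compares prediction and ground truth in the latent space of a shape model. A minimal sketch of such a combined objective, assuming `encode` stands in for the pretrained shape auto-encoder the paper learns end-to-end (here it is any callable, purely illustrative):

```python
import numpy as np

def acnn_loss(pred, target, encode, lam=0.1):
    """Segmentation loss with an anatomical shape prior, in the spirit
    of ACNN: a pixel-wise cross-entropy term plus a penalty comparing
    prediction and ground truth in the latent space of a shape encoder.

    pred, target : arrays of foreground probabilities / binary labels
    encode       : callable mapping a mask to a latent shape code
    """
    eps = 1e-8
    pred = np.clip(pred, eps, 1 - eps)
    ce = -np.mean(target * np.log(pred) + (1 - target) * np.log(1 - pred))
    shape = np.sum((encode(pred) - encode(target)) ** 2)   # global shape term
    return ce + lam * shape
```

The pixel-wise term alone treats pixels independently; the latent-space term is what pushes predictions toward globally plausible anatomy.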

Journal article

Glocker B, Yao J, Vrtovec T, Frangi AF, Zheng G et al., 2018, Preface

Other

Kamnitsas K, Castro DCD, Folgoc LL, Walker I, Tanno R, Rueckert D, Glocker B, Criminisi A, Nori AV et al., 2018, Semi-Supervised Learning via Compact Latent Space Clustering, Publisher: PMLR, Pages: 2464-2473

Conference paper

Ktena SI, Parisot S, Ferrante E, Rajchl M, Lee M, Glocker B, Rueckert D et al., 2017, Metric learning with spectral graph convolutions on brain connectivity networks, NeuroImage, Vol: 169, Pages: 431-442, ISSN: 1053-8119

Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods.
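The building block a siamese graph network applies to both members of a pair is a spectral graph convolution; its widely used first-order form can be written in a few lines of numpy. This sketch shows the propagation rule only, not the paper's full siamese architecture or its metric-learning loss:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One first-order spectral graph convolution:
    H' = relu(D^-1/2 (A + I) D^-1/2 H W),
    where A is the adjacency matrix, H the node features and W the
    learnable weights. Adding I keeps each node's own features in play.
    """
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)
```

In the siamese setting, two connectivity graphs are passed through shared layers of this form, and a distance between the resulting embeddings is trained to separate matching from non-matching pairs.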

Journal article

Maas AIR, Menon DK, Adelson PD, Andelic N, Bell MJ, Belli A, Bragge P, Brazinova A, Büki A, Chesnut RM, Citerio G, Coburn M, Cooper DJ, Crowder AT, Czeiter E, Czosnyka M, Diaz-Arrastia R, Dreier JP, Duhaime AC, Ercole A, van Essen TA, Feigin VL, Gao G, Giacino J, Gonzalez-Lara LE, Gruen RL, Gupta D, Hartings JA, Hill S, Jiang JY, Ketharanathan N, Kompanje EJO, Lanyon L, Laureys S, Lecky F, Levin H, Lingsma HF, Maegele M, Majdan M, Manley G, Marsteller J, Mascia L, McFadyen C, Mondello S, Newcombe V, Palotie A, Parizel PM, Peul W, Piercy J, Polinder S et al., 2017, Traumatic brain injury: integrated approaches to improve prevention, clinical care, and research, The Lancet Neurology, Vol: 16, Pages: 987-1048, ISSN: 1474-4422

Journal article

Ledig C, Kamnitsas K, Koikkalainen J, Posti JP, Takala RSK, Katila A, Frantzén J, Ala-Seppälä H, Kyllönen A, Maanpää H-R, Tallus J, Lötjönen J, Glocker B, Tenovuo O, Rueckert D et al., 2017, Regional brain morphometry in patients with traumatic brain injury based on acute- and chronic-phase magnetic resonance imaging, PLoS ONE, Vol: 12, ISSN: 1932-6203

Traumatic brain injury (TBI) is caused by a sudden external force and can be very heterogeneous in its manifestation. In this work, we analyse T1-weighted magnetic resonance (MR) brain images that were prospectively acquired from patients who sustained mild to severe TBI. We investigate the potential of a recently proposed automatic segmentation method to support the outcome prediction of TBI. Specifically, we extract meaningful cross-sectional and longitudinal measurements from acute- and chronic-phase MR images. We calculate regional volume and asymmetry features at the acute/subacute stage of the injury (median: 19 days after injury), to predict the disability outcome of 67 patients at the chronic disease stage (median: 229 days after injury). Our results indicate that small structural volumes in the acute stage (e.g. of the hippocampus, accumbens, amygdala) can be strong predictors for unfavourable disease outcome. Further, group differences in atrophy are investigated. We find that patients with unfavourable outcome show increased atrophy. Among patients with severe disability outcome we observed a significantly higher mean reduction of cerebral white matter (3.1%) as compared to patients with low disability outcome (0.7%).

Journal article

Pawlowski N, Ktena SI, Lee MCH, Kainz B, Rueckert D, Glocker B, Rajchl M et al., 2017, DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images

We present DLTK, a toolkit providing baseline implementations for efficient experimentation with deep learning methods on biomedical images. It builds on top of TensorFlow, and its high modularity and easy-to-use examples allow for low-threshold access to state-of-the-art implementations for typical medical imaging problems. A comparison of DLTK's reference implementations of popular network architectures for image segmentation demonstrates new top performance on the publicly available challenge data "Multi-Atlas Labeling Beyond the Cranial Vault". The average test Dice similarity coefficient of 81.5 exceeds the previously best performing CNN (75.7) and the accuracy of the challenge winning method (79.0).

Working paper

Suzuki HS, Gao HG, Bai WB, Evangelou EE, Glocker BG, O'Regan DO, Elliott PE, Matthews PMM et al., 2017, Abnormal brain white matter microstructure is associated with both pre-hypertension and hypertension, PLoS ONE, Vol: 12, ISSN: 1932-6203

Objectives: To characterize effects of chronically elevated blood pressure on the brain, we tested for brain white matter microstructural differences associated with normotension, pre-hypertension and hypertension in recently available brain magnetic resonance imaging data from 4659 participants without known neurological or psychiatric disease (62.3±7.4 yrs, 47.0% male) in UK Biobank.
Methods: For assessment of white matter microstructure, we used measures derived from neurite orientation dispersion and density imaging (NODDI), including the intracellular volume fraction (an estimate of neurite density) and isotropic volume fraction (an index of the relative extra-cellular water diffusion). To estimate differences associated specifically with blood pressure, we applied propensity score matching based on age, sex, educational level, body mass index, and history of smoking, diabetes mellitus and cardiovascular disease to perform separate contrasts of non-hypertensive (normotensive or pre-hypertensive, N = 2332) and hypertensive (N = 2337) individuals, and of normotensive (N = 741) and pre-hypertensive (N = 1581) individuals (p<0.05 after Bonferroni correction).
Results: The brain white matter intracellular volume fraction was significantly lower, and isotropic volume fraction was higher, in hypertensive relative to non-hypertensive individuals (N = 1559 each). The white matter isotropic volume fraction was also higher in pre-hypertensive than in normotensive individuals (N = 694 each) in the right superior longitudinal fasciculus and the right superior thalamic radiation, where the lower intracellular volume fraction was observed in the hypertensive relative to the non-hypertensive group.
Significance: Pathological processes associated with chronically elevated blood pressure are associated with imaging differences suggesting chronic alterations of white matter axonal structure that may affect cognitive functions even with pre-hypertension.
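Propensity score matching of the kind used here fits a model of P(hypertensive | covariates) and then pairs each case with the unused control whose propensity is closest. A compact numpy sketch with a gradient-descent logistic model and greedy nearest-neighbour matching; the study's actual matching procedure and software are not specified here, so this is only a generic illustration:

```python
import numpy as np

def propensity_match(X, treated, steps=2000, lr=0.1):
    """Propensity-score matching: fit a logistic model P(treated | X) by
    gradient descent, then greedily pair each treated subject with the
    unused control closest in propensity. Returns (treated_idx,
    control_idx) pairs. X holds the matching covariates (age, sex, ...).
    """
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # add intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):                          # logistic regression fit
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - treated) / len(treated)
    p = 1.0 / (1.0 + np.exp(-Xb @ w))
    controls = list(np.flatnonzero(treated == 0))
    pairs = []
    for i in np.flatnonzero(treated == 1):          # greedy 1:1 matching
        j = min(controls, key=lambda c: abs(p[c] - p[i]))
        controls.remove(j)
        pairs.append((int(i), int(j)))
    return pairs
```

Matching on the propensity rather than on all covariates at once is what makes 1:1 contrasts such as the hypertensive vs non-hypertensive comparison feasible at this scale.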

Journal article

Pawlowski N, Brock A, Lee MCH, Rajchl M, Glocker B et al., 2017, Implicit Weight Uncertainty in Neural Networks

Modern neural networks tend to be overconfident on unseen, noisy or incorrectly labelled data and do not produce meaningful uncertainty measures. Bayesian deep learning aims to address this shortcoming with variational approximations (such as Bayes by Backprop or Multiplicative Normalising Flows). However, current approaches have limitations regarding flexibility and scalability. We introduce Bayes by Hypernet (BbH), a new method of variational approximation that interprets hypernetworks as implicit distributions. It naturally uses neural networks to model arbitrarily complex distributions and scales to modern deep learning architectures. In our experiments, we demonstrate that our method achieves competitive accuracies and predictive uncertainties on MNIST and a CIFAR5 task, while being the most robust against adversarial attacks.
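The mechanics of treating a hypernetwork as an implicit weight distribution can be shown in miniature: sample noise, map it to main-network weights, and read predictive uncertainty off the spread of the resulting ensemble. The linear hypernet `G, b` below is untrained and purely illustrative, not the BbH model itself:

```python
import numpy as np

def hypernet_predict(x, G, b, n_samples=100, rng=None):
    """Bayes-by-Hypernet mechanics: instead of a point estimate, the
    weights of the main model are *generated* from noise by a hypernet
    (here a linear map z -> G z + b). Sampling many weight draws yields
    a predictive mean and an uncertainty estimate.
    """
    rng = rng or np.random.default_rng(0)
    preds = []
    for _ in range(n_samples):
        z = rng.standard_normal(G.shape[1])
        w = G @ z + b                    # hypernet: noise -> weights
        preds.append(x @ w)              # main model: a linear regressor
    preds = np.array(preds)
    return preds.mean(0), preds.std(0)   # predictive mean and uncertainty
```

In BbH the hypernet is trained variationally so that the induced weight distribution approximates the posterior; the sampling-and-averaging step at prediction time looks just like this.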

Working paper

Robinson EC, Garcia K, Glasser MF, Chen Z, Coalson TS, Makropoulos A, Bozek J, Wright R, Schuh A, Webster M, Hutter J, Price A, Grande LC, Hughes E, Tusor N, Bayly PV, Van Essen DC, Smith SM, Edwards AD, Hajnal J, Jenkinson M, Glocker B, Rueckert D et al., 2017, Multimodal surface matching with higher-order smoothness constraints, NeuroImage, Vol: 167, Pages: 453-465, ISSN: 1053-8119

In brain imaging, accurate alignment of cortical surfaces is fundamental to the statistical sensitivity and spatial localisation of group studies; and cortical surface-based alignment has generally been accepted to be superior to volume-based approaches at aligning cortical areas. However, human subjects have considerable variation in cortical folding, and in the location of functional areas relative to these folds. This makes alignment of cortical areas a challenging problem. The Multimodal Surface Matching (MSM) tool is a flexible, spherical registration approach that enables accurate registration of surfaces based on a variety of different features. Using MSM, we have previously shown that driving cross-subject surface alignment, using areal features, such as resting state-networks and myelin maps, improves group task fMRI statistics and map sharpness. However, the initial implementation of MSM's regularisation function did not penalize all forms of surface distortion evenly. In some cases, this allowed peak distortions to exceed neurobiologically plausible limits, unless regularisation strength was increased to a level which prevented the algorithm from fully maximizing surface alignment. Here we propose and implement a new regularisation penalty, derived from physically relevant equations of strain (deformation) energy, and demonstrate that its use leads to improved and more robust alignment of multimodal imaging data. In addition, since spherical warps incorporate projection distortions that are unavoidable when mapping from a convoluted cortical surface to the sphere, we also propose constraints that enforce smooth deformation of cortical anatomies. We test the impact of this approach for longitudinal modelling of cortical development for neonates (born between 31 and 43 weeks of post-menstrual age) and demonstrate that the proposed method increases the biological interpretability of the distortion fields and improves the statistical significance of populatio

Journal article

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2017, Automated cardiovascular magnetic resonance image analysis with fully convolutional networks

Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance on par with human experts in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images.
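Besides Dice, the evaluation relies on contour-distance metrics; the Hausdorff distance in particular captures the worst local disagreement between two boundaries. A minimal numpy version for contours given as 2D point sets (a brute-force sketch; real evaluations usually operate on surface voxels with spacing taken into account):

```python
import numpy as np

def hausdorff(P, Q):
    """Symmetric Hausdorff distance between two contours given as point
    sets P (n, 2) and Q (m, 2): the worst-case distance from a point on
    one contour to the nearest point on the other.
    """
    # Pairwise distance matrix via broadcasting, shape (n, m).
    D = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
    return float(max(D.min(axis=1).max(), D.min(axis=0).max()))
```

Unlike the overlap-based Dice metric, a single stray boundary point inflates this value, which is why both families of metrics are reported together.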

Journal article

Lavdas I, Glocker B, Kamnitsas K, Rueckert D, Mair H, Sandhu A, Taylor SA, Aboagye EO, Rockall AGet al., 2017, Fully automatic, multiorgan segmentation in normal whole body magnetic resonance imaging (MRI), using classification forests (CFs), convolutional neural networks (CNNs), and a multi-atlas (MA) approach., Med Phys, Vol: 44, Pages: 5210-5220

PURPOSE: As part of a program to implement automatic lesion detection methods for whole body magnetic resonance imaging (MRI) in oncology, we have developed, evaluated, and compared three algorithms for fully automatic, multiorgan segmentation in healthy volunteers. METHODS: The first algorithm is based on classification forests (CFs), the second is based on 3D convolutional neural networks (CNNs) and the third algorithm is based on a multi-atlas (MA) approach. We examined data from 51 healthy volunteers, scanned prospectively with a standardized, multiparametric whole body MRI protocol at 1.5 T. The study was approved by the local ethics committee and written consent was obtained from the participants. MRI data were used as input data to the algorithms, while training was based on manual annotation of the anatomies of interest by clinical MRI experts. Fivefold cross-validation experiments were run on 34 artifact-free subjects. We report three overlap and three surface distance metrics to evaluate the agreement between the automatic and manual segmentations, namely the Dice similarity coefficient (DSC), recall (RE), precision (PR), average surface distance (ASD), root-mean-square surface distance (RMSSD), and Hausdorff distance (HD). Analysis of variance was used to compare pooled label metrics between the three algorithms and the DSC on a 'per-organ' basis. A Mann-Whitney U test was used to compare the pooled metrics between CFs and CNNs and the DSC on a 'per-organ' basis, when using different imaging combinations as input for training. RESULTS: All three algorithms resulted in robust segmenters that were effectively trained using a relatively small number of datasets, an important consideration in the clinical setting. Mean overlap metrics for all the segmented structures were: CFs: DSC = 0.70 ± 0.18, RE = 0.73 ± 0.18, PR = 0.71 ± 0.14, CNNs: DSC = 0.81 ± 0.13, RE = 0.83 ± 0.14, PR = 0.82 ± 0.10, MA: DSC = 0.71 ± 0
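The three surface-distance metrics used here (ASD, RMSSD, HD) can all be derived from the two directed nearest-neighbour distance sets between the automatic and manual surfaces. A numpy sketch over surface point clouds (a simplification for illustration; the paper computes these on full segmentation surfaces):

```python
import numpy as np

def surface_distance_metrics(points_a, points_b):
    """ASD, RMSSD and HD between two surfaces given as (N, 3) point arrays."""
    # Pairwise Euclidean distances between the two point sets.
    d = np.linalg.norm(points_a[:, None, :] - points_b[None, :, :], axis=-1)
    a_to_b = d.min(axis=1)   # each point of A to its nearest point of B
    b_to_a = d.min(axis=0)   # and vice versa (the metrics are symmetrised)
    all_d = np.concatenate([a_to_b, b_to_a])
    asd = all_d.mean()                     # average surface distance
    rmssd = np.sqrt((all_d ** 2).mean())   # root-mean-square surface distance
    hd = all_d.max()                       # Hausdorff distance
    return asd, rmssd, hd
```

The HD is dominated by the single worst outlier point, which is why it is usually reported alongside the more forgiving ASD and RMSSD.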

Journal article

Damopoulos D, Glocker B, Zheng G, 2017, Automatic Localization of the Lumbar Vertebral Landmarks in CT Images with Context Features, Computational Methods and Clinical Applications in Musculoskeletal Imaging (MSKI)

A recent research direction for the localization of anatomical landmarks with learning-based methods is to explore ways to enrich the trained models with context information. Lately, the addition of context features in regression-based approaches has been tried in the literature. In this work, a method is presented for the addition of context features in a regression setting where the locations of many vertebral landmarks are regressed all at once. As this method relies on the knowledge of the centers of the vertebral bodies (VBs), an automatic, endplate-based approach for the localization of the VB centers is also presented. The proposed methods are evaluated on a dataset of 28 lumbar-focused CT images. The VB localization method detects all of the lumbar VBs of the testing set with a mean localization error of 3.2 mm. The multi-landmark localization method is tested on the task of localizing the tips of all the inferior articular processes of the lumbar vertebrae, in addition to their VB centers. The proposed method detects these landmarks with a mean localization error of 3.0 mm.
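The localization errors quoted (3.2 mm and 3.0 mm) are mean Euclidean distances between detected and ground-truth landmark positions, which reduces to a one-line numpy helper (the function name here is hypothetical):

```python
import numpy as np

def mean_localization_error(pred_mm, truth_mm):
    """Mean Euclidean distance (in mm) between predicted and true landmarks.

    pred_mm, truth_mm: (N, 3) arrays of corresponding landmark coordinates.
    """
    return float(np.linalg.norm(pred_mm - truth_mm, axis=1).mean())
```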

Conference paper

Kanavati F, Misawa K, Fujiwara M, Mori K, Rueckert D, Glocker Bet al., 2017, Joint Supervoxel Classification Forest for Weakly-Supervised Organ Segmentation, International Workshop on Machine Learning in Medical Imaging (MLMI)

Conference paper

Parisot S, Glocker B, Ktena SI, Arslan S, Schirmer MD, Rueckert Det al., 2017, A flexible graphical model for multi-modal parcellation of the cortex., NeuroImage, Vol: 162, Pages: 226-248, ISSN: 1053-8119

Advances in neuroimaging have provided a tremendous amount of in-vivo information on the brain's organisation. Its anatomy and cortical organisation can be investigated from the point of view of several imaging modalities, many of which have been studied for mapping functionally specialised cortical areas. There is strong evidence that a single modality is not sufficient to fully identify the brain's cortical organisation. Combining multiple modalities in the same parcellation task has the potential to provide more accurate and robust subdivisions of the cortex. Nonetheless, existing brain parcellation methods are typically developed and tested on single modalities using a specific type of information. In this paper, we propose Graph-based Multi-modal Parcellation (GraMPa), an iterative framework designed to handle the large variety of available input modalities to tackle the multi-modal parcellation task. At each iteration, we compute a set of parcellations from different modalities and fuse them based on their local reliabilities. The fused parcellation is used to initialise the next iteration, forcing the parcellations to converge towards a set of mutually informed modality specific parcellations, where correspondences are established. We explore two different multi-modal configurations for group-wise parcellation using resting-state fMRI, diffusion MRI tractography, myelin maps and task fMRI. Quantitative and qualitative results on the Human Connectome Project database show that integrating multi-modal information yields a stronger agreement with well established atlases and more robust connectivity networks that provide a better representation of the population.
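The reliability-weighted fusion step at the heart of each GraMPa iteration can be pictured as a weighted vote over candidate labels at each vertex. A schematic numpy sketch of that voting intuition only (the actual method solves a graphical model; all names below are hypothetical):

```python
import numpy as np

def fuse_parcellations(parcellations, reliabilities, n_labels):
    """Fuse per-modality parcellations by reliability-weighted voting.

    parcellations: (M, V) int array, one label per vertex per modality.
    reliabilities: (M, V) float array, local confidence of each modality.
    Returns a (V,) array with the winning label at each vertex.
    """
    m, v = parcellations.shape
    votes = np.zeros((n_labels, v))
    for i in range(m):
        # Add this modality's local reliability to the label it proposes.
        votes[parcellations[i], np.arange(v)] += reliabilities[i]
    return votes.argmax(axis=0)
```

The fused result would then seed the next iteration, nudging the modality-specific parcellations towards mutual agreement.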

Journal article

Parisot S, Ktena SI, Ferrante E, Lee M, Moreno RG, Glocker B, Rueckert Det al., 2017, Spectral graph convolutions for population-based disease prediction, Medical Image Computing and Computer Assisted Intervention - MICCAI 2017, Publisher: Springer, Pages: 177-185, ISSN: 0302-9743

Exploiting the wealth of imaging and non-imaging information for disease prediction tasks requires models capable of representing, at the same time, individual features as well as data associations between subjects from potentially large populations. Graphs provide a natural framework for such tasks, yet previous graph-based approaches focus on pairwise similarities without modelling the subjects’ individual characteristics and features. On the other hand, relying solely on subject-specific imaging feature vectors fails to model the interaction and similarity between subjects, which can reduce performance. In this paper, we introduce the novel concept of Graph Convolutional Networks (GCN) for brain analysis in populations, combining imaging and non-imaging data. We represent populations as a sparse graph where its vertices are associated with image-based feature vectors and the edges encode phenotypic information. This structure was used to train a GCN model on partially labelled graphs, aiming to infer the classes of unlabelled nodes from the node features and pairwise associations between subjects. We demonstrate the potential of the method on the challenging ADNI and ABIDE databases, as a proof of concept of the benefit from integrating contextual information in classification tasks. This has a clear impact on the quality of the predictions, leading to 69.5% accuracy for ABIDE (outperforming the current state of the art of 66.8%) and 77% for ADNI for prediction of MCI conversion, significantly outperforming standard linear classifiers where only individual features are considered.
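The propagation rule behind such spectral GCNs, in its common first-order (renormalised) form, can be sketched in a few lines of numpy. This is a toy single layer for intuition, not the authors' implementation:

```python
import numpy as np

def gcn_layer(adjacency, features, weights):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)                   # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(axis=1))   # D^-1/2 as a vector
    norm = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return np.maximum(norm @ features @ weights, 0.0)  # ReLU activation
```

In the population-graph setting, `features` holds the image-derived feature vector of each subject and `adjacency` encodes the phenotypic similarities between subjects; labelled and unlabelled subjects share the same graph, which is what lets the labels propagate.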

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
