Imperial College London

Dr Martin Rajchl

Faculty of Engineering, Department of Computing

Honorary Research Fellow
 
 
 

Contact

 

m.rajchl

 
 

Location

 

Huxley Building, South Kensington Campus



Publications


93 results found

Zabihollahy F, Rajchl M, White JA, Ukwatta E et al., 2020, Fully automated segmentation of left ventricular scar from 3D late gadolinium enhancement magnetic resonance imaging using a cascaded multi-planar U-Net (CMPU-Net), Medical Physics, Vol: 47, Pages: 1645-1655, ISSN: 0094-2405

Journal article

Meng Q, Zimmer V, Hou B, Rajchl M, Toussaint N, Oktay O, Schlemper J, Gomez A, Housden J, Matthew J, Rueckert D, Schnabel JA, Kainz B et al., 2019, Weakly supervised estimation of shadow confidence maps in fetal ultrasound imaging, IEEE Transactions on Medical Imaging, Vol: 38, Pages: 2755-2767, ISSN: 0278-0062

Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time-consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions. Our method is able to generate a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. Additionally, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This network is able to predict shadow confidence maps directly from input images during inference. We use evaluation metrics such as the Dice coefficient and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation, and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.

Journal article
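The transfer-function step described in the abstract above, extending a binary shadow segmentation into a dense confidence map, can be illustrated with a small sketch. This is a toy stand-in rather than the paper's actual function: the exponential decay over distance to the nearest shadow pixel, and its `decay` rate, are assumptions for illustration.

```python
import math

def confidence_map(mask, decay=0.5):
    """Toy transfer function: turn a binary shadow mask into a dense
    confidence map, with confidence falling off exponentially with the
    Euclidean distance to the nearest shadow pixel."""
    h, w = len(mask), len(mask[0])
    shadow = [(i, j) for i in range(h) for j in range(w) if mask[i][j]]
    conf = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            d = min(math.hypot(i - si, j - sj) for (si, sj) in shadow)
            conf[i][j] = math.exp(-decay * d)
    return conf

mask = [[0, 0, 0],
        [0, 1, 0],
        [0, 0, 0]]
cmap = confidence_map(mask)
# The shadow pixel itself has confidence 1.0; values decay with distance.
```

A real pipeline would compute the distance transform efficiently and calibrate the mapping, but the shape of the output is the same: a dense map peaking at the detected shadow.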

Wang C, Rajchl M, Chan ADC, Ukwatta E et al., 2019, An ensemble of U-Net architecture variants for left atrial segmentation, Conference on Medical Imaging - Computer-Aided Diagnosis, Publisher: SPIE, ISSN: 0277-786X

Conference paper

Biffi C, Oktay O, Tarroni G, Bai W, De Marvao A, Doumou G, Rajchl M, Bedair R, Prasad S, Cook S, O'Regan D, Rueckert D et al., 2018, Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling, International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer, Pages: 464-471, ISSN: 0302-9743

Alterations in the geometry and function of the heart define well-established causes of cardiovascular disease. However, current approaches to the diagnosis of cardiovascular diseases often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity in quantifying complex structural and functional phenotypes. Deep learning approaches have recently achieved success for tasks such as classification or segmentation of medical images, but lack interpretability in the feature extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for automatic classification of images from patients with cardiac diseases associated with structural remodeling. The model leverages interpretable task-specific anatomic patterns learned from 3D segmentations. It further allows the learned pathology-specific remodeling patterns to be visualised and quantified in the original input space of the images. This approach yields high accuracy in the categorization of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-centre dataset (100%) as well as on the ACDC MICCAI 2017 dataset (90%). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians to improve diagnostic accuracy and enhance patient risk-stratification.

Conference paper

Meng Q, Baumgartner C, Sinclair M, Housden J, Rajchl M, Gomez A, Hou B, Toussaint N, Zimmer V, Tan J, Matthew J, Rueckert D, Schnabel J, Kainz B et al., 2018, Automatic shadow detection in 2D ultrasound images, International Workshop on Preterm, Perinatal and Paediatric Image Analysis, Pages: 66-75, ISSN: 0302-9743

Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method (1) initializes potential shadow areas based on a classification task, (2) extends the potential shadow areas using a GAN model, and (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.

Conference paper

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2018, Automated cardiovascular magnetic resonance image analysis with fully convolutional networks, Journal of Cardiovascular Magnetic Resonance, Vol: 20, Pages: 1-12, ISSN: 1097-6647

Background: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which are time-consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Methods: Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). Results: By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement was 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 g for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-ax

Journal article
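The Dice metric quoted in the abstract above (e.g. 0.94 for the LV cavity) measures overlap between an automated and a manual segmentation. A minimal sketch, with flat 0/1 lists standing in for the pixelwise label maps:

```python
def dice(a, b):
    """Dice similarity coefficient between two binary masks given as
    flat sequences of 0/1 labels: 2 * |A intersect B| / (|A| + |B|)."""
    inter = sum(1 for x, y in zip(a, b) if x == 1 and y == 1)
    total = sum(a) + sum(b)
    return 2.0 * inter / total if total else 1.0

auto_seg   = [1, 1, 1, 0, 0, 0]  # hypothetical automated labels
manual_seg = [1, 1, 0, 0, 0, 1]  # hypothetical manual labels
print(dice(auto_seg, manual_seg))  # 2*2 / (3+3) ≈ 0.667
```

The convention for two empty masks varies; returning 1.0 (perfect agreement on "nothing present") is one common choice.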

Rajchl M, Pawlowski N, Rueckert D, Matthews PM, Glocker B et al., 2018, NeuroNet: fast and robust reproduction of multiple brain image segmentation pipelines, International Conference on Medical Imaging with Deep Learning (MIDL), Publisher: MIDL

NeuroNet is a deep convolutional neural network mimicking multiple popular and state-of-the-art brain segmentation tools including FSL, SPM, and MALPEM. The network is trained on 5,000 T1-weighted brain MRI scans from the UK Biobank Imaging Study that have been automatically segmented into brain tissue and cortical and sub-cortical structures using the standard neuroimaging pipelines. Training a single model from these complementary and partially overlapping label maps yields a new powerful "all-in-one", multi-output segmentation tool. The processing time for a single subject is reduced by an order of magnitude compared to running each individual software package. We demonstrate very good reproducibility of the original outputs while increasing robustness to variations in the input data. We believe NeuroNet could be an important tool in large-scale population imaging studies and serve as a new standard in neuroscience by reducing the risk of introducing bias when choosing a specific software package.

Conference paper

Koch LM, Rajchl M, Bai W, Baumgartner CF, Tong T, Passerat-Palmbach J, Aljabar P, Rueckert D et al., 2018, Multi-atlas segmentation using partially annotated data: methods and annotation strategies, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 40, Pages: 1683-1696, ISSN: 0162-8828

Multi-atlas segmentation is a widely used tool in medical image analysis, providing robust and accurate results by learning from annotated atlas datasets. However, the availability of fully annotated atlas images for training is limited due to the time required for the labelling task. Segmentation methods requiring only a proportion of each atlas image to be labelled could therefore reduce the workload on expert raters tasked with annotating atlas images. To address this issue, we first re-examine the labelling problem common in many existing approaches and formulate its solution in terms of a Markov Random Field energy minimisation problem on a graph connecting atlases and the target image. This provides a unifying framework for multi-atlas segmentation. We then show how modifications in the graph configuration of the proposed framework enable the use of partially annotated atlas images and investigate different partial annotation strategies. The proposed method was evaluated on two Magnetic Resonance Imaging (MRI) datasets for hippocampal and cardiac segmentation. Experiments were performed aimed at (1) recreating existing segmentation techniques with the proposed framework and (2) demonstrating the potential of employing sparsely annotated atlas data for multi-atlas segmentation.

Journal article

Chen X, Pawlowski N, Rajchl M, Glocker B, Konukoglu E et al., 2018, Deep generative models in the real world: an open challenge from medical imaging

Recent advances in deep learning led to novel generative modeling techniques that achieve unprecedented quality in generated samples and performance in learning complex distributions in imaging data. These new models in medical image computing have important applications that form clinically relevant and very challenging unsupervised learning problems. In this paper, we explore the feasibility of using state-of-the-art auto-encoder-based deep generative models, such as variational and adversarial auto-encoders, for one such task: abnormality detection in medical imaging. We utilize typical, publicly available datasets with brain scans from healthy subjects and patients with stroke lesions and brain tumors. We use the data from healthy subjects to train different auto-encoder based models to learn the distribution of healthy images and detect pathologies as outliers. Models that can better learn the data distribution should be able to detect outliers more accurately. We evaluate the detection performance of deep generative models and compare them with non-deep learning based approaches to provide a benchmark of the current state of research. We conclude that abnormality detection is a challenging task for deep generative models and that considerable room for improvement exists. In order to facilitate further research, we aim to make carefully pre-processed imaging data available to the research community.

Working paper

Valindria V, Pawlowski N, Rajchl M, Lavdas I, Aboagye EO, Rockall A, Rueckert D, Glocker B et al., 2018, Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI, IEEE Winter Conference on Applications of Computer Vision, Publisher: IEEE

Convolutional neural networks have been widely used in medical image segmentation. The amount of training data strongly determines the overall performance. Most approaches are applied for a single imaging modality, e.g., brain MRI. In practice, it is often difficult to acquire sufficient training data of a certain imaging modality. The same anatomical structures, however, may be visible in different modalities such as major organs on abdominal CT and MRI. In this work, we investigate the effectiveness of learning from multiple modalities to improve the segmentation accuracy on each individual modality. We study the feasibility of using a dual-stream encoder-decoder architecture to learn modality-independent, and thus, generalisable and robust features. All of our MRI and CT data are unpaired, which means they are obtained from different subjects and not registered to each other. Experiments show that multi-modal learning can improve overall accuracy over modality-specific training. Results demonstrate that information across modalities can in particular improve performance on varying structures such as the spleen.

Conference paper

Kamnitsas K, Bai W, Ferrante E, McDonagh SG, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D, Glocker B et al., 2018, Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation, MICCAI BrainLes Workshop

Conference paper

Ktena SI, Parisot S, Ferrante E, Rajchl M, Lee M, Glocker B, Rueckert D et al., 2017, Metric learning with spectral graph convolutions on brain connectivity networks, NeuroImage, Vol: 169, Pages: 431-442, ISSN: 1053-8119

Graph representations are often used to model structured data at an individual or population level and have numerous applications in pattern recognition problems. In the field of neuroscience, where such representations are commonly used to model structural or functional connectivity between a set of brain regions, graphs have proven to be of great importance. This is mainly due to the capability of revealing patterns related to brain development and disease, which were previously unknown. Evaluating similarity between these brain connectivity networks in a manner that accounts for the graph structure and is tailored for a particular application is, however, non-trivial. Most existing methods fail to accommodate the graph structure, discarding information that could be beneficial for further classification or regression analyses based on these similarities. We propose to learn a graph similarity metric using a siamese graph convolutional neural network (s-GCN) in a supervised setting. The proposed framework takes into consideration the graph structure for the evaluation of similarity between a pair of graphs, by employing spectral graph convolutions that allow the generalisation of traditional convolutions to irregular graphs and operates in the graph spectral domain. We apply the proposed model on two datasets: the challenging ABIDE database, which comprises functional MRI data of 403 patients with autism spectrum disorder (ASD) and 468 healthy controls aggregated from multiple acquisition sites, and a set of 2500 subjects from UK Biobank. We demonstrate the performance of the method for the tasks of classification between matching and non-matching graphs, as well as individual subject classification and manifold learning, showing that it leads to significantly improved results compared to traditional methods.

Journal article

Pawlowski N, Ktena SI, Lee MCH, Kainz B, Rueckert D, Glocker B, Rajchl M et al., 2017, DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images

We present DLTK, a toolkit providing baseline implementations for efficient experimentation with deep learning methods on biomedical images. It builds on top of TensorFlow, and its high modularity and easy-to-use examples allow for a low-threshold access to state-of-the-art implementations for typical medical imaging problems. A comparison of DLTK's reference implementations of popular network architectures for image segmentation demonstrates new top performance on the publicly available challenge data "Multi-Atlas Labeling Beyond the Cranial Vault". The average test Dice similarity coefficient of 81.5 exceeds the previously best performing CNN (75.7) and the accuracy of the challenge winning method (79.0).

Working paper

Pawlowski N, Brock A, Lee MCH, Rajchl M, Glocker B et al., 2017, Implicit Weight Uncertainty in Neural Networks

Modern neural networks tend to be overconfident on unseen, noisy or incorrectly labelled data and do not produce meaningful uncertainty measures. Bayesian deep learning aims to address this shortcoming with variational approximations (such as Bayes by Backprop or Multiplicative Normalising Flows). However, current approaches have limitations regarding flexibility and scalability. We introduce Bayes by Hypernet (BbH), a new method of variational approximation that interprets hypernetworks as implicit distributions. It naturally uses neural networks to model arbitrarily complex distributions and scales to modern deep learning architectures. In our experiments, we demonstrate that our method achieves competitive accuracies and predictive uncertainties on MNIST and a CIFAR5 task, while being the most robust against adversarial attacks.

Working paper

Bai W, Oktay O, Sinclair M, Suzuki H, Rajchl M, Tarroni G, Glocker B, King A, Matthews P, Rueckert D et al., 2017, Semi-supervised learning for network-based cardiac MR image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 253-260, ISSN: 0302-9743

Training a fully convolutional network for pixel-wise (or voxel-wise) image segmentation normally requires a large number of training images with corresponding ground truth label maps. However, it is a challenge to obtain such a large training set in the medical imaging domain, where expert annotations are time-consuming and difficult to obtain. In this paper, we propose a semi-supervised learning approach, in which a segmentation network is trained from both labelled and unlabelled data. The network parameters and the segmentations for the unlabelled data are alternately updated. We evaluate the method for short-axis cardiac MR image segmentation and it has demonstrated a high performance, outperforming a baseline supervised method. The mean Dice overlap metric is 0.92 for the left ventricular cavity, 0.85 for the myocardium and 0.89 for the right ventricular cavity. It also outperforms a state-of-the-art multi-atlas segmentation method by a large margin and the speed is substantially faster.

Conference paper

Ktena SI, Parisot S, Ferrante E, Rajchl M, Lee M, Glocker B, Rueckert D et al., 2017, Distance metric learning using graph convolutional networks: application to functional brain networks, Medical Image Computing and Computer Assisted Intervention - MICCAI 2017, Publisher: Springer, Pages: 469-477, ISSN: 0302-9743

Evaluating similarity between graphs is of major importance in several computer vision and pattern recognition problems, where graph representations are often used to model objects or interactions between elements. The choice of a distance or similarity metric is, however, not trivial and can be highly dependent on the application at hand. In this work, we propose a novel metric learning method to evaluate distance between graphs that leverages the power of convolutional neural networks, while exploiting concepts from spectral graph theory to allow these operations on irregular graphs. We demonstrate the potential of our method in the field of connectomics, where neuronal pathways or functional connections between brain regions are commonly modelled as graphs. In this problem, the definition of an appropriate graph similarity function is critical to unveil patterns of disruptions associated with certain brain disorders. Experimental results on the ABIDE dataset show that our method can learn a graph similarity metric tailored for a clinical application, improving the performance of a simple k-nn classifier by 11.9% compared to a traditional distance metric.

Conference paper
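The role the learned metric plays in the k-nn experiment above can be sketched generically: a nearest-neighbour classifier only needs a pluggable distance function, which in the paper is supplied by the trained graph network. In this illustrative toy, plain Euclidean distance on 2-D points stands in for the learned graph metric, and all data and labels are made up:

```python
from collections import Counter

def knn_predict(query, data, labels, dist, k=3):
    """Classify `query` by majority vote among its k nearest neighbours
    under a pluggable distance function `dist` -- the slot filled by the
    learned graph similarity metric in the paper."""
    ranked = sorted(range(len(data)), key=lambda i: dist(query, data[i]))
    votes = Counter(labels[i] for i in ranked[:k])
    return votes.most_common(1)[0][0]

# Toy 2-D points; Euclidean distance stands in for the learned metric.
euclid = lambda p, q: sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5
data = [(0, 0), (0, 1), (5, 5), (6, 5)]
labels = ["control", "control", "asd", "asd"]
print(knn_predict((0.5, 0.5), data, labels, euclid))  # "control"
```

Swapping `euclid` for a better-suited metric changes the neighbourhoods, and hence the predictions, without touching the classifier itself; that separation is what makes metric learning useful here.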

Alansary A, Rajchl M, McDonagh S, Murgasova M, Damodaram M, Lloyd DFA, Davidson A, Rutherford M, Hajnal JV, Rueckert D, Kainz B et al., 2017, PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI, IEEE Transactions on Medical Imaging, Vol: 36, Pages: 2031-2044, ISSN: 1558-254X

In this paper we present a novel method for the correction of motion artifacts that are present in fetal Magnetic Resonance Imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible anatomical enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a specific amount of redundant information that is exploited with parallelized patch-wise optimization, super-resolution, and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units (GPU), enabling its use in clinical practice. We evaluate PVR's computational overhead compared to standard methods and observe improved reconstruction accuracy in the presence of affine motion artifacts compared to conventional SVR in synthetic experiments. Furthermore, we have evaluated our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio (PSNR), structural similarity index (SSIM), and cross correlation (CC) with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. We further evaluate the distance error for selected anatomical landmarks in the fetal head, as well as calculating the mean and maximum displacements resulting from automatic non-rigid registration to a motion-free ground truth image. These experiments demonstrate a successful application of PVR motion compensation to the whole fetal body, uterus and placenta.

Journal article

Morant K, Mikami Y, Nevis I, McCarty D, Stirrat J, Scholl D, Rajchl M, Giannoccaro P, Kolman L, Heydari B, Lydell C, Howarth A, Grant A, White JA et al., 2017, Contribution of mitral valve leaflet length and septal wall thickness to outflow tract obstruction in patients with hypertrophic cardiomyopathy, International Journal of Cardiovascular Imaging, Vol: 33, Pages: 1201-1211, ISSN: 1569-5794

Journal article

Baxter JSH, Rajchl M, McLeod AJ, Yuan J, Peters TM et al., 2017, Directed Acyclic Graph Continuous Max-Flow Image Segmentation for Unconstrained Label Orderings, International Journal of Computer Vision, Vol: 123, Pages: 415-434, ISSN: 0920-5691

Journal article

Robinson E, Glocker B, Rajchl M, Rueckert D et al., 2016, Discrete Optimisation for Group-wise Cortical Surface Atlasing, International Workshop on Biomedical Image Registration, Publisher: IEEE, ISSN: 2160-7516

This paper presents a novel method for cortical surface atlasing. Group-wise registration is performed through a discrete optimisation framework that seeks to simultaneously improve pairwise correspondences between surface feature sets, whilst minimising a global cost relating to the rank of the feature matrix. It is assumed that when fully aligned, features will be highly linearly correlated, and thus have low rank. The framework is regularised through use of multi-resolution control point grids and higher-order smoothness terms, calculated by considering deformation strain for displacements of triplets of points. Accordingly, the discrete framework is solved through high-order clique reduction. The framework is tested on cortical folding-based alignment, using data from the Human Connectome Project. Preliminary results indicate that group-wise alignment improves folding correspondences, relative to registration between all pair-wise combinations, and registration to a global average template.

Conference paper

Rajchl M, Lee MCH, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, Damodaram M, Rutherford MA, Hajnal JV, Kainz B, Rueckert D et al., 2016, DeepCut: object segmentation from bounding box annotations using convolutional neural networks, IEEE Transactions on Medical Imaging, Vol: 36, Pages: 674-683, ISSN: 0278-0062

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with weak annotations, in our case bounding boxes. It extends the approach of the well-known GrabCut [1] method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naïve approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.

Journal article
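The iterative target-update idea at the heart of DeepCut can be shown in miniature. In this toy analogue (entirely illustrative, with a two-mean intensity model standing in for the CNN classifier and no CRF regularisation), pixels outside the bounding box are fixed background, and labels inside the box are re-estimated each round from the current model:

```python
def deepcut_toy(intensities, in_box, iters=5):
    """Toy analogue of DeepCut's iterative target update on a 1-D 'image':
    pixels outside the bounding box stay background; labels inside the box
    are re-estimated each round by a simple two-mean intensity model that
    stands in for the CNN classifier."""
    # Initialise training targets: everything inside the box is foreground.
    labels = [1 if b else 0 for b in in_box]
    for _ in range(iters):
        fg = [x for x, l in zip(intensities, labels) if l == 1]
        bg = [x for x, l in zip(intensities, labels) if l == 0]
        if not fg or not bg:  # degenerate split: stop updating
            break
        mu_fg, mu_bg = sum(fg) / len(fg), sum(bg) / len(bg)
        # Update the training targets: re-label pixels inside the box only.
        labels = [
            (1 if abs(x - mu_fg) < abs(x - mu_bg) else 0) if b else 0
            for x, b in zip(intensities, in_box)
        ]
    return labels

intensities = [0.1, 0.2, 0.9, 0.8, 0.85, 0.15, 0.1]
in_box      = [False, True, True, True, True, True, False]
print(deepcut_toy(intensities, in_box))  # bright pixels survive as foreground
```

Dim pixels inside the box are progressively pushed to background as the foreground model tightens, mirroring how the bounding-box targets are refined over training rounds.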

Kolman L, Welsh DG, Vigmond E, Joncas SX, Stirrat J, Scholl D, Rajchl M, Tweedie E, Mikami Y, Lydell C, Howarth A, Yee R, White JA et al., 2016, Abnormal Lymphatic Channels Detected by T2-Weighted MR Imaging as a Substrate for Ventricular Arrhythmia in HCM, JACC: Cardiovascular Imaging, Vol: 9, Pages: 1354-1356, ISSN: 1936-878X

Journal article

Alansary A, Kamnitsas K, Davidson A, Khlebnikov R, Rajchl M, Malamateniou C, Rutherford M, Hajnal JV, Glocker B, Rueckert D, Kainz B et al., 2016, Fast Fully Automatic Segmentation of the Human Placenta from Motion Corrupted MRI, 19th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2016), Publisher: Springer Verlag, ISSN: 0302-9743

Conference paper

Oktay O, Bai W, Guerrero R, Rajchl M, de Marvao A, O'Regan D, Cook S, Heinrich M, Glocker B, Rueckert D et al., 2016, Stratified decision forests for accurate anatomical landmark localization, IEEE Transactions on Medical Imaging, Vol: 36, Pages: 332-342, ISSN: 0278-0062

Accurate localization of anatomical landmarks is an important step in medical imaging, as it provides useful prior information for subsequent image analysis and acquisition methods. It is particularly useful for initialization of automatic image analysis tools (e.g. segmentation and registration) and detection of scan planes for automated image acquisition. Landmark localization has been commonly performed using learning based approaches, such as classifier and/or regressor models. However, trained models may not generalize well in heterogeneous datasets when the images contain large differences due to size, pose and shape variations of organs. To learn more data-adaptive and patient-specific models, we propose a novel stratification based training model, and demonstrate its use in a decision forest. The proposed approach does not require any additional training information compared to the standard model training procedure and can be easily integrated into any decision tree framework. The proposed method is evaluated on 1080 3D high-resolution and 90 multi-stack 2D cardiac cine MR images. The experiments show that the proposed method achieves state-of-the-art landmark localization accuracy and outperforms standard regression and classification based approaches. Additionally, the proposed method is used in a multi-atlas segmentation to create a fully automatic segmentation pipeline, and the results show that it achieves state-of-the-art segmentation accuracy.

Journal article

Baxter JS, Rajchl M, Peters TM, Chen EC et al., 2016, Optimization-based interactive segmentation interface for multiregion problems, Journal of Medical Imaging, Vol: 3, ISSN: 2329-4302

Interactive segmentation is becoming of increasing interest to the medical imaging community in that it combines the positive aspects of both manual and automated segmentation. However, general-purpose tools have been lacking in terms of segmenting multiple regions simultaneously with a high degree of coupling between groups of labels. Hierarchical max-flow segmentation has taken advantage of this coupling for individual applications, but until recently, these algorithms were constrained to a particular hierarchy and could not be considered general-purpose. In a generalized form, the hierarchy for any given segmentation problem is specified in run-time, allowing different hierarchies to be quickly explored. We present an interactive segmentation interface, which uses generalized hierarchical max-flow for optimization-based multiregion segmentation guided by user-defined seeds. Applications in cardiac and neonatal brain segmentation are given as example applications of its generality.

Journal article

Rajchl M, Lee M, Schrans F, Davidson A, Passerat-Palmbach J, Tarroni G, Alansary A, Oktay O, Kainz B, Rueckert D et al., 2016, Learning under Distributed Weak Supervision

The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentations less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research.

Working paper

Rajchl M, Lee M, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, Kainz B, Rueckert D et al., 2016, DeepCut: object segmentation from bounding box annotations using convolutional neural networks, Publisher: arXiv

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with bounding box annotations. It extends the approach of the well-known GrabCut method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.

Working paper

Bai W, Peressutti D, Parisot S, Oktay O, Rajchl M, O'Regan D, Cook S, King A, Rueckert D et al., 2016, Beyond the AHA 17-segment model: Motion-driven parcellation of the left ventricle, 6th International Workshop, STACOM 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, October 9, 2015, Publisher: Springer, Pages: 13-20, ISSN: 0302-9743

A major challenge for cardiac motion analysis is the high dimensionality of the motion data. Conventionally, the AHA model is used for dimensionality reduction, dividing the left ventricle into 17 segments using criteria based on anatomical structures. In this paper, a novel method is proposed to divide the left ventricle into parcels that are homogeneous in terms of motion trajectories. We demonstrate that the motion-driven parcellation has good reproducibility and use it for data reduction and motion description on a dataset of 1093 subjects. The resulting motion descriptor achieves high performance on two exemplar applications, namely gender and age prediction. The proposed method has the potential to be applied to groupwise motion analysis.
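As an illustrative stand-in for motion-driven parcellation (the paper's actual clustering procedure is not specified in this abstract), grouping myocardial points by the similarity of their motion trajectories can be sketched with a plain k-means in trajectory space:

```python
import numpy as np

def parcellate_trajectories(traj, n_parcels=2, n_iter=20, seed=0):
    """Group points into parcels with homogeneous motion via k-means
    on their trajectories. traj has shape (n_points, n_frames); each
    row is one point's displacement over time. A simplified stand-in
    for the paper's motion-driven parcellation."""
    rng = np.random.default_rng(seed)
    centres = traj[rng.choice(len(traj), n_parcels, replace=False)]
    for _ in range(n_iter):
        # Assign each point to the nearest centre trajectory.
        d = np.linalg.norm(traj[:, None, :] - centres[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # Re-estimate each centre as the mean trajectory of its parcel.
        for k in range(n_parcels):
            if np.any(labels == k):
                centres[k] = traj[labels == k].mean(axis=0)
    return labels

# Two points that barely move and two that move strongly end up in
# separate parcels.
traj = np.array([[0.0, 0.0, 0.0],
                 [0.1, 0.0, 0.0],
                 [1.0, 1.0, 1.0],
                 [1.0, 0.9, 1.0]])
labels = parcellate_trajectories(traj)
```

Averaging motion within each parcel then yields the low-dimensional motion descriptor used for downstream prediction tasks.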

Conference paper

Rajchl M, Baxter JSH, Qiu W, Khan AR, Fenster A, Peters TM, Rueckert D, Yuan J et al., 2016, Fast Deformable Image Registration with Non-Smooth Dual Optimization, 29th IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Publisher: IEEE, Pages: 465-472, ISSN: 2160-7508

Conference paper

Mendrik AM, Vincken KL, Kuijf HJ, Breeuwer M, Bouvy WH, de Bresser J, Alansary A, de Bruijne M, Carass A, El-Baz A, Jog A, Katyal R, Khan AR, van der Lijn F, Mahmood Q, Mukherjee R, van Opbroek A, Paneri S, Pereira S, Persson M, Rajchl M, Sarikaya D, Smedby O, Silva CA, Vrooman HA, Vyas S, Wang C, Zhao L, Biessels GJ, Viergever MA et al., 2015, MRBrainS Challenge: Online Evaluation Framework for Brain Image Segmentation in 3T MRI Scans, Computational Intelligence and Neuroscience, Vol: 2015, ISSN: 1687-5273

Many methods have been proposed for tissue segmentation in brain MRI scans, and this multitude complicates the choice of one method over the others. We have therefore established the MRBrainS online evaluation framework for evaluating (semi)automatic algorithms that segment gray matter (GM), white matter (WM), and cerebrospinal fluid (CSF) in 3T brain MRI scans of elderly subjects (65–80 y). Participants apply their algorithms to the provided data, after which their results are evaluated and ranked. Full manual segmentations of GM, WM, and CSF are available for all scans and serve as the reference standard. Five datasets are provided for training and fifteen for testing. The evaluated methods are ranked by their overall performance in segmenting GM, WM, and CSF, assessed with three evaluation metrics (the Dice coefficient, the 95th-percentile Hausdorff distance (H95), and the absolute volume difference (AVD)); the results are published on the MRBrainS13 website. We present the results of eleven segmentation algorithms that participated in the MRBrainS13 challenge workshop at MICCAI, where the framework was launched, and of three commonly used freeware packages: FreeSurfer, FSL, and SPM. The MRBrainS evaluation framework provides an objective and direct comparison of all evaluated algorithms and can aid in selecting the best-performing method for the segmentation goal at hand.
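Of the three ranking metrics, the Dice coefficient is the most widely used and simple to state: for binary masks A and B it is 2|A∩B| / (|A| + |B|). A minimal implementation:

```python
import numpy as np

def dice(seg, ref):
    """Dice overlap between two binary masks: 2|A∩B| / (|A| + |B|).
    Returns 1.0 when both masks are empty."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    denom = seg.sum() + ref.sum()
    return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

# Two 2x2 foreground squares overlapping in a single column.
a = np.zeros((4, 4), dtype=int); a[1:3, 1:3] = 1
b = np.zeros((4, 4), dtype=int); b[1:3, 2:4] = 1
print(dice(a, a))  # 1.0
print(dice(a, b))  # 0.5  (2 shared pixels, 4 + 4 total)
```

H95 and AVD additionally penalise boundary outliers and volume bias, respectively, which pure overlap does not capture.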

Journal article

