Publications from our Researchers
Several of our current PhD candidates and fellow researchers at the Data Science Institute have published, or are in the process of publishing, papers presenting their research.
Conference paper
Duan J, Schlemper J, Bai W, et al., 2018,
Combining deep learning and shape priors for bi-ventricular segmentation of volumetric cardiac magnetic resonance images, MICCAI ShapeMI Workshop, Publisher: Springer Verlag, Pages: 258-267, ISSN: 0302-9743
In this paper, we combine a network-based method with image registration to develop a shape-based bi-ventricular segmentation tool for short-axis cardiac magnetic resonance (CMR) volumetric images. The method first employs a fully convolutional network (FCN) to learn the segmentation task from manually labelled ground truth CMR volumes. However, due to the presence of image artefacts in the training dataset, the resulting FCN segmentation results are often imperfect. As such, we propose a second step to refine the FCN segmentation. This step involves performing a non-rigid registration with multiple high-resolution bi-ventricular atlases, allowing the explicit shape priors to be inferred. We validate the proposed approach on 1831 healthy subjects and 200 subjects with pulmonary hypertension. Numerical experiments on the two datasets demonstrate that our approach is capable of producing accurate, high-resolution and anatomically smooth bi-ventricular models, despite the artefacts in the input CMR volumes.
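The two-step idea above (a network prediction refined by an atlas-derived shape prior) can be illustrated with a toy fusion of probability maps. This is a hedged sketch of the principle only, not the authors' registration pipeline; the function name, weights and arrays are illustrative assumptions:

```python
import numpy as np

def refine_with_shape_prior(fcn_prob, atlas_prior, weight=0.5, threshold=0.5):
    """Fuse an FCN probability map with a registered atlas shape prior.

    fcn_prob:    per-voxel foreground probabilities from the network
    atlas_prior: per-voxel prior probabilities from a registered atlas
    weight:      relative trust in the atlas prior (0 = network only)
    """
    fused = (1.0 - weight) * fcn_prob + weight * atlas_prior
    return (fused >= threshold).astype(np.uint8)

# A spurious network response (e.g. caused by an artefact) is suppressed
# when the registered atlas assigns it near-zero prior probability.
fcn = np.array([0.9, 0.9, 0.8, 0.1])    # network wrongly confident at index 2
prior = np.array([1.0, 1.0, 0.0, 0.0])  # atlas: index 2 lies outside the shape
print(refine_with_shape_prior(fcn, prior))  # [1 1 0 0]
```

In the paper the prior comes from non-rigid registration of high-resolution bi-ventricular atlases; here the prior map is simply given.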
Conference paper
Kermani NZ, Pavlidis S, Saqi M, et al., 2018,
Further resolution of non-T2 asthma subtypes from high-throughput sputum transcriptomics data in U-BIOPRED, 28th International Congress of the European-Respiratory-Society (ERS), Publisher: European Respiratory Society, Pages: 1-3, ISSN: 0903-1936
Background: Precision medicine of asthma requires understanding of its heterogeneity and molecular pathophysiology. Aim: Three sputum-derived transcriptomic clusters (TACs) were previously identified [Kuo et al., Eur Respir J 2017; 49] in the U-BIOPRED cohort: TAC1 consisting of T2-high patients with eosinophilia, TAC2 with neutrophilia and inflammasome activation, and TAC3, a more heterogeneous cluster of mostly paucigranulocytic patients. Here we further refine TAC3. Methods: Gaussian mixture modelling for model-based clustering was applied to sputum gene expression from 104 asthmatic participants in the adult cohort to substructure TAC3. Gene set variation analysis (GSVA) was used to explore the enrichment of gene signatures across the TACs. Results: We reproduced the three TACs (TAC1 N=23, TAC2 N=24), but TAC3 was further split into two groups (TAC3a N=28, TAC3b N=29), distinguished by distinct neutrophil and macrophage densities and by enrichment of IL13-stimulation, inflammasome-activation and OXPHOS gene signatures (Figure), as well as IL-4- and LPS-stimulated macrophage gene signatures. However, there were no distinguishing clinical features. Conclusion: Identification of the sub-structure of sputum TACs, particularly TAC3, will help towards improved targeted therapies.
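Model-based clustering with a Gaussian mixture, the technique used above to substructure TAC3, can be sketched in one dimension with a minimal EM loop. This is an illustrative toy on simulated data, not the study's actual pipeline, and all names and parameters are assumptions:

```python
import numpy as np

def gmm_two_component_1d(x, iters=50):
    """Minimal EM for a two-component 1-D Gaussian mixture: returns the
    posterior responsibility of component 1 for each sample (the essence
    of model-based clustering)."""
    mu = np.array([x.min(), x.max()])            # crude initialisation
    sigma = np.array([x.std(), x.std()]) + 1e-6
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: responsibilities under the current parameters
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means and standard deviations
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk) + 1e-6
    return resp[:, 1]

# Two simulated "expression" groups; the mixture recovers the split.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 0.5, 30), rng.normal(4, 0.5, 30)])
labels = (gmm_two_component_1d(x) > 0.5).astype(int)
```

In practice the study clustered multi-gene expression profiles rather than a single variable, but the EM machinery is the same.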
Journal article
Brandsma J, Goss VM, Yang X, et al., 2018,
Background: Lung epithelial lining fluid (ELF), sampled through sputum induction, is a medium rich in cells, proteins and lipids. However, despite its key role in maintaining lung function, homeostasis and defences, the composition and biology of ELF, especially in respect of lipids, remain incompletely understood. Objectives: To characterise the induced sputum lipidome of healthy adult individuals, and to examine associations between different ELF lipid phenotypes and the demographic characteristics within the study cohort. Methods: Induced sputum samples were obtained from 41 healthy non-smoking adults, and their lipid compositions analysed using a combination of untargeted shotgun and liquid chromatography mass spectrometry methods. Topological data analysis (TDA) was used to group subjects with comparable sputum lipidomes in order to identify distinct ELF phenotypes. Results: The induced sputum lipidome was diverse, comprising a range of different molecular classes, including at least 75 glycerophospholipids, 13 sphingolipids, 5 sterol lipids and 12 neutral glycerolipids. TDA identified two distinct phenotypes differentiated by a higher total lipid content and specific enrichments of diacyl-glycerophosphocholines, -inositols and -glycerols in one group, with enrichments of sterols, glycolipids and sphingolipids in the other. Subjects presenting the lipid-rich ELF phenotype also had significantly higher BMI, but did not differ in respect of other demographic characteristics such as age or gender. Conclusions: We provide the first evidence that the ELF lipidome varies significantly between healthy individuals and propose that such differences are related to weight status, highlighting the potential impact of (over)nutrition on lung lipid metabolism.
Journal article
Dolan D, Jensen H, Martinez Mediano P, et al., 2018,
The improvisational state of mind: a multidisciplinary study of an improvisatory approach to classical music repertoire performance, Frontiers in Psychology, Vol: 9, ISSN: 1664-1078
The recent re-introduction of improvisation as a professional practice within classical music, however cautious and still rare, allows direct and detailed contemporary comparison between improvised and “standard” approaches to performances of the same composition, comparisons which hitherto could only be inferred from impressionistic historical accounts. This study takes an interdisciplinary multi-method approach to discovering the contrasting nature and effects of prepared and improvised approaches during live chamber-music concert performances of a movement from Franz Schubert’s “Shepherd on the Rock”, given by a professional trio consisting of voice, flute, and piano, in the presence of an invited audience of 22 adults with varying levels of musical experience and training. The improvised performances were found to differ systematically from prepared performances in their timing, dynamic, and timbral features as well as in the degree of risk-taking and “mind reading” between performers, including during moments of added extemporised notes. Post-performance critical reflection by the performers characterised distinct mental states underlying the two modes of performance. The amount of overall body movement was reduced in the improvised performances, which showed fewer unco-ordinated movements between performers when compared to the prepared performance. Audience members, who were told only that the two performances would be different, but not how, rated the improvised version as more emotionally compelling and musically convincing than the prepared version. The size of this effect was not affected by whether or not the audience could see the performers, or by levels of musical training. EEG measurements from 19 scalp locations showed higher levels of Lempel-Ziv complexity (associated with awareness and alertness) in the improvised version in both performers and audience. Results are discussed in terms of their potential
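The Lempel-Ziv complexity used in the EEG analysis above can be computed with a standard LZ76-style parser on a binarised signal. This generic implementation is for illustration and is not taken from the study:

```python
def lempel_ziv_complexity(sequence):
    """Count the number of distinct phrases in an LZ76-style parsing of a
    symbol sequence (a common complexity measure for binarised EEG):
    each phrase is extended while it still occurs in the preceding text."""
    s = "".join(str(x) for x in sequence)
    i, complexity = 0, 0
    while i < len(s):
        length = 1
        # Extend the current phrase while it has appeared before its end.
        while i + length <= len(s) and s[i:i + length] in s[:i + length - 1]:
            length += 1
        complexity += 1
        i += length
    return complexity

# The classic test sequence from Kaspar & Schuster parses into 6 phrases,
# while a constant sequence collapses to complexity 2.
print(lempel_ziv_complexity("0001101001000101"))  # 6
print(lempel_ziv_complexity("0000000000"))        # 2
```

A richer, more varied signal produces more new phrases and hence a higher count, which is why the measure tracks alertness in binarised EEG.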
Conference paper
Alansary A, Le Folgoc L, Vaillant G, et al., 2018,
We propose a fully automatic method to find standardized view planes in 3D image acquisitions. Standard view images are important in clinical practice as they provide a means to perform biometric measurements from similar anatomical regions. These views are often constrained to the native orientation of a 3D image acquisition. Navigating through target anatomy to find the required view plane is tedious and operator-dependent. For this task, we employ a multi-scale reinforcement learning (RL) agent framework and extensively evaluate several Deep Q-Network (DQN) based strategies. RL enables a natural learning paradigm by interaction with the environment, which can be used to mimic experienced operators. We evaluate our results using the distance between the anatomical landmarks and detected planes, and the angles between their normal vectors and targets. The proposed algorithm is assessed on the mid-sagittal and anterior-posterior commissure planes of brain MRI, and the 4-chamber long-axis plane commonly used in cardiac MRI, achieving accuracies of 1.53mm, 1.98mm and 4.84mm, respectively.
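The two evaluation measures described, landmark-to-plane distance and the angle between plane normals, can be sketched directly. The function and the toy geometry below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def plane_metrics(landmark, plane_point, normal, target_normal):
    """Distance from an anatomical landmark to a detected plane, and the
    angle (degrees) between the detected and target plane normals."""
    n = normal / np.linalg.norm(normal)
    t = target_normal / np.linalg.norm(target_normal)
    distance = abs(np.dot(landmark - plane_point, n))
    # abs() makes the angle orientation-independent; clip guards arccos.
    angle = np.degrees(np.arccos(np.clip(abs(np.dot(n, t)), 0.0, 1.0)))
    return distance, angle

d, a = plane_metrics(
    landmark=np.array([0.0, 0.0, 2.0]),       # landmark 2 units off-plane
    plane_point=np.array([0.0, 0.0, 0.0]),    # detected plane through origin
    normal=np.array([0.0, 0.0, 1.0]),         # detected normal: +z
    target_normal=np.array([0.0, 1.0, 1.0]),  # target tilted 45 degrees
)
print(d, a)  # distance 2.0, angle ~45 degrees
```

An RL agent minimising these two quantities over plane parameters would be rewarded for moves that reduce them, mimicking an operator's navigation.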
Conference paper
Tarroni G, Oktay O, Sinclair M, et al., 2018,
A comprehensive approach for learning-based fully-automated inter-slice motion correction for short-axis cine cardiac MR image stacks, 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) / 8th Eurographics Workshop on Visual Computing for Biology and Medicine (VCBM), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 268-276, ISSN: 0302-9743
In the clinical routine, short axis (SA) cine cardiac MR (CMR) image stacks are acquired during multiple subsequent breath-holds. If the patient cannot consistently hold the breath at the same position, the acquired image stack will be affected by inter-slice respiratory motion and will not correctly represent the cardiac volume, introducing potential errors in the following analyses and visualisations. We propose an approach to automatically correct inter-slice respiratory motion in SA CMR image stacks. Our approach makes use of probabilistic segmentation maps (PSMs) of the left ventricular (LV) cavity generated with decision forests. PSMs are generated for each slice of the SA stack and rigidly registered in-plane to a target PSM. If long axis (LA) images are available, PSMs are generated for them and combined to create the target PSM; if not, the target PSM is produced from the same stack using a 3D model trained from motion-free stacks. The proposed approach was tested on a dataset of SA stacks acquired from 24 healthy subjects (for which anatomical 3D cardiac images were also available as reference) and compared to two techniques which use LA intensity images and LA segmentations as targets, respectively. The results show the accuracy and robustness of the proposed approach in motion compensation.
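The in-plane rigid registration of PSMs can be illustrated with a toy translation-only search that maximises overlap with the target map (a full rigid registration would also search rotations). All names and sizes below are hypothetical:

```python
import numpy as np

def best_inplane_shift(psm, target, max_shift=3):
    """Exhaustively search integer in-plane translations of a slice's
    probabilistic segmentation map (PSM) and return the shift that best
    aligns it with the target PSM (highest overlap of probability mass)."""
    best, best_score = (0, 0), -np.inf
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(psm, dy, axis=0), dx, axis=1)
            score = np.sum(shifted * target)
            if score > best_score:
                best, best_score = (dy, dx), score
    return best

target = np.zeros((9, 9)); target[3:6, 3:6] = 1.0        # target LV map
moved = np.roll(np.roll(target, 2, axis=0), -1, axis=1)  # breath-hold offset
print(best_inplane_shift(moved, target))  # recovers (-2, 1)
```

Registering smooth probability maps rather than raw intensities is what makes the alignment robust to contrast differences between slices.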
Conference paper
Qin C, Bai W, Schlemper J, et al., 2018,
Accelerating the acquisition of magnetic resonance imaging (MRI) is a challenging problem, and many works have been proposed to reconstruct images from undersampled k-space data. However, if the main purpose is to extract certain quantitative measures from the images, perfect reconstructions may not always be necessary as long as the images enable the extraction of the clinically relevant measures. In this paper, we jointly predict cardiac motion estimation and segmentation directly from undersampled data, two important steps in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In particular, a unified model consisting of both a motion estimation branch and a segmentation branch is learned by optimising the two tasks simultaneously. Corresponding fully-sampled images are additionally incorporated into the network as a parallel sub-network to enhance and guide the learning during the training process. Experimental results using cardiac MR images from 220 subjects show that the proposed model is robust to undersampled data and is capable of predicting results that are close to those from fully-sampled data, while bypassing the usual image reconstruction stage.
Conference paper
Schlemper J, Castro DC, Bai W, et al., 2018,
Recently, many deep learning (DL) based MR image reconstruction methods have been proposed with promising results. However, only a handful of works have focused on characterising the behaviour of deep networks, such as investigating when the networks may fail to reconstruct. In this work, we explore the applicability of Bayesian DL techniques to model the uncertainty associated with DL-based reconstructions. In particular, we apply MC-dropout and a heteroscedastic loss to the reconstruction networks to model epistemic and aleatoric uncertainty. We show that the proposed Bayesian methods achieve competitive performance when the test images are relatively far from the training data distribution, and outperform the baseline when it is over-parametrised. In addition, we qualitatively show a correlation between the magnitude of the produced uncertainty maps and the error maps, demonstrating the potential utility of Bayesian DL methods for assessing the reliability of reconstructed images.
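MC-dropout, as applied above for epistemic uncertainty, keeps dropout active at test time and aggregates repeated stochastic forward passes. The toy linear "network" below is an assumption purely for illustration; in the paper the passes run through a full reconstruction network:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, weights, p=0.5):
    """One stochastic forward pass: dropout stays ACTIVE at test time,
    randomly zeroing weights (inverted dropout scaling keeps the mean)."""
    mask = rng.random(weights.shape) >= p
    return x @ (weights * mask) / (1.0 - p)

weights = rng.normal(size=(4, 1))
x = rng.normal(size=(1, 4))

# Monte Carlo sampling: the sample mean serves as the prediction and the
# sample variance as a per-output estimate of epistemic uncertainty.
samples = np.array([forward_with_dropout(x, weights) for _ in range(100)])
prediction = samples.mean(axis=0)
uncertainty = samples.var(axis=0)
```

Inputs far from the training distribution tend to produce higher variance across passes, which is the basis for flagging unreliable reconstructions.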
Conference paper
Biffi C, Oktay O, Tarroni G, et al., 2018,
Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling, International Conference On Medical Image Computing & Computer Assisted Intervention, Publisher: Springer, Pages: 464-471, ISSN: 0302-9743
Alterations in the geometry and function of the heart define well-established causes of cardiovascular disease. However, current approaches to the diagnosis of cardiovascular diseases often rely on subjective human assessment as well as manual analysis of medical images. Both factors limit the sensitivity in quantifying complex structural and functional phenotypes. Deep learning approaches have recently achieved success for tasks such as classification or segmentation of medical images, but lack interpretability in the feature extraction and decision processes, limiting their value in clinical diagnosis. In this work, we propose a 3D convolutional generative model for automatic classification of images from patients with cardiac diseases associated with structural remodeling. The model leverages interpretable task-specific anatomic patterns learned from 3D segmentations. It further allows the learned pathology-specific remodeling patterns to be visualised and quantified in the original input space of the images. This approach yields high accuracy in the categorisation of healthy and hypertrophic cardiomyopathy subjects when tested on unseen MR images from our own multi-centre dataset (100%) as well as on the ACDC MICCAI 2017 dataset (90%). We believe that the proposed deep learning approach is a promising step towards the development of interpretable classifiers for the medical imaging domain, which may help clinicians to improve diagnostic accuracy and enhance patient risk-stratification.
Conference paper
Qin C, Bai W, Schlemper J, et al., 2018,
Cardiac motion estimation and segmentation play important roles in quantitatively assessing cardiac function and diagnosing cardiovascular diseases. In this paper, we propose a novel deep learning method for joint estimation of motion and segmentation from cardiac MR image sequences. The proposed network consists of two branches: a cardiac motion estimation branch which is built on a novel unsupervised Siamese style recurrent spatial transformer network, and a cardiac segmentation branch that is based on a fully convolutional network. In particular, a joint multi-scale feature encoder is learned by optimizing the segmentation branch and the motion estimation branch simultaneously. This enables weakly-supervised segmentation by taking advantage of features that are learned without supervision in the motion estimation branch from a large amount of unannotated data. Experimental results using cardiac MR images from 220 subjects show that the joint learning of both tasks is complementary and the proposed models outperform the competing methods significantly in terms of accuracy and speed.
Conference paper
Schlemper J, Oktay O, Bai W, et al., 2018,
Reconstructing magnetic resonance imaging (MRI) from undersampled k-space enables the accelerated acquisition of MRI but is a challenging problem. However, in many diagnostic scenarios, perfect reconstructions are not necessary as long as the images allow clinical practitioners to extract clinically relevant parameters. In this work, we present a novel deep learning framework for reconstructing such clinical parameters directly from undersampled data, expanding on the idea of application-driven MRI. We propose two deep architectures, an end-to-end synthesis network and a latent feature interpolation network, to predict cardiac segmentation maps from extremely undersampled dynamic MRI data, bypassing the usual image reconstruction stage altogether. We perform a large-scale simulation study using UK Biobank data containing nearly 1000 test subjects and show that with the proposed approaches, an accurate estimate of clinical parameters such as ejection fraction can be obtained from fewer than 10 k-space lines per time-frame.
Conference paper
Bai W, Suzuki H, Qin C, et al., 2018,
Segmentation of image sequences is an important task in medical image analysis, which enables clinicians to assess the anatomy and function of moving organs. However, direct application of a segmentation algorithm to each time frame of a sequence may ignore the temporal continuity inherent in the sequence. In this work, we propose an image sequence segmentation algorithm by combining a fully convolutional network with a recurrent neural network, which incorporates both spatial and temporal information into the segmentation task. A key challenge in training this network is that the available manual annotations are temporally sparse, which forbids end-to-end training. We address this challenge by performing non-rigid label propagation on the annotations and introducing an exponentially weighted loss function for training. Experiments on aortic MR image sequences demonstrate that the proposed method significantly improves both accuracy and temporal smoothness of segmentation, compared to a baseline method that utilises spatial information only. It achieves an average Dice metric of 0.960 for the ascending aorta and 0.953 for the descending aorta.
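The exponentially weighted loss for temporally sparse annotations can be sketched as per-frame weights that decay with distance from the nearest annotated frame, so propagated labels contribute less the further they are from a manual annotation. The weighting form and decay rate below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def propagation_weights(num_frames, annotated, rate=0.5):
    """Per-frame loss weights for labels propagated from a sparse set of
    annotated frames: weight decays exponentially with the temporal
    distance to the nearest annotation (annotated frames get weight 1)."""
    frames = np.arange(num_frames)
    dist = np.min(np.abs(frames[:, None] - np.array(annotated)[None, :]),
                  axis=1)
    return np.exp(-rate * dist)

# 8-frame sequence with manual annotations only at the first and last
# frames; middle frames rely on propagated (less trusted) labels.
w = propagation_weights(num_frames=8, annotated=[0, 7], rate=0.5)
print(np.round(w, 3))
```

Multiplying each frame's segmentation loss by its weight lets the network train end-to-end on every frame while trusting propagated labels in proportion to their likely accuracy.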
Journal article
Bai W, Sinclair M, Tarroni G, et al., 2018,
Automated cardiovascular magnetic resonance image analysis with fully convolutional networks
Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV), end-systolic volume (LVESV) and LV mass (LVM), and right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). By combining the FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance on par with human experts in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images.
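Two of the measures reported above, the Dice metric and ejection fraction, are straightforward to compute from binary segmentations and chamber volumes. A minimal sketch with toy masks and volumes (not the paper's data):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    return 2.0 * np.logical_and(pred, truth).sum() / denom if denom else 1.0

def ejection_fraction(edv, esv):
    """Ejection fraction (%) from end-diastolic and end-systolic volumes."""
    return 100.0 * (edv - esv) / edv

truth = np.zeros((8, 8)); truth[2:6, 2:6] = 1  # 16-pixel reference LV mask
pred = np.zeros((8, 8)); pred[3:7, 2:6] = 1    # prediction shifted one row
print(dice(pred, truth))            # 2*12 / (16+16) = 0.75
print(ejection_fraction(150, 60))   # (150-60)/150 * 100 = 60.0
```

Clinical volumes such as LVEDV and LVESV come from integrating the segmented cavity area over slices, after which EF follows from the formula above.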
Conference paper
Robinson R, Oktay O, Bai W, et al., 2018,
Recent advances in deep learning based image segmentation methods have enabled real-time performance with human-level accuracy. However, occasionally even the best method fails due to low image quality, artifacts or unexpected behaviour of black box algorithms. Being able to predict segmentation quality in the absence of ground truth is of paramount importance in clinical practice, but also in large-scale studies to avoid the inclusion of invalid data in subsequent analysis. In this work, we propose two approaches to real-time automated quality control for cardiovascular MR segmentations using deep learning. First, we train a neural network on 12,880 samples to predict Dice Similarity Coefficients (DSC) on a per-case basis. We report a mean absolute error (MAE) of 0.03 on 1,610 test samples and 97% binary classification accuracy for separating low and high quality segmentations. Secondly, in the scenario where no manually annotated data is available, we train a network to predict DSC scores from estimated quality obtained via a reverse testing strategy. We report an MAE of 0.14 and 91% binary classification accuracy for this case. Predictions are obtained in real time which, when combined with real-time segmentation methods, enables instant feedback on whether an acquired scan is analysable while the patient is still in the scanner. This further enables new applications of optimising image acquisition towards best possible analysis results.
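The per-case DSC prediction and its binarisation into low/high quality can be sketched as follows. The 0.7 threshold and the toy numbers below are illustrative assumptions, not the study's values:

```python
import numpy as np

def quality_labels(predicted_dsc, threshold=0.7):
    """Turn per-case predicted Dice Similarity Coefficients into a binary
    quality flag (1 = high quality); the threshold is an illustrative
    choice for the low/high split used in classification accuracy."""
    return (np.asarray(predicted_dsc) >= threshold).astype(int)

def mean_absolute_error(predicted_dsc, true_dsc):
    """MAE between predicted and ground-truth DSC scores."""
    return float(np.mean(np.abs(np.asarray(predicted_dsc)
                                - np.asarray(true_dsc))))

pred = [0.95, 0.91, 0.40, 0.88]   # network-predicted per-case DSC
true = [0.93, 0.94, 0.45, 0.90]   # ground-truth DSC (toy values)
print(quality_labels(pred))             # [1 1 0 1]
print(mean_absolute_error(pred, true))  # ~0.03
```

A flagged case (here the 0.40 prediction) could trigger an immediate re-scan while the patient is still in the scanner, which is the workflow the paper targets.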