
Search results

  • Conference paper
    Wu F, Li L, Yang G, Wong T, Mohiaddin R, Firmin D, Keegan J, Xu L, Zhuang X et al., 2018,

    Atrial Fibrosis Quantification Based on Maximum Likelihood Estimator of Multivariate Images

    , 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2018), Pages: 604-612, ISSN: 0302-9743

    © 2018, Springer Nature Switzerland AG. We present a fully-automated segmentation and quantification of left atrial (LA) fibrosis and scars combining two cardiac MRIs: the target late gadolinium-enhanced (LGE) image and an anatomical MRI from the same acquisition session. We formulate the joint distribution of images using a multivariate mixture model (MvMM), and employ the maximum likelihood estimator (MLE) for texture classification of the images simultaneously. The MvMM can also embed transformations assigned to the images to correct the misregistration. The iterated conditional mode algorithm is adopted for optimization. This method first extracts the anatomical shape of the LA, and then estimates a prior probability map. It projects the resulting segmentation onto the LA surface, for quantification and analysis of scarring. We applied the proposed method to 36 clinical data sets and obtained promising results (Accuracy: 0.809±0.150, Dice: 0.556±0.187). We compared the method with conventional algorithms and showed statistically significantly better performance (p < 0.03).
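    The maximum-likelihood texture classification described above can be sketched as a per-pixel posterior over a multivariate mixture, assuming conditional independence of the two images given the tissue class (an illustrative toy, not the authors' implementation; the class labels, means, and standard deviations below are invented, and the paper's registration and iterated-conditional-mode steps are omitted):

    ```python
    import math

    def gaussian(x, mu, sigma):
        """Univariate normal density."""
        return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

    def classify_pixel(intensities, classes, prior):
        """Posterior over tissue classes for one pixel observed in several images.

        intensities: one intensity per image (e.g. LGE and anatomical MRI).
        classes: {label: [(mu, sigma) per image]} -- invented illustrative parameters.
        prior: {label: prior probability}, e.g. from a prior probability map.
        """
        likelihoods = {}
        for label, params in classes.items():
            lik = prior[label]
            # Multivariate mixture with conditional independence across images
            for x, (mu, sigma) in zip(intensities, params):
                lik *= gaussian(x, mu, sigma)
            likelihoods[label] = lik
        total = sum(likelihoods.values())
        return {label: lik / total for label, lik in likelihoods.items()}
    ```

    In the full MvMM, the mixture parameters and the spatial prior map are themselves estimated by maximum likelihood rather than fixed in advance.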

  • Conference paper
    Shi Z, Zeng G, Zhang L, Zhuang X, Li L, Yang G, Zheng G et al., 2018,

    Bayesian VoxDRN: A Probabilistic Deep Voxelwise Dilated Residual Network for Whole Heart Segmentation from 3D MR Images

    , 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2018), Pages: 569-577, ISSN: 0302-9743

    © 2018, Springer Nature Switzerland AG. In this paper, we propose a probabilistic deep voxelwise dilated residual network, referred to as Bayesian VoxDRN, to segment the whole heart from 3D MR images. Bayesian VoxDRN can predict voxelwise class labels with a measure of model uncertainty, which is achieved by dropout-based Monte Carlo sampling during testing to generate a posterior distribution of the voxel class labels. Our method has three compelling advantages. First, the dropout mechanism encourages the model to learn a distribution of weights with better data-explanation ability and prevents over-fitting. Second, focal loss and Dice loss are well encapsulated into a complementary learning objective to segment both hard and easy classes. Third, an iterative switch training strategy is introduced to alternately optimize a binary segmentation task and a multi-class segmentation task for further accuracy improvement. Experiments on the MICCAI 2017 multi-modality whole heart segmentation challenge data corroborate the effectiveness of the proposed method.
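    The dropout-based Monte Carlo sampling that gives Bayesian VoxDRN its uncertainty estimates can be illustrated on a toy model: dropout is left active at test time, the model is run many times, and the mean and variance of the stochastic outputs serve as prediction and uncertainty (a minimal sketch, not the VoxDRN architecture; the linear "network" and its weights are invented):

    ```python
    import random

    def stochastic_forward(x, weights, p_drop, rng):
        """One forward pass with dropout left ON (inverted-dropout scaling)."""
        out = 0.0
        for w in weights:
            if rng.random() >= p_drop:  # this unit survives the dropout draw
                out += (w / (1.0 - p_drop)) * x
        return out

    def mc_predict(x, weights, p_drop=0.5, n_samples=2000, seed=0):
        """Monte Carlo dropout: sample mean = prediction, variance = uncertainty."""
        rng = random.Random(seed)
        samples = [stochastic_forward(x, weights, p_drop, rng) for _ in range(n_samples)]
        mean = sum(samples) / n_samples
        var = sum((s - mean) ** 2 for s in samples) / n_samples
        return mean, var
    ```

    In the voxelwise case the same procedure yields a posterior distribution over class labels per voxel, whose spread flags regions the model is unsure about.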

  • Conference paper
    Mo Y, Liu F, McIlwraith D, Yang G, Zhang J, He T, Guo Y et al., 2018,

    The Deep Poincaré Map: A Novel Approach for Left Ventricle Segmentation

    , 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2018), Pages: 561-568, ISSN: 0302-9743

    © 2018, Springer Nature Switzerland AG. Precise segmentation of the left ventricle (LV) within cardiac MRI images is a prerequisite for the quantitative measurement of heart function. However, this task is challenging due to the limited availability of labeled data and motion artifacts from cardiac imaging. In this work, we present an iterative segmentation algorithm for LV delineation. By coupling deep learning with a novel dynamic-based labeling scheme, we present a new methodology where a policy model is learned to guide an agent to travel over the image, tracing out a boundary of the ROI – using the magnitude difference of the Poincaré map as a stopping criterion. Our method is evaluated on two datasets, namely the Sunnybrook Cardiac Dataset (SCD) and data from the STACOM 2011 LV segmentation challenge. Our method outperforms previous research on many metrics. To demonstrate the transferability of our method, we present encouraging results on the STACOM 2011 data when using a model trained on the SCD dataset.
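    The stopping criterion, comparing successive returns of the trajectory to a Poincaré section, can be illustrated on a toy dynamical system whose limit cycle plays the role of the ROI boundary (the flow below is invented for illustration; in the paper the displacement field is learned by the policy model rather than written down):

    ```python
    import math

    def trace_limit_cycle(r0, step=0.01, tol=1e-6, max_loops=1000):
        """Follow a toy flow (dr/dt = 1 - r, dtheta/dt = 1) whose limit cycle
        is the circle r = 1, standing in for the ROI boundary.

        A Poincare section records r once per revolution; the 'agent' stops
        when two successive returns differ by less than tol."""
        r, theta = r0, 0.0
        next_section = 2 * math.pi
        prev_r = None
        while theta < max_loops * 2 * math.pi:
            r += (1.0 - r) * step      # Euler step of the radial dynamics
            theta += step              # constant angular speed
            if theta >= next_section:  # crossed the Poincare section
                if prev_r is not None and abs(r - prev_r) < tol:
                    return r           # returns have converged: boundary found
                prev_r = r
                next_section += 2 * math.pi
        return r
    ```

    Starting inside or outside the cycle, the trajectory spirals onto the boundary, and the magnitude difference between successive section returns shrinks below the threshold.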

  • Conference paper
    Seitzer M, Yang G, Schlemper J, Oktay O, Würfl T, Christlein V, Wong T, Mohiaddin R, Firmin D, Keegan J, Rueckert D, Maier A et al., 2018,

    Adversarial and perceptual refinement for compressed sensing MRI reconstruction

    , 21st International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI2018), Pages: 232-240, ISSN: 0302-9743

    © Springer Nature Switzerland AG 2018. Deep learning approaches have shown promising performance for compressed sensing-based Magnetic Resonance Imaging. While deep neural networks trained with mean squared error (MSE) loss functions can achieve high peak signal to noise ratio, the reconstructed images are often blurry and lack sharp details, especially for higher undersampling rates. Recently, adversarial and perceptual loss functions have been shown to achieve more visually appealing results. However, it remains an open question how to (1) optimally combine these loss functions with the MSE loss function and (2) evaluate such a perceptual enhancement. In this work, we propose a hybrid method, in which a visual refinement component is learnt on top of an MSE loss-based reconstruction network. In addition, we introduce a semantic interpretability score, measuring the visibility of the region of interest in both ground truth and reconstructed images, which allows us to objectively quantify the usefulness of the image quality for image post-processing and analysis. Applied to a large cardiac MRI dataset simulated with 8-fold undersampling, we demonstrate significant improvements (p<0.01) over the state-of-the-art in both a human observer study and the semantic interpretability score.
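    The tension described above, high PSNR from MSE training but blurry-looking images, and the hybrid objective can be sketched as follows (the loss weights are illustrative placeholders, not the paper's values):

    ```python
    import math

    def psnr(mse, max_val=1.0):
        """Peak signal-to-noise ratio in dB. A high PSNR does not guarantee a
        sharp-looking reconstruction, which motivates the refinement stage."""
        return 10.0 * math.log10(max_val ** 2 / mse)

    def refinement_loss(mse_term, adv_term, percep_term, w_adv=0.1, w_percep=1.0):
        """Hybrid objective: MSE base loss plus adversarial and perceptual
        refinement terms. The weights w_adv / w_percep are invented here."""
        return mse_term + w_adv * adv_term + w_percep * percep_term
    ```

    The paper's semantic interpretability score complements such pixel-level measures by checking whether the region of interest remains visible after reconstruction.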

  • Journal article
    Reynolds HM, Parameswaran BK, Finnegan ME, Roettger D, Lau E, Kron T, Shaw M, Chander S, Siva S et al., 2018,

    Diffusion weighted and dynamic contrast enhanced MRI as an imaging biomarker for stereotactic ablative body radiotherapy (SABR) of primary renal cell carcinoma

    , PLOS ONE, Vol: 13, Pages: e0202387
  • Journal article
    Yang G, Yu S, Hao D, Slabaugh G, Dragotti PL, Ye X, Liu F, Arridge S, Keegan J, Guo Y, Firmin D et al., 2018,

    DAGAN: deep de-aliasing generative adversarial networks for fast compressed sensing MRI reconstruction

    , IEEE Transactions on Medical Imaging, Vol: 37, Pages: 1310-1321, ISSN: 0278-0062

    Compressed Sensing Magnetic Resonance Imaging (CS-MRI) enables fast acquisition, which is highly desirable for numerous clinical applications. This can not only reduce the scanning cost and ease patient burden, but also potentially reduce motion artefacts and the effect of contrast washout, thus yielding better image quality. Different from parallel imaging based fast MRI, which utilises multiple coils to simultaneously receive MR signals, CS-MRI breaks the Nyquist-Shannon sampling barrier to reconstruct MRI images with much less required raw data. This paper provides a deep learning based strategy for reconstruction of CS-MRI, and bridges a substantial gap between conventional non-learning methods working only on data from a single image, and prior knowledge from large training datasets. In particular, a novel conditional Generative Adversarial Networks-based model (DAGAN) is proposed to reconstruct CS-MRI. In our DAGAN architecture, we have designed a refinement learning method to stabilise our U-Net based generator, which provides an end-to-end network to reduce aliasing artefacts. To better preserve texture and edges in the reconstruction, we have coupled the adversarial loss with an innovative content loss. In addition, we incorporate frequency domain information to enforce similarity in both the image and frequency domains. We have performed comprehensive comparison studies with both conventional CS-MRI reconstruction methods and newly investigated deep learning approaches. Compared to these methods, our DAGAN method provides superior reconstruction with preserved perceptual image details. Furthermore, each image is reconstructed in about 5 ms, which is suitable for real-time processing.
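    The idea of enforcing similarity in both the image and frequency domains can be sketched with a 1-D discrete Fourier transform standing in for 2-D k-space (an illustrative toy; DAGAN's full objective also includes adversarial and content losses, and the weighting below is invented):

    ```python
    import math

    def dft(signal):
        """Naive 1-D discrete Fourier transform (stand-in for 2-D k-space)."""
        n = len(signal)
        return [
            sum(s * complex(math.cos(-2 * math.pi * k * t / n),
                            math.sin(-2 * math.pi * k * t / n))
                for t, s in enumerate(signal))
            for k in range(n)
        ]

    def dual_domain_loss(truth, recon, w_freq=1.0):
        """Image-domain MSE plus frequency-domain MSE, mirroring the idea of
        penalising mismatch in both domains (weight w_freq invented here)."""
        n = len(truth)
        img = sum((a - b) ** 2 for a, b in zip(truth, recon)) / n
        freq = sum(abs(fa - fb) ** 2 for fa, fb in zip(dft(truth), dft(recon))) / n
        return img + w_freq * freq
    ```

    A small pixel error concentrated in one location spreads across all frequencies, so the frequency term penalises structured mismatch that image-domain MSE alone can underweight.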

  • Journal article
    Reynolds HM, Parameswaran B, Roettger D, Finnegan M, Lau E, Kron T, Shaw M, Chander S, Siva S et al., 2018,

    Assessing DCE-MRI and DWI as treatment response biomarkers after SABR for primary renal cell carcinoma

    , Journal of Clinical Oncology, Vol: 36, ISSN: 0732-183X
  • Journal article
    Yang G, Zhuang X, Khan H, Haldar S, Nyktari E, Li L, Wage R, Ye X, Slabaugh G, Mohiaddin R, Wong T, Keegan J, Firmin D et al., 2018,

    Fully automatic segmentation and objective assessment of atrial scars for long-standing persistent atrial fibrillation patients using late gadolinium-enhanced MRI.

    , Med Phys, Vol: 45, Pages: 1562-1576

    PURPOSE: Atrial fibrillation (AF) is the most common heart rhythm disorder and causes considerable morbidity and mortality, resulting in a large public health burden that is increasing as the population ages. It is associated with atrial fibrosis, the amount and distribution of which can be used to stratify patients and to guide subsequent electrophysiology ablation treatment. Atrial fibrosis may be assessed noninvasively using late gadolinium-enhanced (LGE) magnetic resonance imaging (MRI) where scar tissue is visualized as a region of signal enhancement. However, manual segmentation of the heart chambers and of the atrial scar tissue is time consuming and subject to interoperator variability, particularly as image quality in AF is often poor. In this study, we propose a novel fully automatic pipeline to achieve accurate and objective segmentation of the heart (from MRI Roadmap data) and of scar tissue within the heart (from LGE MRI data) acquired in patients with AF. METHODS: Our fully automatic pipeline uniquely combines: (a) a multiatlas-based whole heart segmentation (MA-WHS) to determine the cardiac anatomy from an MRI Roadmap acquisition which is then mapped to LGE MRI, and (b) a super-pixel and supervised learning based approach to delineate the distribution and extent of atrial scarring in LGE MRI. We compared the accuracy of the automatic analysis to manual ground truth segmentations in 37 patients with persistent long-standing AF. RESULTS: Both our MA-WHS and atrial scarring segmentations showed accurate delineations of cardiac anatomy (mean Dice = 89%) and atrial scarring (mean Dice = 79%), respectively, compared to the established ground truth from manual segmentation. In addition, compared to the ground truth, we obtained 88% segmentation accuracy, with 90% sensitivity and 79% specificity. Receiver operating characteristic analysis achieved an average area under the curve of 0.91. 
CONCLUSION: Compared with previously studied methods with manual interve
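    The multi-atlas component (MA-WHS) propagates labels from several registered atlases onto the target and fuses them; majority voting is the simplest fusion rule, and the Dice overlap reported above can be computed alongside it (a minimal sketch on flattened label lists; the paper's pipeline uses more sophisticated registration and fusion):

    ```python
    from collections import Counter

    def fuse_labels(atlas_votes):
        """Majority-vote label fusion across registered atlases.
        atlas_votes: one per-voxel label list per atlas, already warped
        into the target image space."""
        return [Counter(votes).most_common(1)[0][0] for votes in zip(*atlas_votes)]

    def dice(seg_a, seg_b, label):
        """Dice overlap for one label, the accuracy measure quoted above."""
        inter = sum(1 for a, b in zip(seg_a, seg_b) if a == label and b == label)
        size_a = sum(1 for a in seg_a if a == label)
        size_b = sum(1 for b in seg_b if b == label)
        return 2.0 * inter / (size_a + size_b) if (size_a + size_b) else 1.0
    ```

    Majority voting already suppresses individual registration errors because a misplaced label from one atlas is outvoted by the others.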

  • Journal article
    Soltaninejad M, Yang G, Lambrou T, Allinson N, Jones T, Barrick T, Howe F, Ye X et al., 2018,

    Supervised learning based multimodal MRI brain tumour segmentation using texture features from supervoxels

    , Computer Methods and Programs in Biomedicine, Vol: 157, Pages: 69-84, ISSN: 0169-2607

    Background: Accurate segmentation of brain tumour in magnetic resonance images (MRI) is a difficult task due to various tumour types. Using information and features from multimodal MRI, including structural MRI and the isotropic (p) and anisotropic (q) components derived from diffusion tensor imaging (DTI), may result in a more accurate analysis of brain images. Methods: We propose a novel 3D supervoxel-based learning method for segmentation of tumour in multimodal MRI brain images (conventional MRI and DTI). Supervoxels are generated using the information across the multimodal MRI dataset. For each supervoxel, a variety of features are extracted, including histograms of the texton descriptor, calculated using a set of Gabor filters with different sizes and orientations, and first-order intensity statistical features. Those features are fed into a random forests (RF) classifier to classify each supervoxel into tumour core, oedema or healthy brain tissue. Results: The method is evaluated on two datasets: (1) our clinical dataset of 11 multimodal images of patients, and (2) the BRATS 2013 clinical dataset of 30 multimodal images. For our clinical dataset, the average detection sensitivity of tumour (including tumour core and oedema) using multimodal MRI is 86% with a balanced error rate (BER) of 7%, while the Dice score for automatic tumour segmentation against ground truth is 0.84. The corresponding results for the BRATS 2013 dataset are 96%, 2% and 0.89, respectively. Conclusion: The method demonstrates promising results in the segmentation of brain tumour. Adding features from multimodal MRI images can largely increase the segmentation accuracy. The method provides a close match to expert delineation across all tumour grades, leading to a faster and more reproducible method of brain tumour detection and delineation to aid patient management.
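    The sensitivity, balanced error rate (BER) and Dice figures quoted in the Results can be computed from a binary confusion matrix as follows (a sketch on flattened binary masks, 1 = tumour, 0 = healthy):

    ```python
    def segmentation_metrics(pred, truth):
        """Sensitivity, balanced error rate (BER) and Dice from flattened
        binary masks (1 = tumour, 0 = healthy tissue)."""
        tp = sum(1 for p, t in zip(pred, truth) if p and t)
        fp = sum(1 for p, t in zip(pred, truth) if p and not t)
        fn = sum(1 for p, t in zip(pred, truth) if not p and t)
        tn = sum(1 for p, t in zip(pred, truth) if not p and not t)
        sensitivity = tp / (tp + fn)
        ber = 0.5 * (fn / (tp + fn) + fp / (fp + tn))  # mean of class-wise error rates
        dice = 2.0 * tp / (2 * tp + fp + fn)
        return sensitivity, ber, dice
    ```

    BER averages the error rates of the two classes, so it is not dominated by the large healthy-tissue background the way plain accuracy would be.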

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
