Imperial College London

Miss Chen (Cherise) Chen

Faculty of Engineering, Department of Computing

Research Associate

Contact

 

chen.chen15

Location

 

344 Huxley Building, South Kensington Campus



 

Publications


16 results found

Chen C, Hammernik K, Ouyang C, Qin C, Bai W, Rueckert D et al., 2021, Cooperative training and latent space data augmentation for robust medical image segmentation, International Conference on Medical Image Computing and Computer Assisted Intervention

Conference paper

Xiong Z, Xia Q, Hu Z, Huang N, Bian C, Zheng Y, Vesal S, Ravikumar N, Maier A, Yang X, Heng P-A, Ni D, Li C, Tong Q, Si W, Puybareau E, Khoudli Y, Geraud T, Chen C, Bai W, Rueckert D, Xu L, Zhuang X, Luo X, Jia S, Sermesant M, Liu Y, Wang K, Borra D, Masci A, Corsi C, de Vente C, Veta M, Karim R, Preetha CJ, Engelhardt S, Qiao M, Wang Y, Tao Q, Nunez-Garcia M, Camara O, Savioli N, Lamata P, Zhao J et al., 2021, A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging, Medical Image Analysis, Vol: 67, Pages: 1-14, ISSN: 1361-8415

Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed through subgroup and hyper-parameter analyses, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface-to-surface distance of 0.7 mm, significantly outperforming the prior state of the art. In particular, our analysis demonstrated that two sequentially used CNNs, in which a first CNN performs automatic region-of-interest localization and a second CNN performs refined regional segmentation, achieved superior results to traditional methods and machine learning approaches containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing future work in the field. Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities.
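The cascaded design highlighted above (one CNN for region-of-interest localisation followed by a second CNN for refined segmentation) can be summarised in a short sketch. This is an illustrative outline only, assuming hypothetical `coarse_net` and `fine_net` models and an arbitrary padding margin; it is not code from the challenge submissions.

```python
# Minimal sketch of a two-stage (cascaded) CNN segmentation pipeline.
# `coarse_net` and `fine_net` are hypothetical, pre-trained networks that map
# a volume to a probability map of the same shape.
import numpy as np

def cascaded_segmentation(volume, coarse_net, fine_net, margin=8):
    """Localise the left atrium with a coarse CNN, then refine on the crop."""
    # Stage 1: coarse segmentation of the full LGE-MRI volume
    coarse_mask = coarse_net(volume) > 0.5

    # Derive a padded bounding box around the detected region of interest
    coords = np.argwhere(coarse_mask)
    lo = np.maximum(coords.min(axis=0) - margin, 0)
    hi = np.minimum(coords.max(axis=0) + 1 + margin, np.array(volume.shape))

    # Stage 2: refined segmentation restricted to the cropped region
    crop = volume[tuple(slice(l, h) for l, h in zip(lo, hi))]
    fine_mask = fine_net(crop) > 0.5

    # Paste the refined result back into a full-size mask
    full_mask = np.zeros(volume.shape, dtype=bool)
    full_mask[tuple(slice(l, h) for l, h in zip(lo, hi))] = fine_mask
    return full_mask
```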

Journal article

Le EP, Evans NR, Tarkin JM, Chowdhury MM, Zaccagna F, Wall C, Huang Y, Weir-Mccall JR, Chen C, Warburton EA, Schonlieb CB, Sala E, Rudd JHF et al., 2020, Contrast CT classification of asymptomatic and symptomatic carotids in stroke and transient ischaemic attack with deep learning and interpretability, Publisher: Oxford University Press, Pages: 2418-2418, ISSN: 0195-668X

Conference paper

Qin C, Wang S, Chen C, Qiu H, Bai W, Rueckert D et al., 2020, Biomechanics-informed neural networks for myocardial motion tracking in MRI, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 296-306, ISSN: 0302-9743

Image registration is an ill-posed inverse problem which often requires regularisation on the solution space. In contrast to most of the current approaches which impose explicit regularisation terms such as smoothness, in this paper we propose a novel method that can implicitly learn biomechanics-informed regularisation. Such an approach can incorporate application-specific prior knowledge into deep learning based registration. Particularly, the proposed biomechanics-informed regularisation leverages a variational autoencoder (VAE) to learn a manifold for biomechanically plausible deformations and to implicitly capture their underlying properties via reconstructing biomechanical simulations. The learnt VAE regulariser then can be coupled with any deep learning based registration network to regularise the solution space to be biomechanically plausible. The proposed method is validated in the context of myocardial motion tracking on 2D stacks of cardiac MRI data from two different datasets. The results show that it can achieve better performance against other competing methods in terms of motion tracking accuracy and has the ability to learn biomechanical properties such as incompressibility and strains. The method has also been shown to have better generalisability to unseen domains compared with commonly used L2 regularisation schemes.
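A minimal sketch of how a pre-trained deformation VAE could be coupled with a registration network as a regulariser, in the spirit of the abstract above. The `reg_net` and `vae` interfaces, the `warp` helper and the weighting `alpha` are assumptions for illustration, not the published implementation.

```python
# PyTorch sketch: similarity loss plus a VAE-based, biomechanics-informed
# regulariser. `warp` is a hypothetical spatial-transform helper.
import torch

def registration_loss(reg_net, vae, warp, fixed, moving, alpha=0.01):
    disp = reg_net(fixed, moving)                   # predicted displacement field
    warped = warp(moving, disp)                     # warp moving image with it

    similarity = torch.mean((warped - fixed) ** 2)  # image-matching term

    # The learnt VAE defines a manifold of biomechanically plausible
    # deformations: penalise the distance between the predicted field and its
    # VAE reconstruction (its projection onto that manifold).
    recon, _mu, _logvar = vae(disp)
    regulariser = torch.mean((disp - recon) ** 2)

    return similarity + alpha * regulariser
```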

Conference paper

Wang S, Tarroni G, Qin C, Mo Y, Dai C, Chen C, Glocker B, Guo Y, Rueckert D, Bai W et al., 2020, Deep generative model-based quality control for cardiac MRI segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 88-97, ISSN: 0302-9743

In recent years, convolutional neural networks have demonstrated promising performance in a variety of medical image segmentation tasks. However, when a trained segmentation model is deployed into the real clinical world, the model may not perform optimally. A major challenge is the potential poor-quality segmentations generated due to degraded image quality or domain shift issues. There is a timely need to develop an automated quality control method that can detect poor segmentations and feedback to clinicians. Here we propose a novel deep generative model-based framework for quality control of cardiac MRI segmentation. It first learns a manifold of good-quality image-segmentation pairs using a generative model. The quality of a given test segmentation is then assessed by evaluating the difference from its projection onto the good-quality manifold. In particular, the projection is refined through iterative search in the latent space. The proposed method achieves high prediction accuracy on two publicly available cardiac MRI datasets. Moreover, it shows better generalisation ability than traditional regression-based methods. Our approach provides a real-time and model-agnostic quality control for cardiac MRI segmentation, which has the potential to be integrated into clinical image analysis workflows.
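A compact sketch of the latent-space projection idea: given a pre-trained decoder of the good-quality manifold, iteratively optimise a latent code so that it reconstructs the test image-segmentation pair, and use the remaining residual as a quality score. The `decoder` interface and hyper-parameters below are illustrative assumptions.

```python
# PyTorch sketch of quality control by iterative latent-space search.
# `decoder` maps a latent code to a stacked image-segmentation pair.
import torch

def quality_score(decoder, image, seg, latent_dim=64, steps=100, lr=0.1):
    pair = torch.cat([image, seg], dim=1)            # stack image and segmentation
    z = torch.zeros(pair.shape[0], latent_dim, requires_grad=True)
    optimiser = torch.optim.Adam([z], lr=lr)

    for _ in range(steps):                           # project onto the manifold
        optimiser.zero_grad()
        loss = torch.mean((decoder(z) - pair) ** 2)
        loss.backward()
        optimiser.step()

    # A large residual after projection suggests a poor-quality segmentation.
    with torch.no_grad():
        return torch.mean((decoder(z) - pair) ** 2).item()
```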

Conference paper

Chen C, Qin C, Qiu H, Ouyang C, Wang S, Chen L, Tarroni G, Bai W, Rueckert D et al., 2020, Realistic adversarial data augmentation for MR image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

Neural network-based approaches can achieve high accuracy in various medical image segmentation tasks. However, they generally require large labelled datasets for supervised learning. Acquiring and manually labelling a large medical dataset is expensive and sometimes impractical due to data sharing and privacy issues. In this work, we propose an adversarial data augmentation method for training neural networks for medical image segmentation. Instead of generating pixel-wise adversarial attacks, our model generates plausible and realistic signal corruptions, which model the intensity inhomogeneities caused by a common type of artefact in MR imaging: the bias field. The proposed method does not rely on generative networks, and can be used as a plug-in module for general segmentation networks in both supervised and semi-supervised learning. Using cardiac MR imaging we show that such an approach can improve the generalization ability and robustness of models as well as provide significant improvements in low-data scenarios.
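The core ingredient, a smooth multiplicative bias field rather than pixel-wise noise, can be illustrated in a few lines of NumPy/SciPy. The grid size and amplitude below are assumed values; in the adversarial setting described above the field parameters would be optimised to maximise the segmentation loss rather than sampled at random.

```python
# Illustrative sketch (not the authors' code): corrupt a 2D MR image with a
# smooth, low-frequency multiplicative bias field.
import numpy as np
from scipy.ndimage import zoom

def random_bias_field(image, grid=(4, 4), amplitude=0.3, rng=None):
    """Apply a smooth intensity inhomogeneity to a 2D image."""
    rng = np.random.default_rng() if rng is None else rng
    # Low-resolution random field, upsampled to image size -> smooth bias field
    coarse = rng.uniform(-amplitude, amplitude, size=grid)
    field = zoom(coarse, [s / g for s, g in zip(image.shape, grid)], order=3)
    return image * (1.0 + field[: image.shape[0], : image.shape[1]])
```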

Conference paper

Chen C, Bai W, Davies R, Bhuva A, Manisty C, Moon J, Aung N, Lee A, Sanghvi M, Fung K, Paiva J, Petersen S, Lukaschuk E, Piechnik S, Neubauer S, Rueckert D et al., 2020, Improving the generalizability of convolutional neural network-based segmentation on CMR images, Frontiers in Cardiovascular Medicine, ISSN: 2297-055X

Journal article

Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D et al., 2020, Deep learning for cardiac image segmentation: A review, Frontiers in Cardiovascular Medicine, Vol: 7, Pages: 1-33, ISSN: 2297-055X

Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US), and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.

Journal article

Chen C, Ouyang C, Tarroni G, Schlemper J, Qiu H, Bai W, Rueckert D et al., 2020, Unsupervised multi-modal style transfer for cardiac MR segmentation, MICCAI STACOM Workshop, Publisher: Springer International Publishing, Pages: 209-219, ISSN: 0302-9743

In this work, we present a fully automatic method to segment cardiac structures from late-gadolinium enhanced (LGE) images without using labelled LGE data for training, but instead by transferring the anatomical knowledge and features learned on annotated balanced steady-state free precession (bSSFP) images, which are easier to acquire. Our framework mainly consists of two neural networks: a multi-modal image translation network for style transfer and a cascaded segmentation network for image segmentation. The multi-modal image translation network generates realistic and diverse synthetic LGE images conditioned on a single annotated bSSFP image, forming a synthetic LGE training set. This set is then utilized to fine-tune the segmentation network pre-trained on labelled bSSFP images, achieving the goal of unsupervised LGE image segmentation. In particular, the proposed cascaded segmentation network is able to produce accurate segmentation by taking both shape prior and image appearance into account, achieving an average Dice score of 0.92 for the left ventricle, 0.83 for the myocardium, and 0.88 for the right ventricle on the test set.
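A rough sketch of the transfer step described above: a segmentation network pre-trained on labelled bSSFP images is fine-tuned on synthetic LGE images produced by the translation network, reusing the bSSFP labels. The `translator` and `seg_net` interfaces and the optimiser settings are illustrative assumptions, not the published training code.

```python
# PyTorch sketch: fine-tune a bSSFP-pretrained segmentation network on
# style-transferred (synthetic LGE) images, keeping the original labels.
import torch

def finetune_on_synthetic_lge(seg_net, translator, bssfp_images, bssfp_labels,
                              epochs=10, lr=1e-4):
    optimiser = torch.optim.Adam(seg_net.parameters(), lr=lr)
    criterion = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for image, label in zip(bssfp_images, bssfp_labels):
            synthetic_lge = translator(image)        # bSSFP -> synthetic LGE
            optimiser.zero_grad()
            loss = criterion(seg_net(synthetic_lge), label)  # labels carry over
            loss.backward()
            optimiser.step()
    return seg_net
```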

Conference paper

Chen C, Biffi C, Tarroni G, Petersen S, Bai W, Rueckert D et al., 2019, Learning shape priors for robust cardiac MR segmentation from multi-view images, International Conference on Medical Image Computing and Computer-Assisted Intervention, Publisher: Springer International Publishing, Pages: 523-531, ISSN: 0302-9743

Cardiac MR image segmentation is essential for the morphological and functional analysis of the heart. Inspired by how experienced clinicians assess the cardiac morphology and function across multiple standard views (i.e. long- and short-axis views), we propose a novel approach which learns anatomical shape priors across different 2D standard views and leverages these priors to segment the left ventricular (LV) myocardium from short-axis MR image stacks. The proposed segmentation method has the advantage of being a 2D network but at the same time incorporates spatial context from multiple, complementary views that span a 3D space. Our method achieves accurate and robust segmentation of the myocardium across different short-axis slices (from apex to base), outperforming baseline models (e.g. 2D U-Net, 3D U-Net) while achieving higher data efficiency. Compared to the 2D U-Net, the proposed method reduces the mean Hausdorff distance (mm) from 3.24 to 2.49 on the apical slices, from 2.34 to 2.09 on the middle slices and from 3.62 to 2.76 on the basal slices on the test set, when only 10% of the training data was used.

Conference paper

Bai W, Chen C, Tarroni G, Duan J, Guitton F, Petersen SE, Guo Y, Matthews PM, Rueckert D et al., 2019, Self-supervised learning for cardiac MR image segmentation by anatomical position prediction, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques have received much attention; these aim to utilise the vast amount of available data while avoiding or substantially reducing the effort of manual annotation. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning and that, with self-supervised learning, we achieve a segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in a small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
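The pretext task can be pictured as a small classification model: an encoder predicts the (coarse) anatomical position of an input patch, and its weights later initialise the segmentation network. The layer sizes below are illustrative assumptions rather than the published architecture.

```python
# PyTorch sketch of a self-supervised pretext model: predict the anatomical
# position of a patch, with no manual annotation needed.
import torch.nn as nn

class PositionPretextModel(nn.Module):
    def __init__(self, num_positions=9):
        super().__init__()
        self.encoder = nn.Sequential(                # shared feature extractor
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Pretext head: classify the coarse anatomical position of the patch;
        # the position labels come for free from the sampling procedure.
        self.position_head = nn.Linear(32, num_positions)

    def forward(self, patch):
        return self.position_head(self.encoder(patch))
```

After pre-training on this task, the encoder weights can be transferred to initialise the segmentation network before supervised fine-tuning.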

Conference paper

Chen C, Bai W, Davies RH, Bhuva AN, Manisty C, Moon JC, Aung N, Lee AM, Sanghvi MM, Fung K, Paiva JM, Petersen SE, Lukaschuk E, Piechnik SK, Neubauer S, Rueckert D et al., 2019, Improving the generalizability of convolutional neural network-based segmentation on CMR images, Publisher: arXiv

Convolutional neural network (CNN) based segmentation methods provide an efficient and automated way for clinicians to assess the structure and function of the heart in cardiac MR images. While CNNs can generally perform the segmentation tasks with high accuracy when training and test images come from the same domain (e.g. same scanner or site), their performance often degrades dramatically on images from different scanners or clinical sites. We propose a simple yet effective way for improving the network generalization ability by carefully designing data normalization and augmentation strategies to accommodate common scenarios in multi-site, multi-scanner clinical imaging data sets. We demonstrate that a neural network trained on a single-site, single-scanner dataset from the UK Biobank can be successfully applied to segmenting cardiac MR images across different sites and different scanners without substantial loss of accuracy. Specifically, the method was trained on a large set of 3,975 subjects from the UK Biobank. It was then directly tested on 600 different subjects from the UK Biobank for intra-domain testing and two other sets for cross-domain testing: the ACDC dataset (100 subjects, 1 site, 2 scanners) and the BSCMR-AS dataset (599 subjects, 6 sites, 9 scanners). The proposed method produces promising segmentation results on the UK Biobank test set which are comparable to previously reported values in the literature, while also performing well on cross-domain test sets, achieving a mean Dice metric of 0.90 for the left ventricle, 0.81 for the myocardium and 0.82 for the right ventricle on the ACDC dataset; and 0.89 for the left ventricle, 0.83 for the myocardium on the BSCMR-AS dataset. The proposed method offers a potential solution to improve CNN-based model generalizability for the cross-scanner and cross-site cardiac MR image segmentation task.
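One concrete example of the kind of intensity normalisation such a strategy relies on (the exact scheme in the paper may differ) is percentile-based rescaling, so that images from different scanners and sites share a comparable intensity range before training or inference:

```python
# Sketch of percentile-based intensity normalisation for cross-scanner data.
import numpy as np

def percentile_normalise(image, low=1.0, high=99.0):
    """Clip to robust percentiles and rescale intensities to [0, 1]."""
    lo, hi = np.percentile(image, [low, high])
    image = np.clip(image, lo, hi)
    return (image - lo) / max(hi - lo, 1e-8)
```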

Working paper

Li Z, Hou Z, Chen C, Hao Z, An Y, Liang S, Lu B et al., 2019, Automatic Cardiothoracic Ratio Calculation With Deep Learning, IEEE Access, Vol: 7, Pages: 37749-37756, ISSN: 2169-3536

Journal article

Chen C, Bai W, Rueckert D, 2019, Multi-task learning for left atrial segmentation on GE-MRI, International Workshop on Statistical Atlases and Computational Models of the Heart, Publisher: Springer Verlag, Pages: 292-301, ISSN: 0302-9743

Segmentation of the left atrium (LA) is crucial for assessing its anatomy in both pre-operative atrial fibrillation (AF) ablation planning and post-operative follow-up studies. In this paper, we present a fully automated framework for left atrial segmentation in gadolinium-enhanced magnetic resonance images (GE-MRI) based on deep learning. We propose a fully convolutional neural network and explore the benefits of multi-task learning for performing both atrial segmentation and pre/post ablation classification. Our results show that, by sharing features between related tasks, the network can gain additional anatomical information and achieve more accurate atrial segmentation, leading to a mean Dice score of 0.901 on a test set of 20 3D MRI images. Code of our proposed algorithm is available at https://github.com/cherise215/atria_segmentation_2018/.
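The multi-task idea can be pictured as a shared encoder feeding both a segmentation head and a pre/post-ablation classification head, so the two tasks exchange anatomical information through the shared features. The layer sizes below are illustrative assumptions, not the published architecture (the actual code is in the linked repository).

```python
# PyTorch sketch of a shared-encoder, two-head multi-task network.
import torch.nn as nn

class MultiTaskLANet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                # shared feature extractor
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv3d(32, 2, 1)          # left-atrium segmentation
        self.cls_head = nn.Sequential(               # pre/post-ablation label
            nn.AdaptiveAvgPool3d(1), nn.Flatten(), nn.Linear(32, 2),
        )

    def forward(self, x):
        feats = self.encoder(x)
        return self.seg_head(feats), self.cls_head(feats)
```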

Conference paper

Chen C, Qin C, Ouyang C, Wang S, Qiu H, Chen L, Tarroni G, Bai W, Rueckert D et al., Enhancing MR Image Segmentation with Realistic Adversarial Data Augmentation

The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training. However, acquiring and manually labeling a large medical image set is resource-intensive, expensive, and sometimes impractical due to data sharing and privacy issues. To address this challenge, we propose an adversarial data augmentation approach to improve the efficiency in utilizing training data and to enlarge the dataset via simulated but realistic transformations. Specifically, we present a generic task-driven learning framework, which jointly optimizes a data augmentation model and a segmentation network during training, generating informative examples to enhance network generalizability for the downstream task. The data augmentation model utilizes a set of photometric and geometric image transformations and chains them to simulate realistic complex imaging variations that could exist in magnetic resonance (MR) imaging. The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks. It is computationally efficient and applicable for both supervised and semi-supervised learning. We analyze and evaluate the method on two MR image segmentation tasks: cardiac segmentation and prostate segmentation. Results show that the proposed approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.
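One simplified way to picture the task-driven coupling is a best-of-k selection: among several candidate augmentations, keep the one that is currently hardest for the segmentation network. The paper instead optimises the augmentation model jointly via gradient-based training, so the sketch below, with its hypothetical `transforms` and `criterion`, is only an approximation of the idea.

```python
# PyTorch sketch: pick the candidate augmentation with the highest current
# segmentation loss and use it as a training example.
import torch

def hardest_augmentation(seg_net, image, label, transforms, criterion):
    worst_loss, worst_image = None, image
    for transform in transforms:                     # chained photometric/geometric ops
        candidate = transform(image)
        with torch.no_grad():
            loss = criterion(seg_net(candidate), label)
        if worst_loss is None or loss > worst_loss:
            worst_loss, worst_image = loss, candidate
    return worst_image
```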

Journal article

Zhuang X, Xu J, Luo X, Chen C, Ouyang C, Rueckert D, Campello VM, Lekadir K, Vesal S, RaviKumar N, Liu Y, Luo G, Chen J, Li H, Ly B, Sermesant M, Roth H, Zhu W, Wang J, Ding X, Wang X, Yang S, Li L et al., Cardiac Segmentation on Late Gadolinium Enhancement MRI: A Benchmark Study from Multi-Sequence Cardiac MR Segmentation Challenge

Accurate computing, analysis and modeling of the ventricles and myocardium from medical images are important, especially in the diagnosis and treatment management of patients suffering from myocardial infarction (MI). Late gadolinium enhancement (LGE) cardiac magnetic resonance (CMR) provides an important protocol to visualize MI. However, automated segmentation of LGE CMR is still challenging, due to the indistinguishable boundaries, heterogeneous intensity distribution and complex enhancement patterns of pathological myocardium in LGE CMR. Furthermore, compared with the other sequences, LGE CMR images with gold standard labels are particularly limited, which represents another obstacle for developing novel algorithms for automatic segmentation of LGE CMR. This paper presents selected results from the Multi-Sequence Cardiac MR (MS-CMR) Segmentation Challenge, held in conjunction with MICCAI 2019. The challenge offered a data set of paired MS-CMR images, including auxiliary CMR sequences as well as LGE CMR, from 45 patients with cardiomyopathy. It aimed to develop new algorithms, as well as to benchmark existing ones for LGE CMR segmentation and compare them objectively. In addition, the paired MS-CMR images could enable algorithms to combine complementary information from the other sequences for the segmentation of LGE CMR. Nine representative works were selected for evaluation and comparison, among which three methods are unsupervised and the other six are supervised. The results showed that the average performance of the nine methods was comparable to the inter-observer variation. The success of these methods was mainly attributed to the inclusion of the auxiliary sequences from the MS-CMR images, which provide important label information for the training of deep neural networks.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
