Imperial College London

Dr Chen (Cherise) Chen

Faculty of Engineering, Department of Computing

Honorary Research Associate

Contact

 

chen.chen15

Location

 

344 Huxley Building, South Kensington Campus



Publications


30 results found

Kreitner L, Paetzold JC, Rauch N, Chen C, Hagag AM, Fayed AE, Sivaprasad S, Rausch S, Weichsel J, Menze BH, Harders M, Knier B, Rueckert D, Menten MJ et al., 2024, Synthetic optical coherence tomography angiographs for detailed retinal vessel segmentation without human annotations, IEEE Trans Med Imaging, Vol: PP

Optical coherence tomography angiography (OCTA) is a non-invasive imaging modality that can acquire high-resolution volumes of the retinal vasculature and aid the diagnosis of ocular, neurological and cardiac diseases. Segmenting the visible blood vessels is a common first step when extracting quantitative biomarkers from these images. Classical segmentation algorithms based on thresholding are strongly affected by image artifacts and limited signal-to-noise ratio. The use of modern, deep learning-based segmentation methods has been inhibited by a lack of large datasets with detailed annotations of the blood vessels. To address this issue, recent work has employed transfer learning, where a segmentation network is trained on synthetic OCTA images and is then applied to real data. However, the previously proposed simulations fail to faithfully model the retinal vasculature and do not provide effective domain adaptation. Because of this, current methods are unable to fully segment the retinal vasculature, in particular the smallest capillaries. In this work, we present a lightweight simulation of the retinal vascular network based on space colonization for faster and more realistic OCTA synthesis. We then introduce three contrast adaptation pipelines to decrease the domain gap between real and artificial images. We demonstrate the superior segmentation performance of our approach in extensive quantitative and qualitative experiments on three public datasets that compare our method to traditional computer vision algorithms and supervised training using human annotations. Finally, we make our entire pipeline publicly available, including the source code, pretrained models, and a large dataset of synthetic OCTA images.

Journal article

Dawood T, Chen C, Sidhu BS, Ruijsink B, Gould J, Porter B, Elliott MK, Mehta V, Rinaldi CA, Puyol-Anton E, Razavi R, King AP et al., 2023, Uncertainty aware training to improve deep learning model calibration for classification of cardiac MR images, Medical Image Analysis, Vol: 88, ISSN: 1361-8415

Journal article

Li Z, Kamnitsas K, Ouyang C, Chen C, Glocker B et al., 2023, Context label learning: improving background class representations in semantic segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 1885-1896, ISSN: 0278-0062

Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they always cover a diverse set of structures, causing difficulties for the segmentation model to learn good decision boundaries with high sensitivity and precision. The issue concerns the highly heterogeneous nature of the background class, resulting in multi-modal distributions. Empirically, we find that neural networks trained with heterogeneous background struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution over background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available.

Journal article

Qin C, Wang S, Chen C, Bai W, Rueckert D et al., 2023, Generative myocardial motion tracking via latent space exploration with biomechanics-informed prior, Medical Image Analysis, Vol: 83, ISSN: 1361-8415

Journal article

Ouyang C, Chen C, Li S, Li Z, Qin C, Bai W, Rueckert D et al., 2022, Causality-inspired single-source domain generalization for medical image segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 1095-1106, ISSN: 0278-0062

Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. In this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.

Journal article
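The randomly-weighted shallow networks used for appearance augmentation in the paper above can be sketched in a few lines. This is a simplified, hypothetical illustration (each "layer" is reduced to a random affine intensity map plus a leaky nonlinearity, and the function name is invented), not the authors' implementation:

```python
import numpy as np

def random_appearance_transform(img, rng, n_layers=2):
    """Alter image intensity/texture with a randomly-weighted shallow
    'network': random affine intensity maps followed by leaky ReLU,
    then rescaled back to the input's range so labels stay valid."""
    x = img.astype(np.float64)
    for _ in range(n_layers):
        w = rng.normal(loc=1.0, scale=0.5)   # random weight
        b = rng.normal(loc=0.0, scale=0.1)   # random bias
        x = w * x + b
        x = np.where(x > 0, x, 0.1 * x)      # leaky ReLU
    x = (x - x.min()) / (x.max() - x.min() + 1e-8)
    return x * (img.max() - img.min()) + img.min()

rng = np.random.default_rng(0)
img = rng.random((64, 64))
aug = random_appearance_transform(img, rng)
```

Resampling a fresh transform per training image exposes the segmentation model to diverse appearances while leaving the anatomy (and hence the labels) untouched.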

Chen C, Qin C, Ouyang C, Li Z, Wang S, Qiu H, Chen L, Tarroni G, Bai W, Rueckert D et al., 2022, Enhancing MR image segmentation with realistic adversarial data augmentation, Medical Image Analysis, Vol: 82, Pages: 1-15, ISSN: 1361-8415

The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training. However, acquiring and manually labeling a large medical image set is resource-intensive, expensive, and sometimes impractical due to data sharing and privacy issues. To address this challenge, we propose AdvChain, a generic adversarial data augmentation framework, aiming at improving both the diversity and effectiveness of training data for medical image segmentation tasks. AdvChain augments data with dynamic data augmentation, generating randomly chained photo-metric and geometric transformations to resemble realistic yet challenging imaging variations to expand training data. By jointly optimizing the data augmentation model and a segmentation network during training, challenging examples are generated to enhance network generalizability for the downstream task. The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks. It is computationally efficient and applicable for both low-shot supervised and semi-supervised learning. We analyze and evaluate the method on two MR image segmentation tasks: cardiac segmentation and prostate segmentation with limited labeled data. Results show that the proposed approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.

Journal article
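A toy sketch of chaining a photometric and a geometric transformation in the spirit of AdvChain. This is not the paper's code: the adversarial optimization of the chain's parameters is omitted, the transforms are deliberately minimal (gamma and 90-degree rotation), and the function name is illustrative:

```python
import numpy as np

def chained_augment(img, mask, rng):
    """Apply a random chain of photometric (gamma) and geometric
    (rotation) transformations to an image-mask pair."""
    # photometric: random gamma on intensities in [0, 1]
    gamma = rng.uniform(0.5, 2.0)
    img = np.clip(img, 0.0, 1.0) ** gamma
    # geometric: rotation by a random multiple of 90 degrees,
    # applied identically to image and mask to keep them aligned
    k = rng.integers(0, 4)
    return np.rot90(img, k), np.rot90(mask, k)

rng = np.random.default_rng(6)
img = rng.random((32, 32))
mask = (img > 0.5).astype(int)
aug_img, aug_mask = chained_augment(img, mask, rng)
```

In the actual framework the parameters of each link in the chain would be updated adversarially against the segmentation loss rather than sampled once at random.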

Zhuang X, Xu J, Luo X, Chen C, Ouyang C, Rueckert D, Campello VM, Lekadir K, Vesal S, RaviKumar N, Liu Y, Luo G, Chen J, Li H, Ly B, Sermesant M, Roth H, Zhu W, Wang J, Ding X, Wang X, Yang S, Li L et al., 2022, Cardiac segmentation on late gadolinium enhancement MRI: A benchmark study from multi-sequence cardiac MR segmentation challenge, Medical Image Analysis, Vol: 81, ISSN: 1361-8415

Journal article

Li Z, Kamnitsas K, Islam M, Chen C, Glocker B et al., 2022, Estimating model performance under domain shifts with class-specific confidence scores, MICCAI 2022 25th International Conference, Publisher: Springer Nature Switzerland, Pages: 693-703, ISSN: 0302-9743

Machine learning models are typically deployed in a test setting that differs from the training setting, potentially leading to decreased model performance because of domain shift. If we could estimate the performance that a pre-trained model would achieve on data from a specific deployment setting, for example a certain clinic, we could judge whether the model could safely be deployed or if its performance degrades unacceptably on the specific data. Existing approaches estimate this based on the confidence of predictions made on unlabeled test data from the deployment’s domain. We find existing methods struggle with data that present class imbalance, because the methods used to calibrate confidence do not account for bias induced by class imbalance, consequently failing to estimate class-wise accuracy. Here, we introduce class-wise calibration within the framework of performance estimation for imbalanced datasets. Specifically, we derive class-specific modifications of state-of-the-art confidence-based model evaluation methods including temperature scaling (TS), difference of confidences (DoC), and average thresholded confidence (ATC). We also extend the methods to estimate Dice similarity coefficient (DSC) in image segmentation. We conduct experiments on four tasks and find the proposed modifications consistently improve the estimation accuracy for imbalanced datasets. Our methods improve accuracy estimation by 18% in classification under natural domain shifts, and double the estimation accuracy on segmentation tasks, when compared with prior methods (Code is available at https://github.com/ZerojumpLine/ModelEvaluationUnderClassImbalance).

Conference paper
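Class-specific temperature scaling combined with confidence-based accuracy estimation can be sketched as follows. This is a hedged illustration of the general idea, not the paper's code; `classwise_avg_confidence` is an invented name, and the per-class temperatures would in practice be fitted on labeled validation data:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def classwise_avg_confidence(logits, temps):
    """Estimate per-class accuracy as the mean calibrated confidence of
    the samples predicted as that class, using a class-specific
    temperature to counter calibration bias from class imbalance."""
    preds = logits.argmax(axis=1)
    est = {}
    for c, t in enumerate(temps):
        mask = preds == c
        if mask.any():
            probs = softmax(logits[mask] / t)
            est[c] = probs[:, c].mean()
    return est

rng = np.random.default_rng(1)
logits = rng.normal(size=(100, 3))
est = classwise_avg_confidence(logits, temps=[1.0, 2.0, 0.5])
```

Because temperature scaling is monotone, the predicted class is unchanged; only the confidence attached to it is recalibrated, class by class, before averaging.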

Chen C, Li Z, Ouyang C, Sinclair M, Bai W, Rueckert D et al., 2022, MaxStyle: adversarial style composition for robust medical image segmentation, Medical Image Computing and Computer Assisted Intervention (MICCAI) 2022, Publisher: Springer, Pages: 151-161

Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework called MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness, by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. With extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle leads to significantly improved out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple, different, unseen sites and unknown image sequences under both low- and high-training-data settings. The code can be found at https://github.com/cherise215/MaxStyle.

Conference paper
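The core style-augmentation step (expanding the style space with noise) can be sketched on a feature map. This is a simplified stand-in, assuming the common view that image "style" is carried by per-channel feature statistics; the adversarial search for the worst-case style composition described in the paper is omitted, and the function name is invented:

```python
import numpy as np

def noisy_style_augment(feat, rng, noise_std=0.1):
    """Synthesize a new 'style' by perturbing per-channel mean/std of a
    (C, H, W) feature map with noise, then re-applying the perturbed
    statistics to the normalized content."""
    mu = feat.mean(axis=(1, 2), keepdims=True)
    sigma = feat.std(axis=(1, 2), keepdims=True) + 1e-6
    normalized = (feat - mu) / sigma          # style-free content
    new_mu = mu + rng.normal(scale=noise_std, size=mu.shape)
    new_sigma = sigma * np.exp(rng.normal(scale=noise_std, size=sigma.shape))
    return normalized * new_sigma + new_mu

rng = np.random.default_rng(2)
feat = rng.normal(size=(8, 16, 16))
styled = noisy_style_augment(feat, rng)
```

Keeping the normalized content fixed while resampling the statistics changes appearance without changing spatial structure, which is what makes such augmentation label-preserving for segmentation.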

Ouyang C, Biffi C, Chen C, Kart T, Qiu H, Rueckert D et al., 2022, Self-supervised learning for few-shot medical image segmentation, IEEE Transactions on Medical Imaging, Vol: 41, Pages: 1837-1848, ISSN: 0278-0062

Fully-supervised deep learning segmentation models are inflexible when encountering new unseen semantic classes and their fine-tuning often requires significant amounts of annotated data. Few-shot semantic segmentation (FSS) aims to solve this inflexibility by learning to segment an arbitrary unseen semantically meaningful class by referring to only a few labeled examples, without involving fine-tuning. State-of-the-art FSS methods are typically designed for segmenting natural images and rely on abundant annotated data of training classes to learn image representations that generalize well to unseen testing classes. However, such a training mechanism is impractical in annotation-scarce medical imaging scenarios. To address this challenge, in this work, we propose a novel self-supervised FSS framework for medical images, named SSL-ALPNet, in order to bypass the requirement for annotations during training. The proposed method exploits superpixel-based pseudo-labels to provide supervision signals. In addition, we propose a simple yet effective adaptive local prototype pooling module which is plugged into the prototype networks to further boost segmentation accuracy. We demonstrate the general applicability of the proposed approach using three different tasks: organ segmentation of abdominal CT and MRI images respectively, and cardiac segmentation of MRI images. The proposed method yields higher Dice scores than conventional FSS methods which require manual annotations for training in our experiments.

Journal article
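The superpixel-based pseudo-labels that supply free supervision above can be approximated with a tiny k-means over pixel position and intensity. This is a stand-in for a proper superpixel algorithm such as SLIC (which the paper's pipeline would use via an image library), with invented names and hand-picked constants:

```python
import numpy as np

def pseudo_labels_kmeans(img, k=6, iters=10, spatial_weight=0.5, seed=0):
    """Cluster pixels on (row, col, intensity) features to produce
    superpixel-like pseudo-labels for self-supervised training."""
    h, w = img.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([spatial_weight * ys.ravel() / h,
                      spatial_weight * xs.ravel() / w,
                      img.ravel()], axis=1)
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), k, replace=False)]
    for _ in range(iters):
        d = ((feats[:, None, :] - centers[None]) ** 2).sum(-1)
        labels = d.argmin(1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(0)
    return labels.reshape(h, w)

rng = np.random.default_rng(3)
img = rng.random((32, 32))
labels = pseudo_labels_kmeans(img)
```

Each pseudo-label region can then play the role of a "class" in episodic few-shot training, so no manual annotation is needed.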

Dawood T, Chen C, Andlauer R, Sidhu BS, Ruijsink B, Gould J, Porter B, Elliott M, Mehta V, Rinaldi CA, Puyol-Antón E, Razavi R, King AP et al., 2022, Uncertainty-Aware Training for Cardiac Resynchronisation Therapy Response Prediction, Pages: 189-198, ISBN: 9783030937218

Evaluation of predictive deep learning (DL) models beyond conventional performance metrics has become increasingly important for applications in sensitive environments like healthcare. Such models might have the capability to encode and analyse large sets of data but they often lack comprehensive interpretability methods, preventing clinical trust in predictive outcomes. Quantifying uncertainty of a prediction is one way to provide such interpretability and promote trust. However, relatively little attention has been paid to how to include such requirements into the training of the model. In this paper we: (i) quantify the data (aleatoric) and model (epistemic) uncertainty of a DL model for Cardiac Resynchronisation Therapy response prediction from cardiac magnetic resonance images, and (ii) propose and perform a preliminary investigation of an uncertainty-aware loss function that can be used to retrain an existing DL image-based classification model to encourage confidence in correct predictions and reduce confidence in incorrect predictions. Our initial results are promising, showing a significant increase in the (epistemic) confidence of true positive predictions, with some evidence of a reduction in false negative confidence.

Book chapter
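One way to realise the idea of encouraging confidence in correct predictions and discouraging it in incorrect ones is to add a confidence-shaping term to the cross-entropy. This is a hypothetical sketch of that idea, not the chapter's exact loss; the name and weighting are invented:

```python
import numpy as np

def uncertainty_aware_loss(probs, targets, lam=0.5):
    """Cross-entropy plus a shaping term that rewards confidence on
    correct predictions and penalizes it on incorrect ones.
    probs: (N, K) predicted probabilities; targets: (N,) int labels."""
    n = len(targets)
    p_true = probs[np.arange(n), targets]
    ce = -np.log(p_true + 1e-12).mean()
    preds = probs.argmax(1)
    conf = probs.max(1)
    correct = preds == targets
    # low (1 - conf) when correct and confident; high conf when wrong
    shaping = np.where(correct, 1.0 - conf, conf).mean()
    return ce + lam * shaping

probs = np.array([[0.9, 0.1], [0.3, 0.7], [0.6, 0.4]])
targets = np.array([0, 1, 1])
loss = uncertainty_aware_loss(probs, targets)
```

Retraining with such a term, as the chapter investigates, shifts the model's confidence distribution rather than its decision boundary alone.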

Qiu H, Hammernik K, Qin C, Chen C, Rueckert D et al., 2022, Embedding Gradient-Based Optimization in Image Registration Networks, Medical Image Computing and Computer Assisted Intervention, MICCAI 2022, Pt VI, Vol: 13436, Pages: 56-65, ISSN: 0302-9743

Journal article

Ouyang C, Wang S, Chen C, Li Z, Bai W, Kainz B, Rueckert D et al., 2022, Improved Post-hoc Probability Calibration for Out-of-Domain MRI Segmentation, 4th International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (UNSURE), Publisher: Springer International Publishing AG, Pages: 59-69, ISSN: 0302-9743

Conference paper

Chen C, Hammernik K, Ouyang C, Qin C, Bai W, Rueckert D et al., 2021, Cooperative training and latent space data augmentation for robust medical image segmentation, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)

Conference paper

Wang S, Qin C, Savioli N, Chen C, O'Regan D, Cook S, Guo Y, Rueckert D, Bai W et al., 2021, Joint motion correction and super resolution for cardiac segmentation via latent optimisation, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer, Pages: 14-24

In cardiac magnetic resonance (CMR) imaging, a 3D high-resolution segmentation of the heart is essential for detailed description of its anatomical structures. However, due to the limit of acquisition duration and respiratory/cardiac motion, stacks of multi-slice 2D images are acquired in clinical routine. The segmentation of these images provides a low-resolution representation of cardiac anatomy, which may contain artefacts caused by motion. Here we propose a novel latent optimisation framework that jointly performs motion correction and super resolution for cardiac image segmentations. Given a low-resolution segmentation as input, the framework accounts for inter-slice motion in cardiac MR imaging and super-resolves the input into a high-resolution segmentation consistent with the input. A multi-view loss is incorporated to leverage information from both the short-axis view and long-axis view of cardiac imaging. To solve the inverse problem, iterative optimisation is performed in a latent space, which ensures anatomical plausibility. This alleviates the need for paired low-resolution and high-resolution images for supervised learning. Experiments on two cardiac MR datasets show that the proposed framework achieves high performance, comparable to state-of-the-art super-resolution approaches and with better cross-domain generalisability and anatomical plausibility.

Conference paper

Xiong Z, Xia Q, Hu Z, Huang N, Bian C, Zheng Y, Vesal S, Ravikumar N, Maier A, Yang X, Heng P-A, Ni D, Li C, Tong Q, Si W, Puybareau E, Khoudli Y, Geraud T, Chen C, Bai W, Rueckert D, Xu L, Zhuang X, Luo X, Jia S, Sermesant M, Liu Y, Wang K, Borra D, Masci A, Corsi C, de Vente C, Veta M, Karim R, Preetha CJ, Engelhardt S, Qiao M, Wang Y, Tao Q, Nunez-Garcia M, Camara O, Savioli N, Lamata P, Zhao J et al., 2021, A global benchmark of algorithms for segmenting the left atrium from late gadolinium-enhanced cardiac magnetic resonance imaging, Medical Image Analysis, Vol: 67, Pages: 1-14, ISSN: 1361-8415

Segmentation of medical images, particularly late gadolinium-enhanced magnetic resonance imaging (LGE-MRI) used for visualizing diseased atrial structures, is a crucial first step for ablation treatment of atrial fibrillation. However, direct segmentation of LGE-MRIs is challenging due to the varying intensities caused by contrast agents. Since most clinical studies have relied on manual, labor-intensive approaches, automatic methods are of high interest, particularly optimized machine learning approaches. To address this, we organized the 2018 Left Atrium Segmentation Challenge using 154 3D LGE-MRIs, currently the world's largest atrial LGE-MRI dataset, and associated labels of the left atrium segmented by three medical experts, ultimately attracting the participation of 27 international teams. In this paper, extensive analysis of the submitted algorithms using technical and biological metrics was performed by undergoing subgroup analysis and conducting hyper-parameter analysis, offering an overall picture of the major design choices of convolutional neural networks (CNNs) and practical considerations for achieving state-of-the-art left atrium segmentation. Results show that the top method achieved a Dice score of 93.2% and a mean surface to surface distance of 0.7 mm, significantly outperforming prior state-of-the-art. Particularly, our analysis demonstrated that double sequentially used CNNs, in which a first CNN is used for automatic region-of-interest localization and a subsequent CNN is used for refined regional segmentation, achieved superior results than traditional methods and machine learning approaches containing single CNNs. This large-scale benchmarking study makes a significant step towards much-improved segmentation methods for atrial LGE-MRIs, and will serve as an important benchmark for evaluating and comparing the future works in the field. Furthermore, the findings from this study can potentially be extended to other imaging datasets and modalities.

Journal article

Le EP, Evans NR, Tarkin JM, Chowdhury MM, Zaccagna F, Wall C, Huang Y, Weir-Mccall JR, Chen C, Warburton EA, Schonlieb CB, Sala E, Rudd JHF et al., 2020, Contrast CT classification of asymptomatic and symptomatic carotids in stroke and transient ischaemic attack with deep learning and interpretability, European-Society-of-Cardiology (ESC) Congress, Publisher: Oxford University Press, Pages: 2418-2418, ISSN: 0195-668X

Conference paper

Qin C, Wang S, Chen C, Qiu H, Bai W, Rueckert D et al., 2020, Biomechanics-informed neural networks for myocardial motion tracking in MRI, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 296-306, ISSN: 0302-9743

Image registration is an ill-posed inverse problem which often requires regularisation on the solution space. In contrast to most of the current approaches which impose explicit regularisation terms such as smoothness, in this paper we propose a novel method that can implicitly learn biomechanics-informed regularisation. Such an approach can incorporate application-specific prior knowledge into deep learning based registration. Particularly, the proposed biomechanics-informed regularisation leverages a variational autoencoder (VAE) to learn a manifold for biomechanically plausible deformations and to implicitly capture their underlying properties via reconstructing biomechanical simulations. The learnt VAE regulariser then can be coupled with any deep learning based registration network to regularise the solution space to be biomechanically plausible. The proposed method is validated in the context of myocardial motion tracking on 2D stacks of cardiac MRI data from two different datasets. The results show that it can achieve better performance against other competing methods in terms of motion tracking accuracy and has the ability to learn biomechanical properties such as incompressibility and strains. The method has also been shown to have better generalisability to unseen domains compared with commonly used L2 regularisation schemes.

Conference paper

Wang S, Tarroni G, Qin C, Mo Y, Dai C, Chen C, Glocker B, Guo Y, Rueckert D, Bai W et al., 2020, Deep generative model-based quality control for cardiac MRI segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 88-97, ISSN: 0302-9743

In recent years, convolutional neural networks have demonstrated promising performance in a variety of medical image segmentation tasks. However, when a trained segmentation model is deployed into the real clinical world, the model may not perform optimally. A major challenge is the potential poor-quality segmentations generated due to degraded image quality or domain shift issues. There is a timely need to develop an automated quality control method that can detect poor segmentations and feedback to clinicians. Here we propose a novel deep generative model-based framework for quality control of cardiac MRI segmentation. It first learns a manifold of good-quality image-segmentation pairs using a generative model. The quality of a given test segmentation is then assessed by evaluating the difference from its projection onto the good-quality manifold. In particular, the projection is refined through iterative search in the latent space. The proposed method achieves high prediction accuracy on two publicly available cardiac MRI datasets. Moreover, it shows better generalisation ability than traditional regression-based methods. Our approach provides a real-time and model-agnostic quality control for cardiac MRI segmentation, which has the potential to be integrated into clinical image analysis workflows.

Conference paper

Chen C, Qin C, Qiu H, Ouyang C, Wang S, Chen L, Tarroni G, Bai W, Rueckert D et al., 2020, Realistic adversarial data augmentation for MR image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

Neural network-based approaches can achieve high accuracy in various medical image segmentation tasks. However, they generally require large labelled datasets for supervised learning. Acquiring and manually labelling a large medical dataset is expensive and sometimes impractical due to data sharing and privacy issues. In this work, we propose an adversarial data augmentation method for training neural networks for medical image segmentation. Instead of generating pixel-wise adversarial attacks, our model generates plausible and realistic signal corruptions, which model the intensity inhomogeneities caused by a common type of artefact in MR imaging: bias field. The proposed method does not rely on generative networks, and can be used as a plug-in module for general segmentation networks in both supervised and semi-supervised learning. Using cardiac MR imaging we show that such an approach can improve the generalization ability and robustness of models as well as provide significant improvements in low-data scenarios.

Conference paper
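A smooth multiplicative bias field of the kind targeted above can be generated from a low-order 2D polynomial. This sketch shows only the corruption model; the paper's adversarial search over the field's parameters is omitted, and the function name and polynomial basis are illustrative choices:

```python
import numpy as np

def random_bias_field(shape, rng, strength=0.3):
    """Smooth multiplicative bias field from a low-order 2D polynomial,
    mimicking MR intensity inhomogeneity. Exponentiation keeps the
    field positive and centred near 1."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    y = 2 * y / (h - 1) - 1   # normalize coordinates to [-1, 1]
    x = 2 * x / (w - 1) - 1
    basis = [np.ones(shape), x, y, x * y, x**2, y**2]
    coeffs = rng.normal(scale=strength, size=len(basis))
    field = sum(c * b for c, b in zip(coeffs, basis))
    return np.exp(field - field.mean())

rng = np.random.default_rng(4)
img = rng.random((48, 48))
field = random_bias_field(img.shape, rng)
corrupted = img * field
```

Because the field varies slowly in space, the corruption changes intensities but not anatomical boundaries, so the original segmentation labels remain valid for the corrupted image.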

Chen C, Bai W, Davies R, Bhuva A, Manisty C, Moon J, Aung N, Lee A, Sanghvi M, Fung K, Paiva J, Petersen S, Lukaschuk E, Piechnik S, Neubauer S, Rueckert D et al., 2020, Improving the generalizability of convolutional neural network-based segmentation on CMR images, Frontiers in Cardiovascular Medicine, ISSN: 2297-055X

Journal article

Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D et al., 2020, Deep learning for cardiac image segmentation: A review, Frontiers in Cardiovascular Medicine, Vol: 7, Pages: 1-33, ISSN: 2297-055X

Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations with current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.

Journal article

Chen C, Ouyang C, Tarroni G, Schlemper J, Qiu H, Bai W, Rueckert D et al., 2020, Unsupervised multi-modal style transfer for cardiac MR segmentation, MICCAI STACOM Workshop, Publisher: Springer International Publishing, Pages: 209-219, ISSN: 0302-9743

In this work, we present a fully automatic method to segment cardiac structures from late-gadolinium enhanced (LGE) images without using labelled LGE data for training, but instead by transferring the anatomical knowledge and features learned on annotated balanced steady-state free precession (bSSFP) images, which are easier to acquire. Our framework mainly consists of two neural networks: a multi-modal image translation network for style transfer and a cascaded segmentation network for image segmentation. The multi-modal image translation network generates realistic and diverse synthetic LGE images conditioned on a single annotated bSSFP image, forming a synthetic LGE training set. This set is then utilized to fine-tune the segmentation network pre-trained on labelled bSSFP images, achieving the goal of unsupervised LGE image segmentation. In particular, the proposed cascaded segmentation network is able to produce accurate segmentation by taking both shape prior and image appearance into account, achieving an average Dice score of 0.92 for the left ventricle, 0.83 for the myocardium, and 0.88 for the right ventricle on the test set.

Conference paper

Ouyang C, Biffi C, Chen C, Kart T, Qiu H, Rueckert D et al., 2020, Self-supervision with Superpixels: Training Few-Shot Medical Image Segmentation Without Annotation, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 12374 LNCS, Pages: 762-780, ISSN: 0302-9743

Few-shot semantic segmentation (FSS) has great potential for medical imaging applications. Most of the existing FSS techniques require abundant annotated semantic classes for training. However, these methods may not be applicable for medical images due to the lack of annotations. To address this problem we make several contributions: (1) A novel self-supervised FSS framework for medical images in order to eliminate the requirement for annotations during training. Additionally, superpixel-based pseudo-labels are generated to provide supervision; (2) An adaptive local prototype pooling module plugged into prototypical networks, to solve the common challenging foreground-background imbalance problem in medical image segmentation; (3) We demonstrate the general applicability of the proposed approach for medical images using three different tasks: abdominal organ segmentation for CT and MRI, as well as cardiac segmentation for MRI. Our results show that, for medical image segmentation, the proposed method outperforms conventional FSS methods which require manual annotations for training.

Journal article

Puyol-Antón E, Chen C, Clough JR, Ruijsink B, Sidhu BS, Gould J, Porter B, Elliott M, Mehta V, Rueckert D, Rinaldi CA, King AP et al., 2020, Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction, Pages: 284-293

Advances in deep learning (DL) have resulted in impressive accuracy in some medical image classification tasks, but often deep models lack interpretability. The ability of these models to explain their decisions is important for fostering clinical trust and facilitating clinical translation. Furthermore, for many problems in medicine there is a wealth of existing clinical knowledge to draw upon, which may be useful in generating explanations, but it is not obvious how this knowledge can be encoded into DL models - most models are learnt either from scratch or using transfer learning from a different domain. In this paper we address both of these issues. We propose a novel DL framework for image-based classification based on a variational autoencoder (VAE). The framework allows prediction of the output of interest from the latent space of the autoencoder, as well as visualisation (in the image domain) of the effects of crossing the decision boundary, thus enhancing the interpretability of the classifier. Our key contribution is that the VAE disentangles the latent space based on 'explanations' drawn from existing clinical knowledge. The framework can predict outputs as well as explanations for these outputs, and also raises the possibility of discovering new biomarkers that are separate (or disentangled) from the existing knowledge. We demonstrate our framework on the problem of predicting response of patients with cardiomyopathy to cardiac resynchronization therapy (CRT) from cine cardiac magnetic resonance images. The sensitivity and specificity of the proposed model on the task of CRT response prediction are 88.43% and 84.39% respectively, and we showcase the potential of our model in enhancing understanding of the factors contributing to CRT response.

Conference paper

Chen C, Biffi C, Tarroni G, Petersen S, Bai W, Rueckert Det al., 2019, Learning shape priors for robust cardiac MR segmentation from multi-view images, International Conference on Medical Image Computing and Computer-Assisted Intervention, Publisher: Springer International Publishing, Pages: 523-531, ISSN: 0302-9743

Cardiac MR image segmentation is essential for the morphological and functional analysis of the heart. Inspired by how experienced clinicians assess the cardiac morphology and function across multiple standard views (i.e. long- and short-axis views), we propose a novel approach which learns anatomical shape priors across different 2D standard views and leverages these priors to segment the left ventricular (LV) myocardium from short-axis MR image stacks. The proposed segmentation method has the advantage of being a 2D network but at the same time incorporates spatial context from multiple, complementary views that span a 3D space. Our method achieves accurate and robust segmentation of the myocardium across different short-axis slices (from apex to base), outperforming baseline models (e.g. 2D U-Net, 3D U-Net) while achieving higher data efficiency. Compared to the 2D U-Net, the proposed method reduces the mean Hausdorff distance (mm) from 3.24 to 2.49 on the apical slices, from 2.34 to 2.09 on the middle slices and from 3.62 to 2.76 on the basal slices on the test set, when only 10% of the training data was used.

Conference paper

Bai W, Chen C, Tarroni G, Duan J, Guitton F, Petersen SE, Guo Y, Matthews PM, Rueckert Det al., 2019, Self-supervised learning for cardiac MR image segmentation by anatomical position prediction, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques receive a lot of attention, which aim to utilise the vast amount of available data, while at the same time avoiding or substantially reducing the effort of manual annotation. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning, and with self-supervised learning we achieve a segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in the small-data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
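
The free supervisory signal can be illustrated with a toy sketch (an assumed setup, not the authors' exact pipeline): split each image into a grid of patches and use each patch's grid-cell index as its position label, so a network trained to predict the label must learn position-discriminative features without any manual annotation.

```python
# Toy pretext-task labelling: each patch is labelled by where it sits in the
# image, so labels come for free from the image geometry itself.

def position_labelled_patches(image, grid=3):
    """image: 2D list of pixel rows. Returns (patch, position_label) pairs,
    where position_label in [0, grid*grid) encodes the patch's grid cell."""
    h, w = len(image), len(image[0])
    ph, pw = h // grid, w // grid
    samples = []
    for gy in range(grid):
        for gx in range(grid):
            patch = [row[gx * pw:(gx + 1) * pw]
                     for row in image[gy * ph:(gy + 1) * ph]]
            samples.append((patch, gy * grid + gx))  # label = cell index
    return samples
```

A classifier trained on these (patch, label) pairs would then provide pretrained features for the downstream segmentation network.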

Conference paper

Chen C, Bai W, Davies RH, Bhuva AN, Manisty C, Moon JC, Aung N, Lee AM, Sanghvi MM, Fung K, Paiva JM, Petersen SE, Lukaschuk E, Piechnik SK, Neubauer S, Rueckert Det al., 2019, Improving the generalizability of convolutional neural network-based segmentation on CMR images, Publisher: arXiv

Convolutional neural network (CNN) based segmentation methods provide an efficient and automated way for clinicians to assess the structure and function of the heart in cardiac MR images. While CNNs can generally perform the segmentation tasks with high accuracy when training and test images come from the same domain (e.g. same scanner or site), their performance often degrades dramatically on images from different scanners or clinical sites. We propose a simple yet effective way for improving the network generalization ability by carefully designing data normalization and augmentation strategies to accommodate common scenarios in multi-site, multi-scanner clinical imaging datasets. We demonstrate that a neural network trained on a single-site, single-scanner dataset from the UK Biobank can be successfully applied to segmenting cardiac MR images across different sites and different scanners without substantial loss of accuracy. Specifically, the method was trained on a large set of 3,975 subjects from the UK Biobank. It was then directly tested on 600 different subjects from the UK Biobank for intra-domain testing and two other sets for cross-domain testing: the ACDC dataset (100 subjects, 1 site, 2 scanners) and the BSCMR-AS dataset (599 subjects, 6 sites, 9 scanners). The proposed method produces promising segmentation results on the UK Biobank test set which are comparable to previously reported values in the literature, while also performing well on cross-domain test sets, achieving a mean Dice metric of 0.90 for the left ventricle, 0.81 for the myocardium and 0.82 for the right ventricle on the ACDC dataset; and 0.89 for the left ventricle, 0.83 for the myocardium on the BSCMR-AS dataset. The proposed method offers a potential solution to improve CNN-based model generalizability for the cross-scanner and cross-site cardiac MR image segmentation task.
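
One common form of data normalization for reducing scanner-dependent contrast differences is per-image percentile clipping followed by z-scoring; the sketch below shows that idea (the exact recipe here is hypothetical, not necessarily the paper's):

```python
# Minimal per-image intensity normalization: clip to the [1st, 99th]
# percentile range to suppress outliers, then standardise to zero mean
# and unit variance so images from different scanners share a common scale.

def percentile(values, q):
    """Nearest-rank percentile of a list (q in [0, 100])."""
    s = sorted(values)
    idx = min(len(s) - 1, max(0, round(q / 100 * (len(s) - 1))))
    return s[idx]

def normalize_intensities(pixels, low=1.0, high=99.0):
    lo, hi = percentile(pixels, low), percentile(pixels, high)
    clipped = [min(max(p, lo), hi) for p in pixels]
    mean = sum(clipped) / len(clipped)
    var = sum((p - mean) ** 2 for p in clipped) / len(clipped)
    std = var ** 0.5 or 1.0  # guard against constant images
    return [(p - mean) / std for p in clipped]
```

Applied independently to every image at training and test time, this keeps intensity statistics comparable across sites without any target-domain data.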

Working paper

Li Z, Hou Z, Chen C, Hao Z, An Y, Liang S, Lu Bet al., 2019, Automatic Cardiothoracic Ratio Calculation With Deep Learning, IEEE ACCESS, Vol: 7, Pages: 37749-37756, ISSN: 2169-3536

Journal article

Chen C, Bai W, Rueckert D, 2019, Multi-task learning for left atrial segmentation on GE-MRI, International Workshop on Statistical Atlases and Computational Models of the Heart, Publisher: Springer Verlag, Pages: 292-301, ISSN: 0302-9743

Segmentation of the left atrium (LA) is crucial for assessing its anatomy in both pre-operative atrial fibrillation (AF) ablation planning and post-operative follow-up studies. In this paper, we present a fully automated framework for left atrial segmentation in gadolinium-enhanced magnetic resonance images (GE-MRI) based on deep learning. We propose a fully convolutional neural network and explore the benefits of multi-task learning for performing both atrial segmentation and pre/post ablation classification. Our results show that, by sharing features between related tasks, the network can gain additional anatomical information and achieve more accurate atrial segmentation, leading to a mean Dice score of 0.901 on a test set of 20 3D MRI images. Code of our proposed algorithm is available at https://github.com/cherise215/atria_segmentation_2018/.
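
The feature-sharing arrangement can be sketched schematically (hypothetical structure and loss weighting; the abstract does not specify these details): one shared encoder feeds both a segmentation head and a pre/post-ablation classification head, and the two task losses are combined into a single objective.

```python
# Schematic multi-task model: the encoder's features are consumed by two
# task-specific heads, so gradients from both tasks shape the shared features.

class MultiTaskModel:
    def __init__(self, encoder, seg_head, cls_head):
        self.encoder = encoder    # shared feature extractor
        self.seg_head = seg_head  # atrial segmentation head
        self.cls_head = cls_head  # pre/post-ablation classification head

    def forward(self, image):
        features = self.encoder(image)  # computed once, used by both heads
        return self.seg_head(features), self.cls_head(features)

def multi_task_loss(seg_loss, cls_loss, cls_weight=0.1):
    """Combine per-task losses; cls_weight trades off the auxiliary task."""
    return seg_loss + cls_weight * cls_loss
```

In practice each component would be a neural network; here plain callables stand in to show the data flow.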

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
