Imperial College London

Dr Chen Qin

Faculty of Engineering, Department of Electrical and Electronic Engineering

Lecturer
 
 
 

Contact

 

c.qin15

 
 

Location

 

Translation & Innovation Hub Building, White City Campus



Publications



Li Z, Kamnitsas K, Dou Q, Qin C, Glocker B et al., 2023, Joint optimization of class-specific training- and test-time data augmentation in segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 3323-3335, ISSN: 0278-0062

This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally efficient and data-efficient gradient-based meta-learning scheme to explicitly align the distribution of training and validation data which is used as a proxy for unseen test data. We improve the current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA) effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected as both aim to align the training and test data distribution but were so far considered separately in previous works. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve the segmentation performance when compared to existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.
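
The authors' full implementation is available at the repository above. Purely as a rough, first-order illustration of the core idea (a learnable, class-specific training-time augmentation updated through a virtual model step against a validation objective), the PyTorch sketch below uses a hypothetical per-class intensity-shift policy and stand-in data; it assumes PyTorch >= 2.0 for torch.func and is not the JCSAugment code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ClassWiseIntensityAug(nn.Module):
    """Hypothetical learnable policy: one additive intensity shift per class."""
    def __init__(self, num_classes):
        super().__init__()
        self.shift = nn.Parameter(torch.zeros(num_classes))

    def forward(self, image, label):
        # image: (N, 1, H, W); label: (N, H, W) integer mask
        onehot = F.one_hot(label, self.shift.numel()).permute(0, 3, 1, 2).float()
        return image + (onehot * self.shift.view(1, -1, 1, 1)).sum(1, keepdim=True)

seg_net = nn.Conv2d(1, 4, 3, padding=1)                  # stand-in segmentation model
aug = ClassWiseIntensityAug(num_classes=4)
opt_seg = torch.optim.SGD(seg_net.parameters(), lr=1e-2)
opt_aug = torch.optim.SGD(aug.parameters(), lr=1e-2)
img, lbl = torch.rand(2, 1, 32, 32), torch.randint(0, 4, (2, 32, 32))
val_img, val_lbl = torch.rand(2, 1, 32, 32), torch.randint(0, 4, (2, 32, 32))

# (1) ordinary update of the segmentation model on augmented training data
opt_seg.zero_grad()
F.cross_entropy(seg_net(aug(img, lbl)), lbl).backward()
opt_seg.step()

# (2) meta-step: simulate one virtual SGD step of the model, then update the
#     augmentation so that the validation loss (proxy for test data) decreases
opt_seg.zero_grad()
opt_aug.zero_grad()
train_loss = F.cross_entropy(seg_net(aug(img, lbl)), lbl)
grads = torch.autograd.grad(train_loss, tuple(seg_net.parameters()), create_graph=True)
fast = {n: p - 1e-2 * g for (n, p), g in zip(seg_net.named_parameters(), grads)}
val_loss = F.cross_entropy(torch.func.functional_call(seg_net, fast, val_img), val_lbl)
val_loss.backward()                # gradient reaches aug.shift through the virtual step
opt_aug.step()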

Journal article

Lyu J, Li G, Wang C, Qin C, Wang S, Dou Q, Qin J et al., 2023, Region-focused multi-view transformer-based generative adversarial network for cardiac cine MRI reconstruction, Medical Image Analysis, Vol: 85, ISSN: 1361-8415

Journal article

Liu J, Qin C, Yaghoobi M, 2023, Coil-Agnostic Attention-Based Network for Parallel MRI Reconstruction, Pages: 168-184, ISSN: 0302-9743

Magnetic resonance imaging (MRI) is widely used in clinical diagnosis. However, as a slow imaging modality, the long scan time hinders its development in time-critical applications. The acquisition process can be accelerated by various under-sampling strategies in k-space, reconstructing images from only a few measurements. To reconstruct the image, many parallel imaging methods use coil sensitivity maps to combine the multiple coil images, with model-based or deep learning-based estimation methods. However, they can potentially suffer from inaccurate sensitivity estimation. In this work, we propose a novel coil-agnostic attention-based framework for multi-coil MRI reconstruction which completely avoids sensitivity estimation and performs data consistency (DC) via a sensitivity-agnostic data aggregation consistency block (DACB). Experiments were performed on the fastMRI knee dataset and show that the proposed framework, integrating DACB and attention modules, outperforms other deep learning-based algorithms in terms of image quality and reconstruction accuracy. Ablation studies also indicate the superiority of DACB over conventional DC methods.
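
As a loose illustration of a sensitivity-free design (not the paper's DACB, whose details differ), the sketch below enforces the measured k-space samples coil by coil and replaces explicit coil-sensitivity combination with a learned 1x1 convolution over coil channels; the shapes and toy data are placeholders.

import torch
import torch.nn as nn

class CoilwiseHardDC(nn.Module):
    """Per-coil hard data consistency followed by a learned, sensitivity-free
    coil aggregation (a loose stand-in for the paper's DACB)."""
    def __init__(self, num_coils):
        super().__init__()
        # learned coil combination on real/imag channels -> one complex image
        self.combine = nn.Conv2d(2 * num_coils, 2, kernel_size=1)

    def forward(self, coil_imgs, measured_k, mask):
        # coil_imgs, measured_k: (N, C, H, W) complex; mask: (N, 1, H, W) in {0, 1}
        k = torch.fft.fft2(coil_imgs, norm="ortho")
        k = mask * measured_k + (1 - mask) * k        # keep the acquired samples
        imgs = torch.fft.ifft2(k, norm="ortho")
        x = torch.cat([imgs.real, imgs.imag], dim=1)  # (N, 2C, H, W)
        out = self.combine(x)                         # coil-agnostic aggregation
        return torch.complex(out[:, :1], out[:, 1:])  # (N, 1, H, W) complex

# toy usage
dc = CoilwiseHardDC(num_coils=8)
coil_imgs = torch.randn(1, 8, 64, 64, dtype=torch.complex64)
measured = torch.fft.fft2(coil_imgs, norm="ortho")
mask = (torch.rand(1, 1, 64, 64) > 0.5).float()
recon = dc(coil_imgs, measured, mask)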

Conference paper

Wang Y, Qiu H, Qin C, 2023, Conditional Deformable Image Registration with Spatially-Variant and Adaptive Regularization, ISSN: 1945-7928

Deep learning-based image registration approaches have shown competitive performance and run-time advantages compared to conventional image registration methods. However, existing learning-based approaches mostly require training separate models for different regularization hyperparameters to enable manual hyperparameter searching, and often do not allow spatially-variant regularization. In this work, we propose a learning-based registration approach based on a novel conditional spatially adaptive instance normalization (CSAIN) to address these challenges. The proposed method introduces spatially-variant regularization and learns to achieve spatially-adaptive regularization by conditioning the registration network on a hyperparameter matrix via CSAIN. This allows the spatially adaptive regularization to be varied at inference, so that multiple plausible deformations can be obtained with a single pre-trained model. Additionally, the proposed method enables automatic hyperparameter optimization, avoiding manual hyperparameter searching. Experiments show that our proposed method outperforms the baseline approaches while achieving spatially-variant and adaptive regularization.
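
A minimal sketch of what a conditional spatially-adaptive instance normalisation layer could look like: features are instance-normalised and then modulated by a scale and shift predicted from a per-pixel regularisation-hyperparameter map, so the regularisation strength can be varied spatially at inference with one trained model. Layer sizes and names are illustrative, not the paper's.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CSAIN(nn.Module):
    """Illustrative conditional spatially-adaptive instance normalisation:
    scale/shift of the normalised features are predicted from a per-pixel
    regularisation-hyperparameter map."""
    def __init__(self, channels, hidden=32):
        super().__init__()
        self.norm = nn.InstanceNorm2d(channels, affine=False)
        self.mlp = nn.Sequential(
            nn.Conv2d(1, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 2 * channels, 3, padding=1),
        )

    def forward(self, feat, lam_map):
        # feat: (N, C, H, W); lam_map: (N, 1, h, w) spatial hyperparameter matrix
        lam = F.interpolate(lam_map, size=feat.shape[-2:], mode="bilinear",
                            align_corners=False)
        gamma, beta = self.mlp(lam).chunk(2, dim=1)
        return (1 + gamma) * self.norm(feat) + beta

layer = CSAIN(channels=16)
feat = torch.randn(2, 16, 64, 64)
lam = torch.rand(2, 1, 8, 8)       # e.g. stronger regularisation in some regions
out = layer(feat, lam)             # same trained layer, different spatial regularisation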

Conference paper

Qin C, Wang S, Chen C, Bai W, Rueckert D et al., 2023, Generative myocardial motion tracking via latent space exploration with biomechanics-informed prior, Medical Image Analysis, Vol: 83, ISSN: 1361-8415

Journal article

Liu J, Qin C, Yaghoobi M, 2023, High-Fidelity MRI Reconstruction Using Adaptive Spatial Attention Selection and Deep Data Consistency Prior, IEEE Transactions on Computational Imaging, Vol: 9, Pages: 298-313, ISSN: 2573-0436

Journal article

Ouyang C, Chen C, Li S, Li Z, Qin C, Bai W, Rueckert D et al., 2022, Causality-inspired single-source domain generalization for medical image segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 1095-1106, ISSN: 0278-0062

Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. In this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.
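
The appearance-transformation part of such an approach can be sketched as passing the image through a freshly re-initialised, randomly-weighted shallow convolutional network and blending the output with the original image (the causal-intervention resampling step is not shown). This is an illustrative approximation, not the authors' code.

import torch
import torch.nn as nn

def random_appearance(image, hidden=8):
    """Appearance augmentation with a randomly-weighted shallow conv network."""
    net = nn.Sequential(
        nn.Conv2d(image.shape[1], hidden, 3, padding=1), nn.LeakyReLU(),
        nn.Conv2d(hidden, hidden, 3, padding=1), nn.LeakyReLU(),
        nn.Conv2d(hidden, image.shape[1], 1),
    )
    with torch.no_grad():                                  # weights stay random
        aug = net(image)
        alpha = torch.rand(image.shape[0], 1, 1, 1)
        out = alpha * aug + (1 - alpha) * image            # blend towards realism
        # match each output back to the original intensity statistics
        out = (out - out.mean(dim=(1, 2, 3), keepdim=True)) / (
            out.std(dim=(1, 2, 3), keepdim=True) + 1e-6)
        out = out * image.std(dim=(1, 2, 3), keepdim=True) \
            + image.mean(dim=(1, 2, 3), keepdim=True)
    return out

x = torch.rand(4, 1, 64, 64)
x_aug = random_appearance(x)       # a new random network gives a new appearance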

Journal article

Chen C, Qin C, Ouyang C, Li Z, Wang S, Qiu H, Chen L, Tarroni G, Bai W, Rueckert D et al., 2022, Enhancing MR image segmentation with realistic adversarial data augmentation, Medical Image Analysis, Vol: 82, Pages: 1-15, ISSN: 1361-8415

The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training. However, acquiring and manually labeling a large medical image set is resource-intensive, expensive, and sometimes impractical due to data sharing and privacy issues. To address this challenge, we propose AdvChain, a generic adversarial data augmentation framework, aiming at improving both the diversity and effectiveness of training data for medical image segmentation tasks. AdvChain augments data with dynamic data augmentation, generating randomly chained photo-metric and geometric transformations to resemble realistic yet challenging imaging variations to expand training data. By jointly optimizing the data augmentation model and a segmentation network during training, challenging examples are generated to enhance network generalizability for the downstream task. The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks. It is computationally efficient and applicable for both low-shot supervised and semi-supervised learning. We analyze and evaluate the method on two MR image segmentation tasks: cardiac segmentation and prostate segmentation with limited labeled data. Results show that the proposed approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.
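
The authors' AdvChain code is released separately; purely as an illustration of the idea, the sketch below chains a photometric (intensity shift) and a geometric (affine) transform and tunes both by a few gradient-ascent steps to increase the segmentation loss before the example is used for training. The stand-in network and step sizes are placeholders.

import torch
import torch.nn as nn
import torch.nn.functional as F

def advchain_step(seg_net, image, label, num_classes, n_adv=3, lr=0.1):
    # Optimise a chained photometric (intensity shift) + geometric (affine)
    # transform to increase the segmentation loss, then return the hard example.
    onehot = F.one_hot(label, num_classes).permute(0, 3, 1, 2).float()
    delta = torch.zeros(image.shape[0], 1, 1, 1, requires_grad=True)
    theta = torch.eye(2, 3).repeat(image.shape[0], 1, 1).requires_grad_(True)
    for _ in range(n_adv):
        grid = F.affine_grid(theta, image.size(), align_corners=False)
        adv_img = F.grid_sample(image + delta, grid, align_corners=False)
        adv_lbl = F.grid_sample(onehot, grid, align_corners=False).argmax(1)
        loss = F.cross_entropy(seg_net(adv_img), adv_lbl)
        g_delta, g_theta = torch.autograd.grad(loss, (delta, theta))
        with torch.no_grad():                      # gradient ascent on the transform
            delta += lr * g_delta.sign()
            theta += 0.01 * g_theta.sign()
    with torch.no_grad():
        grid = F.affine_grid(theta, image.size(), align_corners=False)
        return (F.grid_sample(image + delta, grid, align_corners=False),
                F.grid_sample(onehot, grid, align_corners=False).argmax(1))

seg_net = nn.Conv2d(1, 4, 3, padding=1)            # stand-in segmentation network
img, lbl = torch.rand(2, 1, 32, 32), torch.randint(0, 4, (2, 32, 32))
adv_img, adv_lbl = advchain_step(seg_net, img, lbl, num_classes=4)
train_loss = F.cross_entropy(seg_net(adv_img), adv_lbl)   # then update seg_net as usual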

Journal article

Meng Q, Bai W, Liu T, Simoes Monteiro de Marvao A, O'Regan D, Rueckert D et al., 2022, MulViMotion: shape-aware 3D myocardial motion tracking from multi-view cardiac MRI, IEEE Transactions on Medical Imaging, Vol: 41, Pages: 1961-1974, ISSN: 0278-0062

Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices which limit the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, where shape information from multi-view images is exploited to provide weak supervision to 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.

Journal article

Liu J, Qin C, Yaghoobi M, 2022, High-Fidelity MRI Reconstruction with the Densely Connected Network Cascade and Feature Residual Data Consistency Priors, Pages: 34-43, ISSN: 0302-9743

Since its advent in the last century, magnetic resonance imaging (MRI) has provided a radiation-free diagnostic tool and has revolutionized medical imaging. Compressed sensing (CS) methods leverage the sparsity prior of signals to reconstruct clean images from under-sampled measurements and accelerate the acquisition process. However, it is challenging to reduce the strong aliasing artifacts caused by under-sampling and produce high-quality reconstructions with fine details. In this paper, we propose a novel GAN-based framework to recover under-sampled images, characterized by a novel data consistency block and a densely connected network cascade used to improve model performance in visual inspection and evaluation metrics. The role of each proposed block is examined in an ablation study, in terms of reconstruction quality metrics, on the texture-rich fastMRI knee image dataset.

Conference paper

Liu J, Qin C, Yaghoobi M, 2022, Region-Guided Channel-Wise Attention Network for Accelerated MRI Reconstruction, Pages: 21-31, ISSN: 0302-9743

Magnetic resonance imaging (MRI) has been widely used in clinical practice for medical diagnosis of diseases. However, the long acquisition time hinders its development in time-critical applications. In recent years, deep learning-based methods leverage the powerful representations of neural networks to recover high-quality MR images from undersampled measurements, which shortens the acquisition process and enables accelerated MRI scanning. Despite this inspiring success, it is still challenging to provide high-fidelity reconstructions under high acceleration factors. As an important mechanism in deep neural networks, attention modules have been used to improve reconstruction quality. Due to their computational costs, many attention modules are not suitable for application to high-resolution features or for capturing spatial information, which potentially limits the capacity of neural networks. To address this issue, we propose a novel channel-wise attention which is implemented under the guidance of implicitly learned spatial semantics. We incorporate the proposed attention module in a deep network cascade for fast MRI reconstruction. In experiments, we demonstrate that the proposed framework produces superior reconstructions with appealing local visual details, compared to other deep learning-based models, validated qualitatively and quantitatively on the fastMRI knee dataset.
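
One way to picture the idea (an illustrative approximation, not the paper's exact module): learn a soft spatial region map, pool features within each region to obtain channel descriptors, and use them to reweight the channels, so spatial semantics guide a cheap channel-wise attention.

import torch
import torch.nn as nn

class RegionGuidedChannelAttention(nn.Module):
    """Channel attention guided by implicitly learned spatial regions (sketch)."""
    def __init__(self, channels, regions=4, reduction=4):
        super().__init__()
        self.region_head = nn.Conv2d(channels, regions, 1)   # soft region assignment
        self.mlp = nn.Sequential(
            nn.Linear(regions * channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        n, c, h, w = x.shape
        regions = torch.softmax(self.region_head(x), dim=1)       # (N, R, H, W)
        # region-wise average pooling of the features: (N, R, C)
        pooled = torch.einsum("nrhw,nchw->nrc", regions, x) / (h * w)
        weights = self.mlp(pooled.flatten(1))                      # (N, C)
        return x * weights.view(n, c, 1, 1)

att = RegionGuidedChannelAttention(channels=32)
y = att(torch.randn(2, 32, 64, 64))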

Conference paper

Qin C, Rueckert D, 2022, Artificial Intelligence-Based Image Reconstruction in Cardiac Magnetic Resonance, Pages: 139-147, ISSN: 2626-6431

Journal article

Xia T, Sanchez P, Qin C, Tsaftaris SA et al., 2022, Adversarial counterfactual augmentation: application in Alzheimer's disease classification, Frontiers in Radiology, Vol: 2

Due to the limited availability of medical data, deep learning approaches for medical image analysis tend to generalise poorly to unseen data. Augmenting data during training with random transformations has been shown to help and became a ubiquitous technique for training neural networks. Here, we propose a novel adversarial counterfactual augmentation scheme that aims at finding the most effective synthesised images to improve downstream tasks, given a pre-trained generative model. Specifically, we construct an adversarial game where we update the input conditional factor of the generator and the downstream classifier with gradient backpropagation alternatively and iteratively. This can be viewed as finding the 'weakness' of the classifier and purposely forcing it to overcome its weakness via the generative model. To demonstrate the effectiveness of the proposed approach, we validate the method with the classification of Alzheimer's Disease (AD) as a downstream task. The pre-trained generative model synthesises brain images using age as conditional factor. Extensive experiments and ablation studies have been performed to show that the proposed approach improves classification performance and has potential to alleviate spurious correlations and catastrophic forgetting. Code: https://github.com/xiat0616/adversarial_counterfactual_augmentation.
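
A toy sketch of the adversarial game, with stand-in generator and classifier (the actual work uses a pretrained brain-image generator conditioned on age): gradient-ascend the conditioning factor to maximise the classifier loss, then train the classifier on the resulting counterfactual images.

import torch
import torch.nn.functional as F

def counterfactual_round(generator, classifier, opt_cls, age, target, n_steps=5, lr=0.05):
    """One round of the adversarial game (sketch): find the conditioning factor
    (here, age) that most confuses the classifier, then train on those images.
    `generator` and `classifier` are assumed pretrained stand-ins."""
    age_adv = age.clone().detach().requires_grad_(True)
    for _ in range(n_steps):
        loss = F.cross_entropy(classifier(generator(age_adv)), target)
        (g,) = torch.autograd.grad(loss, age_adv)
        with torch.no_grad():
            age_adv += lr * g.sign()           # ascend: expose the classifier's weakness
            age_adv.clamp_(0.0, 1.0)            # keep the factor in a plausible range
    # classifier update on the counterfactual images
    opt_cls.zero_grad()
    F.cross_entropy(classifier(generator(age_adv).detach()), target).backward()
    opt_cls.step()

# stand-ins purely for illustration
generator = torch.nn.Sequential(torch.nn.Linear(1, 64 * 64), torch.nn.Unflatten(1, (1, 64, 64)))
classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(64 * 64, 2))
opt_cls = torch.optim.Adam(classifier.parameters(), lr=1e-4)
age = torch.rand(8, 1)                 # normalised age as the conditional factor
target = torch.randint(0, 2, (8,))     # AD / non-AD labels
counterfactual_round(generator, classifier, opt_cls, age, target)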

Journal article

Qiu H, Hammernik K, Qin C, Chen C, Rueckert D et al., 2022, Embedding Gradient-Based Optimization in Image Registration Networks, Medical Image Computing and Computer Assisted Intervention (MICCAI 2022), Part VI, Vol: 13436, Pages: 56-65, ISSN: 0302-9743

Conference paper

Qin C, Duan J, Hammernik K, Schlemper J, Kuestner T, Botnar R, Prieto C, Price AN, Hajnal J, Rueckert D et al., 2021, Complementary time-frequency domain networks for dynamic parallel MR image reconstruction, Magnetic Resonance in Medicine, Vol: 86, Pages: 3274-3291, ISSN: 0740-3194

Journal article

Wang S, Qin C, Savioli N, Chen C, O'Regan D, Cook S, Guo Y, Rueckert D, Bai W et al., 2021, Joint motion correction and super resolution for cardiac segmentation via latent optimisation, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer, Pages: 14-24

In cardiac magnetic resonance (CMR) imaging, a 3D high-resolution segmentation of the heart is essential for detailed description of its anatomical structures. However, due to the limit of acquisition duration and respiratory/cardiac motion, stacks of multi-slice 2D images are acquired in clinical routine. The segmentation of these images provides a low-resolution representation of cardiac anatomy, which may contain artefacts caused by motion. Here we propose a novel latent optimisation framework that jointly performs motion correction and super resolution for cardiac image segmentations. Given a low-resolution segmentation as input, the framework accounts for inter-slice motion in cardiac MR imaging and super-resolves the input into a high-resolution segmentation consistent with the input. A multi-view loss is incorporated to leverage information from both short-axis view and long-axis view of cardiac imaging. To solve the inverse problem, iterative optimisation is performed in a latent space, which ensures the anatomical plausibility. This alleviates the need of paired low-resolution and high-resolution images for supervised learning. Experiments on two cardiac MR datasets show that the proposed framework achieves high performance, comparable to state-of-the-art super-resolution approaches and with better cross-domain generalisability and anatomical plausibility.
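
The latent-optimisation idea can be sketched as follows, with a stand-in decoder and a crude degradation model (plain trilinear downsampling in place of the paper's motion and multi-view model): optimise a latent code so that the degraded, decoded segmentation matches the observed low-resolution input.

import torch
import torch.nn.functional as F

def latent_superres(decoder, lr_seg, latent_dim=64, steps=200, lr=1e-2):
    """Iterative latent optimisation (sketch): decode a candidate high-resolution
    segmentation, degrade it to the low-resolution grid, and match the observation.
    `decoder` is an assumed pretrained generative model of plausible anatomy."""
    z = torch.zeros(lr_seg.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        hr = decoder(z)                                      # (N, C, D, H, W) logits
        degraded = F.interpolate(hr, size=lr_seg.shape[2:],  # crude stand-in for the
                                 mode="trilinear",           # motion/low-res model
                                 align_corners=False)
        loss = F.cross_entropy(degraded, lr_seg.argmax(1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return decoder(z).argmax(1)                              # plausible HR segmentation

# stand-in decoder and low-resolution input, for illustration only
decoder = torch.nn.Sequential(torch.nn.Linear(64, 4 * 16 * 32 * 32),
                              torch.nn.Unflatten(1, (4, 16, 32, 32)))
lr_seg = F.one_hot(torch.randint(0, 4, (1, 4, 64, 64)), 4).permute(0, 4, 1, 2, 3).float()
hr_seg = latent_superres(decoder, lr_seg, steps=20)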

Conference paper

Chen C, Hammernik K, Ouyang C, Qin C, Bai W, Rueckert D et al., 2021, Cooperative training and latent space data augmentation for robust medical image segmentation, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)

Conference paper

Hammernik K, Schlemper J, Qin C, Duan J, Summers RM, Rueckert D et al., 2021, Systematic evaluation of iterative deep neural networks for fast parallel MRI reconstruction with sensitivity-weighted coil combination, Magnetic Resonance in Medicine, Vol: 86, Pages: 1859-1872, ISSN: 0740-3194

Journal article

Qiu H, Qin C, Schuh A, Hammernik K, Rueckert D et al., 2021, Learning Diffeomorphic and Modality-invariant Registration using B-splines, Pages: 645-664

We present a deep learning (DL) registration framework for fast mono-modal and multi-modal image registration using differentiable mutual information and diffeomorphic B-spline free-form deformation (FFD). Deep learning registration has been shown to achieve competitive accuracy and significant speedups over traditional iterative registration methods. In this paper, we propose to use a B-spline FFD parameterisation of the stationary velocity field (SVF) in DL registration in order to achieve smooth diffeomorphic deformations while being computationally efficient. In contrast to most DL registration methods, which use intensity similarity metrics that assume a linear intensity relationship, we apply a differentiable variant of a classic similarity metric, mutual information, to achieve robust mono-modal and multi-modal registration. We carefully evaluated our proposed framework on mono- and multi-modal registration using 3D brain MR images and 2D cardiac MR images.
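
The diffeomorphic part of such a parameterisation can be sketched with scaling and squaring: integrate a stationary velocity field, here crudely obtained by bicubic upsampling of coarse control-point velocities in place of a true cubic B-spline FFD, into a displacement field. The differentiable mutual information term is omitted; this is an illustrative sketch, not the authors' implementation.

import torch
import torch.nn.functional as F

def identity_grid(n, h, w):
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    return torch.stack((xs, ys), dim=-1)[None].expand(n, -1, -1, -1)   # (N, H, W, 2)

def svf_to_displacement(velocity, n_squaring=6):
    """Scaling and squaring: integrate a stationary velocity field (N, 2, H, W),
    given in normalised grid coordinates, into a diffeomorphic displacement."""
    n, _, h, w = velocity.shape
    disp = velocity / (2 ** n_squaring)
    grid0 = identity_grid(n, h, w)
    for _ in range(n_squaring):                    # phi <- phi o phi
        grid = grid0 + disp.permute(0, 2, 3, 1)
        disp = disp + F.grid_sample(disp, grid, align_corners=True)
    return disp

# coarse control-point velocities, upsampled to a dense field (bicubic interpolation
# stands in for the cubic B-spline FFD kernel here)
cp_vel = 0.05 * torch.randn(1, 2, 8, 8)
vel = F.interpolate(cp_vel, size=(64, 64), mode="bicubic", align_corners=True)
disp = svf_to_displacement(vel)
img = torch.rand(1, 1, 64, 64)
warped = F.grid_sample(img, identity_grid(1, 64, 64) + disp.permute(0, 2, 3, 1),
                       align_corners=True)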

Conference paper

Li S, Xie M, Lv F, Liu CH, Liang J, Qin C, Li W et al., 2021, Semantic Concentration for Domain Adaptation, Pages: 9082-9091, ISSN: 1550-5499

Domain adaptation (DA) addresses label annotation and dataset bias issues by transferring knowledge from a label-rich source domain to a related but unlabeled target domain. A mainstream of DA methods is to align the feature distributions of the two domains. However, the majority of them focus on the entire image features, where irrelevant semantic information, e.g., the messy background, is inevitably embedded. Enforcing feature alignment in such cases will negatively influence the correct matching of objects and consequently lead to semantically negative transfer due to the confusion of irrelevant semantics. To tackle this issue, we propose Semantic Concentration for Domain Adaptation (SCDA), which encourages the model to concentrate on the most principal features via pair-wise adversarial alignment of prediction distributions. Specifically, we train the classifier to class-wisely maximize the prediction distribution divergence of each sample pair, which enables the model to find the regions with large differences among the same class of samples. Meanwhile, the feature extractor attempts to minimize that discrepancy, which suppresses the features of dissimilar regions among the same class of samples and accentuates the features of principal parts. As a general method, SCDA can be easily integrated into various DA methods as a regularizer to further boost their performance. Extensive experiments on cross-domain benchmarks show the efficacy of SCDA.

Conference paper

Li S, Lv F, Xie B, Liu CH, Liang J, Qin C et al., 2021, Bi-Classifier Determinacy Maximization for Unsupervised Domain Adaptation, Pages: 8455-8464

Unsupervised domain adaptation addresses the problem of transferring knowledge from a well-labelled source domain to an unlabelled target domain. Recently, adversarial learning with a bi-classifier has proven effective in pushing cross-domain distributions closer. Prior approaches typically leverage the disagreement between the two classifiers to learn transferable representations; however, they often neglect classifier determinacy in the target domain, which can result in a lack of feature discriminability. In this paper, we present a simple yet effective method, namely Bi-Classifier Determinacy Maximization (BCDM), to tackle this problem. Motivated by the observation that target samples cannot always be separated distinctly by the decision boundary, in the proposed BCDM we design a novel classifier determinacy disparity (CDD) metric, which formulates classifier discrepancy as the class relevance of distinct target predictions and implicitly introduces a constraint on target feature discriminability. To this end, the BCDM can generate discriminative representations by encouraging target predictive outputs to be consistent and determined, while preserving the diversity of predictions in an adversarial manner. Furthermore, the properties of CDD as well as the theoretical guarantees on BCDM's generalization bound are both elaborated. Extensive experiments show that BCDM compares favorably against existing state-of-the-art domain adaptation methods.

Conference paper

Johnson PM, Jeong G, Hammernik K, Schlemper J, Qin C, Duan J, Rueckert D, Lee J, Pezzotti N, De Weerdt E, Yousefi S, Elmahdy MS, Van Gemert JHF, Schuelke C, Doneva M, Nielsen T, Kastryulin S, Lelieveldt BPF, Van Osch MJP, Staring M, Chen EZ, Wang P, Chen X, Chen T, Patel VM, Sun S, Shin H, Jun Y, Eo T, Kim S, Kim T, Hwang D, Putzky P, Karkalousos D, Teuwen J, Miriakov N, Bakker B, Caan M, Welling M, Muckley MJ, Knoll F et al., 2021, Evaluation of the Robustness of Learned MR Image Reconstruction to Systematic Deviations Between Training and Test Data for the Models from the fastMRI Challenge, 4th International Workshop on Machine Learning for Medical Reconstruction (MLMIR) held as part of the 24th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing AG, Pages: 25-34, ISSN: 0302-9743

Conference paper

Wang S, Tarroni G, Qin C, Mo Y, Dai C, Chen C, Glocker B, Guo Y, Rueckert D, Bai W et al., 2020, Deep generative model-based quality control for cardiac MRI segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 88-97, ISSN: 0302-9743

In recent years, convolutional neural networks have demonstrated promising performance in a variety of medical image segmentation tasks. However, when a trained segmentation model is deployed into the real clinical world, the model may not perform optimally. A major challenge is the potential poor-quality segmentations generated due to degraded image quality or domain shift issues. There is a timely need to develop an automated quality control method that can detect poor segmentations and feedback to clinicians. Here we propose a novel deep generative model-based framework for quality control of cardiac MRI segmentation. It first learns a manifold of good-quality image-segmentation pairs using a generative model. The quality of a given test segmentation is then assessed by evaluating the difference from its projection onto the good-quality manifold. In particular, the projection is refined through iterative search in the latent space. The proposed method achieves high prediction accuracy on two publicly available cardiac MRI datasets. Moreover, it shows better generalisation ability than traditional regression-based methods. Our approach provides a real-time and model-agnostic quality control for cardiac MRI segmentation, which has the potential to be integrated into clinical image analysis workflows.
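
A minimal sketch of the projection step, assuming a pretrained decoder over stacked image-segmentation channels (the stand-in below is untrained): search the latent space so the decoded pair matches the test pair, and use the remaining residual as the quality score.

import torch
import torch.nn.functional as F

def quality_score(vae_decoder, pair, latent_dim=32, steps=100, lr=1e-2):
    """Quality control by latent projection (sketch): refine a latent code so the
    decoder matches the image-segmentation pair; a large residual means the pair
    lies far from the learned good-quality manifold."""
    z = torch.zeros(pair.shape[0], latent_dim, requires_grad=True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        loss = F.mse_loss(vae_decoder(z), pair)
        opt.zero_grad()
        loss.backward()
        opt.step()
    with torch.no_grad():
        return F.mse_loss(vae_decoder(z), pair, reduction="none").mean(dim=(1, 2, 3))

# stand-in decoder over image + segmentation stacked as two channels
vae_decoder = torch.nn.Sequential(torch.nn.Linear(32, 2 * 64 * 64),
                                  torch.nn.Unflatten(1, (2, 64, 64)))
pair = torch.rand(4, 2, 64, 64)
scores = quality_score(vae_decoder, pair, steps=20)   # higher -> poorer quality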

Conference paper

Qin C, Wang S, Chen C, Qiu H, Bai W, Rueckert D et al., 2020, Biomechanics-informed neural networks for myocardial motion tracking in MRI, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 296-306, ISSN: 0302-9743

Image registration is an ill-posed inverse problem which often requires regularisation on the solution space. In contrast to most of the current approaches which impose explicit regularisation terms such as smoothness, in this paper we propose a novel method that can implicitly learn biomechanics-informed regularisation. Such an approach can incorporate application-specific prior knowledge into deep learning based registration. Particularly, the proposed biomechanics-informed regularisation leverages a variational autoencoder (VAE) to learn a manifold for biomechanically plausible deformations and to implicitly capture their underlying properties via reconstructing biomechanical simulations. The learnt VAE regulariser then can be coupled with any deep learning based registration network to regularise the solution space to be biomechanically plausible. The proposed method is validated in the context of myocardial motion tracking on 2D stacks of cardiac MRI data from two different datasets. The results show that it can achieve better performance against other competing methods in terms of motion tracking accuracy and has the ability to learn biomechanical properties such as incompressibility and strains. The method has also been shown to have better generalisability to unseen domains compared with commonly used L2 regularisation schemes.
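
The regularisation idea can be sketched as follows, with an untrained stand-in VAE (in the paper the VAE is trained on biomechanical simulations): the registration loss adds a penalty based on how poorly the VAE reconstructs the predicted displacement field, i.e. how far it lies from the learned manifold of plausible deformations.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAERegulariser(nn.Module):
    """Biomechanics-informed regularisation (sketch): penalise displacement fields
    that the pretrained deformation VAE cannot reconstruct well. The encoder and
    decoder here are untrained stand-ins."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Flatten(), nn.Linear(2 * 32 * 32, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 2 * 32 * 32),
                                 nn.Unflatten(1, (2, 32, 32)))

    def forward(self, disp):
        mu, logvar = self.enc(disp).chunk(2, dim=1)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        recon = self.dec(z)
        kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return F.mse_loss(recon, disp) + 1e-3 * kld      # plausibility penalty

reg = VAERegulariser()                       # assume weights already trained
disp = torch.randn(2, 2, 32, 32)             # displacement predicted by a registration net
similarity = torch.tensor(0.0)               # image similarity term of the registration loss
total_loss = similarity + 0.1 * reg(disp)    # regularised registration objective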

Conference paper

Chen C, Qin C, Qiu H, Ouyang C, Wang S, Chen L, Tarroni G, Bai W, Rueckert D et al., 2020, Realistic adversarial data augmentation for MR image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)

Neural network-based approaches can achieve high accuracy in various medical image segmentation tasks. However, they generally require large labelled datasets for supervised learning. Acquiring and manually labelling a large medical dataset is expensive and sometimes impractical due to data sharing and privacy issues. In this work, we propose an adversarial data augmentation method for training neural networks for medical image segmentation. Instead of generating pixel-wise adversarial attacks, our model generates plausible and realistic signal corruptions, which model the intensity inhomogeneities caused by a common type of artefact in MR imaging: bias field. The proposed method does not rely on generative networks, and can be used as a plug-in module for general segmentation networks in both supervised and semi-supervised learning. Using cardiac MR imaging we show that such an approach can improve the generalization ability and robustness of models as well as provide significant improvements in low-data scenarios.
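
The bias-field corruption itself can be sketched as a smooth multiplicative field interpolated from a coarse control-point grid whose values are nudged by one adversarial gradient step against the segmentation loss; the network, grid size and step size below are placeholders, not the authors' settings.

import torch
import torch.nn.functional as F

def apply_bias_field(image, control, clamp=(0.5, 1.5)):
    """Smooth multiplicative bias field interpolated from coarse control points."""
    field = F.interpolate(control, size=image.shape[-2:], mode="bicubic",
                          align_corners=True)
    return image * field.clamp(*clamp)

seg_net = torch.nn.Conv2d(1, 4, 3, padding=1)               # stand-in segmentation net
img, lbl = torch.rand(2, 1, 64, 64), torch.randint(0, 4, (2, 64, 64))
control = torch.ones(2, 1, 4, 4, requires_grad=True)        # 4x4 control-point grid
loss = F.cross_entropy(seg_net(apply_bias_field(img, control)), lbl)
(g,) = torch.autograd.grad(loss, control)
with torch.no_grad():
    control += 0.1 * g.sign()            # one adversarial step: a harder bias field
hard_img = apply_bias_field(img, control).detach()          # train on this example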

Conference paper

Chen C, Qin C, Qiu H, Tarroni G, Duan J, Bai W, Rueckert D et al., 2020, Deep learning for cardiac image segmentation: A review, Frontiers in Cardiovascular Medicine, Vol: 7, Pages: 1-33, ISSN: 2297-055X

Deep learning has become the most widely used approach for cardiac image segmentation in recent years. In this paper, we provide a review of over 100 cardiac image segmentation papers using deep learning, which covers common imaging modalities including magnetic resonance imaging (MRI), computed tomography (CT), and ultrasound (US) and major anatomical structures of interest (ventricles, atria and vessels). In addition, a summary of publicly available cardiac image datasets and code repositories is included to provide a base for encouraging reproducible research. Finally, we discuss the challenges and limitations of current deep learning-based approaches (scarcity of labels, model generalizability across different domains, interpretability) and suggest potential directions for future research.

Journal article

Lu P, Qiu H, Qin C, Bai W, Rueckert D, Noble JA et al., 2020, Going Deeper into Cardiac Motion Analysis to Model Fine Spatio-Temporal Features, 24th Conference on Medical Image Understanding and Analysis (MIUA), Publisher: Springer International Publishing AG, Pages: 294-306, ISSN: 1865-0929

Conference paper

Qiu H, Qin C, Le Folgoc L, Hou B, Schlemper J, Rueckert D et al., 2020, Deep Learning for Cardiac Motion Estimation: Supervised vs. Unsupervised Training, 10th International Workshop on Statistical Atlases and Computational Modelling of the Heart (STACOM), Publisher: Springer International Publishing AG, Pages: 186-194, ISSN: 0302-9743

Conference paper

Duan J, Schlemper J, Qin C, Ouyang C, Bai W, Biffi C, Bello G, Statton B, O'Regan DP, Rueckert D et al., 2019, VS-Net: variable splitting network for accelerated parallel MRI reconstruction, International Conference on Medical Image Computing and Computer-Assisted Intervention, Publisher: Springer International Publishing, Pages: 713-722, ISSN: 0302-9743

In this work, we propose a deep learning approach for parallel magnetic resonance imaging (MRI) reconstruction, termed a variable splitting network (VS-Net), for an efficient, high-quality reconstruction of undersampled multi-coil MR data. We formulate the generalized parallel compressed sensing reconstruction as an energy minimization problem, for which a variable splitting optimization method is derived. Based on this formulation we propose a novel, end-to-end trainable deep neural network architecture by unrolling the resulting iterative process of such variable splitting scheme. VS-Net is evaluated on complex valued multi-coil knee images for 4-fold and 6-fold acceleration factors. We show that VS-Net outperforms state-of-the-art deep learning reconstruction algorithms, in terms of reconstruction accuracy and perceptual quality. Our code is publicly available at https://github.com/j-duan/VS-Net.
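
The data-consistency half of such an unrolled scheme can be sketched (single-coil for brevity; VS-Net itself handles multi-coil data with sensitivity weighting) as a weighted average between the current k-space estimate and the measured samples at acquired locations.

import torch

def weighted_data_consistency(x, k0, mask, lam=10.0):
    """Soft data consistency used in unrolled reconstruction (single-coil sketch):
    at sampled k-space locations, average the current estimate with the measured
    data according to a weight lambda (learnable in practice)."""
    k = torch.fft.fft2(x, norm="ortho")
    k_dc = (1 - mask) * k + mask * (k + lam * k0) / (1 + lam)
    return torch.fft.ifft2(k_dc, norm="ortho")

x = torch.randn(1, 1, 64, 64, dtype=torch.complex64)      # current image estimate
mask = (torch.rand(1, 1, 64, 64) > 0.75).float()           # undersampling pattern
k0 = mask * torch.fft.fft2(x, norm="ortho")                # measured k-space samples
x_dc = weighted_data_consistency(x, k0, mask)
# an unrolled network alternates: x <- cnn(x); x <- weighted_data_consistency(x, k0, mask)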

Conference paper

Qin C, Hajnal JV, Rueckert D, Schlemper J, Caballero J, Price AN et al., 2019, Convolutional recurrent neural networks for dynamic MR image reconstruction, IEEE Transactions on Medical Imaging, Vol: 38, Pages: 280-290, ISSN: 0278-0062

Accelerating the data acquisition of dynamic magnetic resonance imaging (MRI) leads to a challenging ill-posed inverse problem, which has received great interest from both the signal processing and machine learning communities over the last decades. The key ingredient to the problem is how to exploit the temporal correlations of the MR sequence to resolve aliasing artefacts. Traditionally, such observation led to a formulation of an optimisation problem, which was solved using iterative algorithms. Recently, however, deep learning based-approaches have gained significant popularity due to their ability to solve general inverse problems. In this work, we propose a unique, novel convolutional recurrent neural network (CRNN) architecture which reconstructs high quality cardiac MR images from highly undersampled k-space data by jointly exploiting the dependencies of the temporal sequences as well as the iterative nature of the traditional optimisation algorithms. In particular, the proposed architecture embeds the structure of the traditional iterative algorithms, efficiently modelling the recurrence of the iterative reconstruction stages by using recurrent hidden connections over such iterations. In addition, spatio-temporal dependencies are simultaneously learnt by exploiting bidirectional recurrent hidden connections across time sequences. The proposed method is able to learn both the temporal dependency and the iterative reconstruction process effectively with only a very small number of parameters, while outperforming current MR reconstruction methods in terms of reconstruction accuracy and speed.
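
A simplified sketch of the recurrence structure (not the paper's full bidirectional CRNN-MRI model): a convolutional recurrent cell whose hidden state is carried both across unrolled reconstruction iterations and across temporal frames, to be interleaved with a data-consistency step such as the one sketched for VS-Net above.

import torch
import torch.nn as nn

class CRNNCell(nn.Module):
    """Convolutional recurrent unit (simplified): the hidden state is propagated
    both over unrolled reconstruction iterations and over the temporal dimension."""
    def __init__(self, channels=32):
        super().__init__()
        self.in_conv = nn.Conv2d(2, channels, 3, padding=1)           # real/imag input
        self.it_conv = nn.Conv2d(channels, channels, 3, padding=1)    # iteration recurrence
        self.t_conv = nn.Conv2d(channels, channels, 3, padding=1)     # temporal recurrence
        self.out_conv = nn.Conv2d(channels, 2, 3, padding=1)

    def forward(self, x_t, h_prev_iter, h_prev_time):
        h = torch.relu(self.in_conv(x_t) + self.it_conv(h_prev_iter)
                       + self.t_conv(h_prev_time))
        return x_t + self.out_conv(h), h                # residual update, new hidden state

cell = CRNNCell()
T, N = 8, 1                                             # 8 cardiac frames, batch of 1
frames = [torch.randn(N, 2, 64, 64) for _ in range(T)]
h_iter = [torch.zeros(N, 32, 64, 64) for _ in range(T)]
for it in range(3):                                     # unrolled reconstruction iterations
    h_time = torch.zeros(N, 32, 64, 64)
    new_frames, new_h_iter = [], []
    for t in range(T):                                  # bidirectional in the paper; forward here
        x_t, h = cell(frames[t], h_iter[t], h_time)
        new_frames.append(x_t)
        new_h_iter.append(h)
        h_time = h
    frames, h_iter = new_frames, new_h_iter
    # a data-consistency step on each frame would follow here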

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
