Imperial College London

Dr Guang Yang

Faculty of Engineering, Department of Bioengineering

Senior Lecturer
 
 
 

Contact

 

g.yang

 
 

Location

 

229, Sir Michael Uren Hub, White City Campus



Publications


272 results found

Huang J, Xing X, Gao Z, Yang G et al., 2022, Swin Deformable Attention U-Net Transformer (SDAUT) for Explainable Fast MRI, MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VI, Vol: 13436, Pages: 538-548, ISSN: 0302-9743

Journal article

Xing X, Huang J, Nan Y, Wu Y, Wang C, Gao Z, Walsh S, Yang G et al., 2022, CS²: A Controllable and Simultaneous Synthesizer of Images and Annotations with Minimal Human Intervention, MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT VIII, Vol: 13438, Pages: 3-12, ISSN: 0302-9743

Journal article

Tanzer M, Yook SH, Ferreira P, Yang G, Rueckert D, Nielles-Vallespin S et al., 2022, Review of Data Types and Model Dimensionality for Cardiac DTI SMS-Related Artefact Removal, STATISTICAL ATLASES AND COMPUTATIONAL MODELS OF THE HEART: REGULAR AND CMRXMOTION CHALLENGE PAPERS, STACOM 2022, Vol: 13593, Pages: 123-132, ISSN: 0302-9743

Journal article

Liu Y, Miao Q, Surawech C, Zheng H, Nguyen D, Yang G, Raman S, Sung K et al., 2021, Deep learning enables prostate MRI segmentation: a large cohort evaluation with inter-rater variability analysis, Frontiers in Oncology, Vol: 11, ISSN: 2234-943X

Whole-prostate gland (WPG) segmentation plays a significant role in prostate volume measurement, treatment, and biopsy planning. This study evaluated a previously developed automatic WPG segmentation, deep attentive neural network (DANN), on a large, continuous patient cohort to test its feasibility in a clinical setting. With IRB approval and HIPAA compliance, the study cohort included 3,698 3T MRI scans acquired between 2016 and 2020. In total, 335 MRI scans were used to train the model, and 3,210 and 100 were used to conduct the qualitative and quantitative evaluation of the model. In addition, the DANN-enabled prostate volume estimation was evaluated by using 50 MRI scans in comparison with manual prostate volume estimation. For qualitative evaluation, visual grading was used to evaluate the performance of WPG segmentation by two abdominal radiologists, and DANN demonstrated either acceptable or excellent performance in over 96% of the testing cohort on the WPG or each prostate sub-portion (apex, midgland, or base). Two radiologists reached a substantial agreement on WPG and midgland segmentation (κ=0.75 and 0.63) and moderate agreement on apex and base segmentation (κ=0.56 and 0.60). For quantitative evaluation, DANN demonstrated a Dice similarity coefficient of 0.93±0.02, significantly higher than other baseline methods, such as Deeplab v3+ and UNet (both p values <0.05). For the volume measurement, 96% of the evaluation cohort achieved differences between the DANN-enabled and manual volume measurements within 95% limits of agreement. In conclusion, the study showed that the DANN achieved sufficient and consistent WPG segmentation on a large, continuous study cohort, demonstrating its great potential to serve as a tool to measure prostate volume.
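As an aside for readers reproducing such metrics: the Dice similarity coefficient reported above compares two binary segmentation masks. A minimal pure-Python sketch (the toy masks are illustrative, not the study's data):

```python
def dice_coefficient(pred, truth):
    """Dice similarity coefficient between two flat binary masks."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    return 2.0 * intersection / total if total else 1.0

# Toy 1-D masks standing in for flattened segmentation volumes.
pred  = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 1, 1, 0]
print(dice_coefficient(pred, truth))  # → 0.75
```

In practice the masks are 3-D volumes flattened to arrays; the formula is unchanged.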

Journal article

Astaraki M, Yang G, Zakko Y, Toma-Dasu L, Smedby Ö, Wang C et al., 2021, A comparative study of radiomics and deep-learning based methods for pulmonary nodule malignancy prediction in low dose CT images, Frontiers in Oncology, Vol: 11, ISSN: 2234-943X

Objectives: Both radiomics and deep learning methods have shown great promise in predicting lesion malignancy in various image-based oncology studies. However, it is still unclear which method to choose for a specific clinical problem given access to the same amount of training data. In this study, we try to compare the performance of a series of carefully selected conventional radiomics methods, end-to-end deep learning models, and deep-feature based radiomics pipelines for pulmonary nodule malignancy prediction on an open database that consists of 1297 manually delineated lung nodules.
Methods: Conventional radiomics analysis was conducted by extracting standard handcrafted features from target nodule images. Several end-to-end deep classifier networks, including VGG, ResNet, DenseNet, and EfficientNet were employed to identify lung nodule malignancy as well. In addition to the baseline implementations, we also investigated the importance of feature selection and class balancing, as well as separating the features learned in the nodule target region and the background/context region. By pooling the radiomics and deep features together in a hybrid feature set, we investigated the compatibility of these two sets with respect to malignancy prediction.
Results: The best baseline conventional radiomics model, deep learning model, and deep-feature based radiomics model achieved AUROC values (mean±standard deviations) of 0.792±0.025, 0.801±0.018, and 0.817±0.032, respectively, through 5-fold cross-validation analyses. However, after trying out several optimization techniques, such as feature selection and data balancing, as well as adding context features, the corresponding best radiomics, end-to-end deep learning, and deep-feature based models achieved AUROC values of 0.921±0.010, 0.824±0.021, and 0.936±0.011, respectively. We achieved the best prediction accuracy from the hybrid feature set (AUROC: 0.938±0.010).
Conclusion: The

Journal article

Zhang W, Yang G, Huang H, Yang W, Xu X, Liu Y, Lai X et al., 2021, ME-Net: Multi-encoder net framework for brain tumor segmentation, INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, Vol: 31, Pages: 1834-1848, ISSN: 0899-9457

Journal article

Long J, Sun D, Zhou X, Huang X, Hu J, Xia J, Yang G et al., 2021, A mathematical model for predicting intracranial pressure based on noninvasively acquired PC-MRI parameters in communicating hydrocephalus, JOURNAL OF CLINICAL MONITORING AND COMPUTING, Vol: 35, Pages: 1325-1332, ISSN: 1387-1307

Journal article

Huang Y, Dumeny L, Yang G, Lteif C, Arwood MJ, McDonough CW, Desai AA, Cavallari LH, Duarte JD et al., 2021, Genome-Wide Association Study Identifies Polymorphisms Associated with Heart Failure Mortality in a Diverse Patient Population, Annual Scientific Sessions of the American-Heart-Association / Resuscitation Science Symposium, Publisher: LIPPINCOTT WILLIAMS & WILKINS, ISSN: 0009-7322

Conference paper

Yang G, Liu T, Zhou Z, Bo K, Gao Y, Wang H, Wang R, Liu W, Chang S, Liu Y, Sun Y, Firmin D, Yang G, Dong J, Xu L et al., 2021, Association between left ventricular global function index and outcomes in patients with dilated cardiomyopathy, Frontiers in Cardiovascular Medicine, Vol: -, ISSN: 2297-055X

Purpose: Left ventricular global function index (LVGFI) assessed using cardiac magnetic resonance (CMR) seems promising in the prediction of clinical outcomes. However, the role of the LVGFI is uncertain in patients with heart failure (HF) with dilated cardiomyopathy (DCM). To describe the association of LVGFI and outcomes in patients with DCM, it was hypothesized that LVGFI is associated with decreased major adverse cardiac events (MACEs) in patients with DCM.
Materials and Methods: This prospective cohort study was conducted from January 2015 to April 2020 in consecutive patients with DCM who underwent CMR. The association between outcomes and LVGFI was assessed using a multivariable model adjusted for confounders. LVGFI was the primary exposure variable. The long-term outcome was a composite endpoint, including death or heart transplantation.
Results: A total of 334 patients (mean age: 55 years) were included in this study. The average CMR-LVGFI was 16.53%. Over a median follow-up of 565 days, 43 patients reached the composite endpoint. Kaplan–Meier analysis revealed that patients with LVGFI lower than the cut-off value (15.73%) had a higher estimated cumulative incidence of the endpoint compared to those with LVGFI higher than the cut-off value (P=0.0021). The hazard of MACEs decreased by 38% for each 1 SD increase in LVGFI (hazard ratio 0.62 [95% CI 0.43-0.91]) and, after adjustment, by 46% (HR 0.54 [95% CI 0.32-0.89]). The association was consistent across subgroup analyses.
Conclusion: In this study, an increase in CMR-LVGFI was associated with a decreased long-term risk of MACEs in patients with DCM after adjustment for traditional confounders.

Journal article

Liu Y, Zheng H, Liang Z, Miao Q, Brisbane W, Marks L, Raman S, Reiter R, Yang G, Sung K et al., 2021, Textured-based deep learning in prostate cancer classification with 3T multiparametric MRI: comparison with PI-RADS-based classification, Diagnostics, Vol: 11, Pages: 1-14, ISSN: 2075-4418

The current standardized scheme for interpreting MRI requires a high level of expertise and exhibits a significant degree of inter-reader and intra-reader variability. An automated prostate cancer (PCa) classification can improve the ability of MRI to assess the spectrum of PCa. The purpose of the study was to evaluate the performance of a texture-based deep learning model (Textured-DL) for differentiating between clinically significant PCa (csPCa) and non-csPCa and to compare the Textured-DL with Prostate Imaging Reporting and Data System (PI-RADS)-based classification (PI-RADS-CLA), where a threshold of PI-RADS ≥ 4, representing highly suspicious lesions for csPCa, was applied. The study cohort included 402 patients (60% (n = 239) of patients for training, 10% (n = 42) for validation, and 30% (n = 121) for testing) with 3T multiparametric MRI matched with whole-mount histopathology after radical prostatectomy. For a given suspicious prostate lesion, the volumetric patches of T2-weighted MRI and apparent diffusion coefficient images were cropped and used as the input to Textured-DL, consisting of a 3D gray-level co-occurrence matrix extractor and a CNN. PI-RADS-CLA by an expert reader served as a baseline to compare classification performance with Textured-DL in differentiating csPCa from non-csPCa. Sensitivity and specificity comparisons were performed using McNemar's test. Bootstrapping with 1000 samples was performed to estimate the 95% confidence interval (CI) for AUC. CIs of sensitivity and specificity were calculated by the Wald method. The Textured-DL model achieved an AUC of 0.85 (CI [0.79, 0.91]), which was significantly higher than the PI-RADS-CLA (AUC of 0.73 (CI [0.65, 0.80]); p < 0.05) for PCa classification, and the specificity was significantly different between Textured-DL and PI-RADS-CLA (0.70 (CI [0.59, 0.82]) vs. 0.47 (CI [0.35, 0.59]); p < 0.05). In sub-analyses, Textured-DL demonstrated significantly higher specificities in the p
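For context, the gray-level co-occurrence matrix (GLCM) underlying the texture features above tallies how often pairs of intensity levels co-occur at a fixed pixel offset. A minimal 2-D pure-Python sketch (the paper's extractor is 3-D; the toy image and offset here are illustrative):

```python
def glcm(image, dr, dc, levels):
    """Count co-occurrences of grey levels for pixel pairs at offset (dr, dc)."""
    counts = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dr, c + dc
            if 0 <= r2 < rows and 0 <= c2 < cols:
                counts[image[r][c]][image[r2][c2]] += 1
    return counts

# 4-level toy image; offset (0, 1) pairs each pixel with its right-hand neighbour.
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 2, 2, 2],
       [2, 2, 3, 3]]
m = glcm(img, 0, 1, 4)
print(m[0][0], m[1][1], m[2][2])  # → 2 2 3
```

Texture descriptors (contrast, homogeneity, etc.) are then derived from the normalized matrix; libraries such as scikit-image provide optimized implementations.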

Journal article

Jiang M, Zhi M, Wei L, Yang X, Zhang J, Li Y, Wang P, Huang J, Yang G et al., 2021, FA-GAN: fused attentive generative adversarial networks for MRI image super-resolution, Computerized Medical Imaging and Graphics, Vol: 92, Pages: 1-11, ISSN: 0895-6111

High-resolution magnetic resonance images can provide fine-grained anatomical information, but acquiring such data requires a long scanning time. In this paper, a framework called the Fused Attentive Generative Adversarial Networks (FA-GAN) is proposed to generate super-resolution MR images from low-resolution magnetic resonance images, which can reduce the scanning time effectively while still providing high-resolution MR images. In the framework of the FA-GAN, the local fusion feature block, consisting of different three-pass networks using different convolution kernels, is proposed to extract image features at different scales. And the global feature fusion module, including the channel attention module, the self-attention module, and the fusion operation, is designed to enhance the important features of the MR image. Moreover, the spectral normalization process is introduced to make the discriminator network stable. 40 sets of 3D magnetic resonance images (each set of images contains 256 slices) are used to train the network, and 10 sets of images are used to test the proposed method. The experimental results show that the PSNR and SSIM values of the super-resolution magnetic resonance images generated by the proposed FA-GAN method are higher than those of the state-of-the-art reconstruction methods.
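The PSNR figure of merit cited above is straightforward to compute directly; a minimal pure-Python sketch, assuming 8-bit images flattened to lists (the values are illustrative):

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two equally sized images (flat lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)

ref  = [52, 55, 61, 59, 79, 61, 76, 41]
test = [50, 55, 60, 59, 80, 60, 75, 41]
print(round(psnr(ref, test), 2))  # → 48.13
```

SSIM, the complementary metric, additionally compares local means, variances, and covariances rather than raw pixel error.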

Journal article

Wu Y, Tang Z, Li B, Firmin D, Yang G et al., 2021, Recent advances in fibrosis and scar segmentation from cardiac MRI: A state-of-the-art review and future perspectives, Frontiers in Physiology, Vol: 12, Pages: 1-23, ISSN: 1664-042X

Segmentation of cardiac fibrosis and scars is essential for clinical diagnosis and can provide invaluable guidance for the treatment of cardiac diseases. Late Gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) has been successful in guiding the clinical diagnosis and treatment reliably. For LGE CMR, many methods have demonstrated success in accurately segmenting scarring regions. Co-registration with other non-contrast-agent (non-CA) modalities [e.g., balanced steady-state free precession (bSSFP) cine magnetic resonance imaging (MRI)] can further enhance the efficacy of automated segmentation of cardiac anatomies. Many conventional methods have been proposed to provide automated or semi-automated segmentation of scars. With the development of deep learning in recent years, we can also see more advanced methods that are more efficient in providing more accurate segmentations. This paper conducts a state-of-the-art review of conventional and current state-of-the-art approaches utilizing different modalities for accurate cardiac fibrosis and scar segmentation.

Journal article

Li G, Lv J, Tong X, Wang C, Yang G et al., 2021, High-resolution pelvic MRI reconstruction using a generative adversarial network with attention and cyclic loss, IEEE Access, Vol: 9, Pages: 105951-105964, ISSN: 2169-3536

Magnetic resonance imaging (MRI) is an important medical imaging modality, but its acquisition speed is quite slow due to physiological limitations. Recently, super-resolution methods have shown excellent performance in accelerating MRI. In some circumstances, it is difficult to obtain high-resolution images even with prolonged scan time. Therefore, we proposed a novel super-resolution method that uses a generative adversarial network with cyclic loss and attention mechanism to generate high-resolution MR images from low-resolution MR images at upsampling factors of 2× and 4×. We implemented our model on pelvic images from healthy subjects as training and validation data, while data from patients were used for testing. The MR dataset was obtained using different imaging sequences, including T2, T2W SPAIR, and mDIXON-W. Four methods, i.e., BICUBIC, SRCNN, SRGAN, and EDSR, were used for comparison. Structural similarity, peak signal-to-noise ratio, root mean square error, and variance inflation factor were used as evaluation indicators to assess the performance of the proposed method. Various experimental results showed that our method can better restore the details of high-resolution MR images compared to the other methods. In addition, the reconstructed high-resolution MR images can provide better lesion textures in tumor patients, which is promising for use in clinical diagnosis.

Journal article

Wang C, Yang G, Papanastasiou G, Zhang H, Rodrigues JJPC, de Albuquerque VHC et al., 2021, Industrial Cyber-Physical Systems-Based Cloud IoT Edge for Federated Heterogeneous Distillation, IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, Vol: 17, Pages: 5511-5521, ISSN: 1551-3203

Journal article

Zhu J, Tan C, Yang J, Yang G, Lio' P et al., 2021, Arbitrary scale super-resolution for medical images, International Journal of Neural Systems, Vol: 31, Pages: 1-20, ISSN: 0129-0657

Single image super-resolution (SISR) aims to obtain a high-resolution output from one low-resolution image. Currently, deep learning-based SISR approaches have been widely discussed in medical image processing, because of their potential to achieve high-quality, high spatial resolution images without the cost of additional scans. However, most existing methods are designed for scale-specific SR tasks and are unable to generalize over magnification scales. In this paper, we propose an approach for medical image arbitrary-scale super-resolution (MIASSR), in which we couple meta-learning with generative adversarial networks (GANs) to super-resolve medical images at any scale of magnification in [Formula: see text]. Compared to state-of-the-art SISR algorithms on single-modal magnetic resonance (MR) brain images (OASIS-brains) and multi-modal MR brain images (BraTS), MIASSR achieves comparable fidelity performance and the best perceptual quality with the smallest model size. We also employ transfer learning to enable MIASSR to tackle SR tasks of new medical modalities, such as cardiac MR images (ACDC) and chest computed tomography images (COVID-CT). The source code of our work is also public. Thus, MIASSR has the potential to become a new foundational pre-/post-processing step in clinical image analysis tasks such as reconstruction, image quality enhancement, and segmentation.

Journal article

Liu T, Gao Y, Wang H, Zhou Z, Wang R, Chang S, Liu Y, Sun Y, Rui H, Yang G, Firmin D, Dong J, Xu L et al., 2021, Association between right ventricular strain and outcomes in patients with dilated cardiomyopathy, Heart, Vol: 107, Pages: 1233-1239, ISSN: 1355-6037

Objective: To explore the association between three-dimensional (3D) cardiac magnetic resonance (CMR) feature tracking (FT) right ventricular peak global longitudinal strain (RVpGLS) and major adverse cardiovascular events (MACEs) in patients with stage C or D heart failure (HF) with non-ischaemic dilated cardiomyopathy (NIDCM) but without atrial fibrillation (AF).
Methods: Patients with dilated cardiomyopathy were enrolled in this prospective cohort study. Comprehensive clinical and biochemical analysis and CMR imaging were performed. All patients were followed up for MACEs.
Results: A total of 192 patients (age 53±14 years) were eligible for this study. A combination of cardiovascular death and cardiac transplantation occurred in 18 subjects during the median follow-up of 567 (311, 920) days. Brain natriuretic peptide, creatinine, left ventricular (LV) end-diastolic volume, LV end-systolic volume, right ventricular (RV) end-diastolic volume and RVpGLS from CMR were associated with the outcomes. The multivariate Cox regression model adjusting for traditional risk factors and CMR variables detected a significant association between RVpGLS and MACEs in patients with stage C or D HF with NIDCM without AF. Kaplan-Meier analysis based on the RVpGLS cut-off value revealed that patients with RVpGLS <−8.5% showed more favourable clinical outcomes than those with RVpGLS ≥−8.5% (p=0.0037). Subanalysis found that this association remained unchanged.
Conclusions: RVpGLS derived from 3D CMR FT is associated with a significant prognostic impact in patients with NIDCM with stage C or D HF and without AF.
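As a side note on the survival analysis used above: the Kaplan-Meier estimator steps the survival probability down at each observed event time, with censored subjects leaving the risk set without causing a step. A minimal pure-Python sketch with toy follow-up data (illustrative, not the study cohort):

```python
from itertools import groupby

def kaplan_meier(times, events):
    """Kaplan-Meier survival curve.
    times: follow-up durations; events: 1 = event observed, 0 = censored."""
    data = sorted(zip(times, events))
    at_risk = len(data)
    surv = 1.0
    curve = []
    for t, group in groupby(data, key=lambda te: te[0]):
        group = list(group)
        deaths = sum(e for _, e in group)
        if deaths:                    # survival only drops at event times
            surv *= 1 - deaths / at_risk
            curve.append((t, surv))
        at_risk -= len(group)         # events and censorings both leave the risk set
    return curve

# Toy data: days of follow-up, with two censored subjects.
toy_times  = [100, 200, 200, 400, 600, 800]
toy_events = [1,   1,   0,   1,   0,   1]
print(kaplan_meier(toy_times, toy_events))
```

Libraries such as lifelines add confidence intervals and log-rank tests; the sketch shows only the point estimate.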

Journal article

Zhang W, Yang G, Zhang N, Xu L, Wang X, Zhang Y, Zhang H, Del Ser J, de Albuquerque VHC et al., 2021, Multi-task learning with Multi-view Weighted Fusion Attention for artery-specific calcification analysis, Information Fusion, Vol: 71, Pages: 64-76, ISSN: 1566-2535

In general, artery-specific calcification analysis comprises the simultaneous calcification segmentation and quantification tasks. It can help provide a thorough assessment for calcification of different coronary arteries, and further allow for an efficient and rapid diagnosis of cardiovascular diseases (CVD). However, as a high-dimensional multi-type estimation problem, artery-specific calcification analysis has not been profoundly investigated due to the intractability of obtaining discriminative feature representations. In this work, we propose a Multi-task learning network with Multi-view Weighted Fusion Attention (MMWFAnet) to solve this challenging problem. The MMWFAnet first employs a Multi-view Weighted Fusion Attention (MWFA) module to extract discriminative feature representations by enhancing the collaboration of multiple views. Specifically, MWFA weights these views to improve multi-view learning for calcification features. Based on the fusion of these multiple views, the proposed approach takes advantage of multi-task learning to obtain accurate segmentation and quantification of artery-specific calcification simultaneously. We perform experimental studies on 676 non-contrast Computed Tomography scans, achieving state-of-the-art performance in terms of multiple evaluation metrics. These compelling results evince that the proposed MMWFAnet is capable of improving the effectivity and efficiency of clinical CVD diagnosis.

Journal article

Lv J, Li G, Tong X, Chen W, Huang J, Wang C, Yang G et al., 2021, Transfer learning enhanced generative adversarial networks for multi-channel MRI reconstruction, Computers in Biology and Medicine, Vol: 134, Pages: 1-15, ISSN: 0010-4825

Deep learning based generative adversarial networks (GAN) can effectively perform image reconstruction with under-sampled MR data. In general, a large number of training samples are required to improve the reconstruction performance of a certain model. However, in real clinical applications, it is difficult to obtain tens of thousands of raw patient data to train the model since saving k-space data is not in the routine clinical flow. Therefore, enhancing the generalizability of a network based on small samples is urgently needed. In this study, three novel applications were explored based on parallel imaging combined with the GAN model (PI-GAN) and transfer learning. The model was pre-trained with public Calgary brain images and then fine-tuned for use in (1) patients with tumors in our center; (2) different anatomies, including knee and liver; (3) different k-space sampling masks with acceleration factors (AFs) of 2 and 6. As for the brain tumor dataset, the transfer learning results could remove the artifacts found in PI-GAN and yield smoother brain edges. The transfer learning results for the knee and liver were superior to those of the PI-GAN model trained with its own dataset using a smaller number of training cases. However, the learning procedure converged more slowly in the knee datasets compared to the learning in the brain tumor datasets. The reconstruction performance was improved by transfer learning both in the models with AFs of 2 and 6. Of these two models, the one with AF = 2 showed better results. The results also showed that transfer learning with the pre-trained model could solve the problem of inconsistency between the training and test datasets and facilitate generalization to unseen data.

Journal article

Driggs D, Selby I, Roberts M, Gkrania-Klotsas E, Rudd JHF, Yang G, Babar J, Sala E, Schonlieb C-B et al., 2021, Machine Learning for COVID-19 Diagnosis and Prognostication: Lessons for Amplifying the Signal While Reducing the Noise, RADIOLOGY-ARTIFICIAL INTELLIGENCE, Vol: 3, ISSN: 2638-6100

Journal article

Lv J, Zhu J, Yang G, 2021, Which GAN? A comparative study of generative adversarial network (GAN) based fast MRI reconstruction, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol: 379, Pages: 1-17, ISSN: 1364-503X

Fast magnetic resonance imaging (MRI) is crucial for clinical applications that can alleviate motion artefacts and increase patient throughput. K-space undersampling is an obvious approach to accelerate MR acquisition. However, undersampling of k-space data can result in blurring and aliasing artefacts in the reconstructed images. Recently, several studies have proposed using deep learning based data-driven models for MRI reconstruction and have obtained promising results. However, the comparison of these methods remains limited because the models have not been trained on the same datasets and the validation strategies may be different. The purpose of this work is to conduct a comparative study to investigate the generative adversarial network (GAN) based models for MRI reconstruction. We reimplemented and benchmarked four widely used GAN based architectures including DAGAN, ReconGAN, RefineGAN and KIGAN. These four frameworks were trained and tested on brain, knee and liver MRI images using 2-, 4- and 6-fold accelerations with a random undersampling mask. Both quantitative evaluations and qualitative visualisation have shown that the RefineGAN method achieved superior performance in reconstruction with better accuracy and perceptual quality compared to other GAN based methods.
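To illustrate the random k-space undersampling described above: the sketch below builds a 1-D mask over phase-encode lines at a given acceleration factor, keeping a fully sampled central band as real implementations typically do (a pure-Python sketch; the mask strategy and parameters are illustrative, not the paper's exact scheme):

```python
import random

def undersampling_mask(n_lines, acceleration, centre_fraction=0.08, seed=0):
    """Boolean mask over phase-encode lines: keep ~1/acceleration of them,
    always including a fully sampled centre band of low frequencies."""
    rng = random.Random(seed)
    centre = int(n_lines * centre_fraction)
    start = (n_lines - centre) // 2
    mask = [start <= i < start + centre for i in range(n_lines)]
    # Fill the remaining budget with randomly chosen outer lines.
    n_extra = max(n_lines // acceleration - centre, 0)
    remaining = [i for i in range(n_lines) if not mask[i]]
    for i in rng.sample(remaining, n_extra):
        mask[i] = True
    return mask

mask = undersampling_mask(256, 4)
print(sum(mask))  # → 64 lines kept, i.e. 256 / 4
```

In 2-D Cartesian MRI this mask would be broadcast along the frequency-encode direction and multiplied with the k-space data before the inverse FFT, producing the aliased input the GANs learn to de-alias.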

Journal article

Ma H, Ye Q, Ding W, Jiang Y, Wang M, Niu Z, Zhou X, Gao Y, Wang C, Menpes-Smith W, Fang EF, Shao J, Xia J, Yang G et al., 2021, Can clinical symptoms and laboratory results predict CT abnormality? Initial findings using novel machine learning techniques in children with COVID-19 infections, Frontiers in Medicine, Vol: 8, Pages: 1-10, ISSN: 2296-858X

The rapid spread of coronavirus disease 2019 (COVID-19) has manifested a global public health crisis, and chest CT has been proven to be a powerful tool for screening, triage, evaluation and prognosis in COVID-19 patients. However, CT is not only costly but also associated with an increased incidence of cancer, in particular for children. This study questions whether clinical symptoms and laboratory results can predict the CT outcomes for pediatric patients with positive RT-PCR testing results, in order to determine the necessity of CT for such a vulnerable group. Clinical data were collected from 244 consecutive pediatric patients (16 years of age and under) treated at Wuhan Children's Hospital with positive RT-PCR testing, and chest CT was performed within three days of clinical data collection, from January 21 to March 8, 2020. This study was approved by the local ethics committee of Wuhan Children's Hospital. Advanced decision tree based machine learning models were developed for the prediction of CT outcomes. Results have shown that age, lymphocyte, neutrophils, ferritin and C-reactive protein are the clinical indicators most related to CT outcomes for pediatric patients with positive RT-PCR testing. Our decision support system achieved an AUC of 0.84 with 0.82 accuracy and 0.84 sensitivity for predicting CT outcomes. Our model can effectively predict CT outcomes, and our findings indicate that the use of CT should be reconsidered for pediatric patients, as it may not be indispensable.

Journal article

Huang H, Yang G, Zhang W, Xu X, Yang W, Jiang W, Lai X et al., 2021, A deep multi-task learning framework for brain tumor segmentation, Frontiers in Oncology, Vol: 11, ISSN: 2234-943X

Glioma is the most common primary central nervous system tumor, accounting for about half of all intracranial primary tumors. As a non-invasive examination method, MRI has an extremely important guiding role in the clinical intervention of tumors. However, manually segmenting brain tumors from MRI requires a lot of time and energy from doctors, which affects the implementation of follow-up diagnosis and treatment plans. With the development of deep learning, medical image segmentation is gradually being automated. However, brain tumors are easily confused with strokes, and serious imbalances between classes make brain tumor segmentation one of the most difficult tasks in MRI segmentation. In order to solve these problems, we propose a deep multi-task learning framework and integrate a multi-depth fusion module in the framework to accurately segment brain tumors. In this framework, we have added a distance transform decoder based on the V-Net, which can make the segmentation contour generated by the mask decoder more accurate and reduce the generation of rough boundaries. In order to combine the different tasks of the two decoders, we weighted and added their corresponding loss functions, where the distance map prediction regularized the mask prediction. At the same time, the multi-depth fusion module in the encoder can enhance the ability of the network to extract features. The accuracy of the model was evaluated online using the multispectral MRI records of the BraTS 2018, BraTS 2019, and BraTS 2020 datasets. This method obtains high-quality segmentation results, with an average Dice as high as 78%. The experimental results show that this model has great potential for segmenting brain tumors automatically and accurately.

Journal article

Yang G, Zhang H, Firmin D, Li S et al., 2021, Recent advances in artificial intelligence for cardiac imaging, COMPUTERIZED MEDICAL IMAGING AND GRAPHICS, Vol: 90, ISSN: 0895-6111

Journal article

Kuang M, Wu Y, Alonso-Álvarez D, Firmin D, Keegan J, Gatehouse P, Yang G et al., 2021, Three-dimensional embedded attentive RNN (3D-EAR) segmentor for left ventricle delineation from myocardial velocity mapping, Publisher: arXiv

Myocardial Velocity Mapping Cardiac MR (MVM-CMR) can be used to measure global and regional myocardial velocities with proven reproducibility. Accurate left ventricle delineation is a prerequisite for robust and reproducible myocardial velocity estimation. Conventional manual segmentation on this dataset can be time-consuming and subjective, and an effective fully automated delineation method is highly in demand. By leveraging recently proposed deep learning-based semantic segmentation approaches, in this study, we propose a novel fully automated framework incorporating a 3D-UNet backbone architecture with an Embedded multichannel Attention mechanism and LSTM-based Recurrent Neural Networks (RNN) for the MVM-CMR datasets (dubbed the 3D-EAR segmentor). The proposed method also utilises the amalgamation of magnitude and phase images as input to realise an information fusion of this multichannel dataset, exploring the correlations of temporal frames via the embedded RNN. By comparing against the baseline 3D-UNet model and through ablation studies with and without embedded attentive LSTM modules and various loss functions, we demonstrate that the proposed model outperforms the state-of-the-art baseline models with significant improvement.

Conference paper

Roberts M, Driggs D, Thorpe M, Gilbey J, Yeung M, Ursprung S, Aviles-Rivero AI, Etmann C, McCague C, Beer L, Weir-McCall JR, Teng Z, Gkrania-Klotsas E, Ruggiero A, Korhonen A, Jefferson E, Ako E, Langs G, Gozaliasl G, Yang G, Prosch H, Preller J, Stanczuk J, Tang J, Hofmanninger J, Babar J, Sánchez LE, Thillai M, Gonzalez PM, Teare P, Zhu X, Patel M, Cafolla C, Azadbakht H, Jacob J, Lowe J, Zhang K, Bradley K, Wassin M, Holzer M, Ji K, Ortet MD, Ai T, Walton N, Lio P, Stranks S, Shadbahr T, Lin W, Zha Y, Niu Z, Rudd JHF, Sala E, Schönlieb CBet al., 2021, Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans, Nature Machine Intelligence, Vol: 3, Pages: 199-217

Machine learning methods offer great promise for fast and accurate detection and prognostication of coronavirus disease 2019 (COVID-19) from standard-of-care chest radiographs (CXR) and chest computed tomography (CT) images. Many articles have been published in 2020 describing new machine learning-based models for both of these tasks, but it is unclear which are of potential clinical utility. In this systematic review, we consider all published papers and preprints, for the period from 1 January 2020 to 3 October 2020, which describe new machine learning models for the diagnosis or prognosis of COVID-19 from CXR or CT images. All manuscripts uploaded to bioRxiv, medRxiv and arXiv along with all entries in EMBASE and MEDLINE in this timeframe are considered. Our search identified 2,212 studies, of which 415 were included after initial screening and, after quality screening, 62 studies were included in this systematic review. Our review finds that none of the models identified are of potential clinical use due to methodological flaws and/or underlying biases. This is a major weakness, given the urgency with which validated COVID-19 models are needed. To address this, we give many recommendations which, if followed, will solve these issues and lead to higher-quality model development and well-documented manuscripts.

Journal article

Wang C, Yang G, Papanastasiou G, Tsaftaris S, Newby D, Gray C, Macnaught G, MacGillivray Tet al., 2021, DiCyc: GAN-based deformation invariant cross-domain information fusion for medical image synthesis, Information Fusion, Vol: 67, Pages: 147-160, ISSN: 1566-2535

Cycle-consistent generative adversarial network (CycleGAN) has been widely used for cross-domain medical image synthesis tasks, particularly due to its ability to deal with unpaired data. However, most CycleGAN-based synthesis methods cannot achieve good alignment between the synthesized images and data from the source domain, even with additional image alignment losses. This is because the CycleGAN generator network can encode the relative deformations and noise associated with different domains. This can be detrimental for downstream applications that rely on the synthesized images, such as generating pseudo-CT for PET-MR attenuation correction. In this paper, we present a deformation-invariant cycle-consistency model that can filter out these domain-specific deformations. The deformation is globally parameterized by a thin-plate spline (TPS) and locally learned by modified deformable convolutional layers. Robustness to domain-specific deformations has been evaluated through experiments on multi-sequence brain MR data and multi-modality abdominal CT and MR data. Experimental results demonstrate that our method achieves better alignment between the source and target data while maintaining superior image quality, compared with several state-of-the-art CycleGAN-based methods.

Journal article
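
The DiCyc abstract above states that the global deformation is parameterized by a thin-plate spline (TPS). As a rough illustration of how a 2-D TPS interpolates control-point displacements, here is a minimal NumPy sketch; the function names and the direct linear solve are illustrative, not the paper's implementation:

```python
import numpy as np

def tps_kernel(r2):
    # TPS radial basis U(r) = r^2 * log(r^2), with U(0) = 0 by convention
    out = np.zeros_like(r2)
    mask = r2 > 0
    out[mask] = r2[mask] * np.log(r2[mask])
    return out

def fit_tps(src, dst):
    """Solve the TPS interpolation system mapping 2-D control points src -> dst."""
    n = src.shape[0]
    d2 = np.sum((src[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((n, 1)), src])  # affine part [1, x, y]
    A = np.zeros((n + 3, n + 3))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.zeros((n + 3, 2))
    b[:n] = dst
    return np.linalg.solve(A, b)  # n kernel weights + 3 affine terms, per axis

def tps_warp(pts, src, coeffs):
    """Apply a fitted TPS to arbitrary 2-D points."""
    n = src.shape[0]
    d2 = np.sum((pts[:, None, :] - src[None, :, :]) ** 2, axis=-1)
    K = tps_kernel(d2)
    P = np.hstack([np.ones((pts.shape[0], 1)), pts])
    return K @ coeffs[:n] + P @ coeffs[n:]
```

By construction the TPS interpolates the control points exactly while giving the smoothest (minimum bending energy) extension elsewhere, which is what makes it a convenient global deformation model.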

Wu Y, Hatipoglu S, Alonso-Álvarez D, Gatehouse P, Li B, Gao Y, Firmin D, Keegan J, Yang Get al., 2021, Fast and automated segmentation for the three-directional multi-slice cine myocardial velocity mapping, Diagnostics, Vol: 11, ISSN: 2075-4418

Three-directional cine multi-slice left ventricular myocardial velocity mapping (3Dir MVM) is a cardiac magnetic resonance (CMR) technique that allows the assessment of cardiac motion in three orthogonal directions. Accurate and reproducible delineation of the myocardium is crucial for accurate analysis of peak systolic and diastolic myocardial velocities. In addition to the conventionally available magnitude CMR data, 3Dir MVM also provides three orthogonal phase velocity mapping datasets, which are used to generate velocity maps. These velocity maps may also be used to facilitate and improve the myocardial delineation. Based on the success of deep learning in medical image processing, we propose a novel fast and automated framework that improves on standard U-Net-based methods for these CMR multi-channel data (magnitude and phase velocity mapping) by cross-channel fusion with an attention module and shape-information-based post-processing, to achieve accurate delineation of both epicardial and endocardial contours. To evaluate the results, we employ the widely used Dice scores and the quantification of myocardial longitudinal peak velocities. Our proposed network trained with multi-channel data shows superior performance compared to standard U-Net-based networks trained on single-channel data. The obtained results are promising and provide compelling evidence for the design and application of our multi-channel image analysis of the 3Dir MVM CMR data.

Journal article
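
Both segmentation papers in this list report Dice scores for evaluation. For reference, the Dice similarity coefficient between two binary masks can be computed as follows (a minimal sketch; the `eps` guard against empty masks is an illustrative choice, not taken from the papers):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary segmentation masks.

    Dice = 2 * |pred AND target| / (|pred| + |target|), in [0, 1].
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

A score of 1 indicates perfect overlap with the reference delineation; the reported 0.93±0.02 in the prostate study above therefore reflects near-complete agreement with manual contours.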

Wu Y, Hatipoglu S, Alonso-Álvarez D, Gatehouse P, Firmin D, Keegan J, Yang Get al., 2021, Automated multi-channel segmentation for the 4D myocardial velocity mapping cardiac MR, Medical Imaging 2021: Computer-Aided Diagnosis, Publisher: SPIE, Pages: 1-7

Four-dimensional (4D) left ventricular myocardial velocity mapping (MVM) is a cardiac magnetic resonance (CMR) technique that allows assessment of cardiac motion in three orthogonal directions. Accurate and reproducible delineation of the myocardium is crucial for accurate analysis of peak systolic and diastolic myocardial velocities. In addition to the conventionally available magnitude CMR data, 4D MVM also acquires three velocity-encoded phase datasets, which are used to generate velocity maps. These can be used to facilitate and improve myocardial delineation. Based on the success of deep learning in medical image processing, we propose a novel automated framework that improves on standard U-Net-based methods for these CMR multi-channel data (magnitude and phase) by cross-channel fusion with an attention module and shape-information-based post-processing, to achieve accurate delineation of both epicardial and endocardial contours. To evaluate the results, we employ the widely used Dice scores and the quantification of myocardial longitudinal peak velocities. Our proposed network trained with multi-channel data shows enhanced performance compared to standard U-Net-based networks trained with single-channel data. Based on the results, our method provides compelling evidence for the design and application of the multi-channel image analysis of the 4D MVM CMR data.

Conference paper
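
The abstract above mentions cross-channel fusion with an attention module over the magnitude and phase channels; since the exact module design is not given here, the following squeeze-and-excitation-style channel gate is only an assumed illustration of channel-wise attention over a multi-channel input (the weights `w1`, `w2` are hypothetical learned parameters):

```python
import numpy as np

def channel_attention(x, w1, w2):
    """Squeeze-and-excitation-style gate: reweight input channels.

    x: (C, H, W) multi-channel input, e.g. magnitude plus phase velocity maps.
    w1: (C_reduced, C) and w2: (C, C_reduced) gating weights (illustrative).
    """
    # Squeeze: global average pool each channel to a single descriptor
    z = x.mean(axis=(1, 2))                                      # (C,)
    # Excitation: bottleneck MLP with ReLU, then sigmoid gate in (0, 1)
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0.0))))    # (C,)
    # Reweight: scale each channel by its learned importance
    return x * s[:, None, None]
```

The idea is that informative channels (e.g. a phase direction with strong myocardial contrast) receive gates near 1 while less useful channels are suppressed before fusion.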

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
