Imperial College London

Dr Guang Yang

Faculty of Engineering, Department of Bioengineering

Senior Lecturer

Contact

 

g.yang


Location

 

229 Sir Michael Uren Hub, White City Campus


Publications


272 results found

Shi Z, Jiang M, Li Y, Wei B, Wang Z, Wu Y, Tan T, Yang G et al., 2024, MLC: Multi-level consistency learning for semi-supervised left atrium segmentation, Expert Systems with Applications, Vol: 244, ISSN: 0957-4174

Atrial fibrillation is the most common type of arrhythmia and is associated with a high mortality rate. Left atrium segmentation is crucial for the diagnosis and treatment of atrial fibrillation. Accurate left atrium segmentation with limited labeled data is a challenging problem. In this paper, a novel multi-level consistency semi-supervised learning method is proposed for left atrium segmentation from 3D magnetic resonance images. The proposed framework can efficiently utilize limited labeled data and large amounts of unlabeled data by enforcing consistent predictions under task-level, data-level, and feature-level perturbations. For task-level consistency, the segmentation results and signed distance maps are used for the segmentation and distance estimation tasks. For data-level perturbation, random flips (horizontal or vertical) are introduced for unlabeled data. Moreover, based on virtual adversarial training, we design a multi-layer feature perturbation within the skip-connection structure. Our method is evaluated on the publicly available 2018 Left Atrium Segmentation Challenge dataset. For the model trained with a label rate of 20%, the evaluation metrics Dice, Jaccard, ASD, and 95HD are 91.69%, 84.71%, 1.43 voxels, and 5.44 voxels, respectively. The experimental results show that the proposed method outperforms other semi-supervised learning methods and even achieves better performance than the fully supervised V-Net.
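The data-level flip consistency described above can be sketched in a few lines of NumPy. This is an illustrative toy, not the authors' code: `model` is a hypothetical stand-in for a segmentation network (a deliberately asymmetric filter), so the consistency loss is non-zero and would act as a training penalty on unlabeled data.

```python
import numpy as np

def model(x):
    # Stand-in for a segmentation network (hypothetical): an asymmetric
    # local filter, so it is deliberately NOT flip-equivariant.
    return 0.7 * x + 0.3 * np.roll(x, 1, axis=-1)

def flip_consistency_loss(x):
    """Data-level consistency under a horizontal flip: the prediction on
    the flipped input should match the flipped prediction on the original."""
    p = model(x)                      # predict on the original input
    p_flip = model(x[..., ::-1])      # predict on the flipped input
    return float(np.mean((p[..., ::-1] - p_flip) ** 2))

rng = np.random.default_rng(0)
x = rng.random((1, 8, 8))
loss = flip_consistency_loss(x)       # > 0: this model breaks flip symmetry
```

Minimising such a loss over unlabeled images pushes the network toward flip-equivariant predictions without needing labels.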

Journal article

Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH et al., 2024, US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation., Comput Biol Med, Vol: 172

Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.

Journal article

Fang Y, Xing X, Wang S, Walsh S, Yang G et al., 2024, Post-COVID highlights: Challenges and solutions of artificial intelligence techniques for swift identification of COVID-19., Curr Opin Struct Biol, Vol: 85

Since the onset of the COVID-19 pandemic in 2019, there has been a concerted effort to develop cost-effective, non-invasive, and rapid AI-based tools. These tools were intended to alleviate the burden on healthcare systems, control the rapid spread of the virus, and enhance intervention outcomes, all in response to this unprecedented global crisis. As we transition into a post-COVID era, we retrospectively evaluate these proposed studies and offer a review of the techniques employed in AI diagnostic models, with a focus on the solutions proposed for different challenges. This review endeavors to provide insights into the diverse solutions designed to address the multifaceted challenges that arose during the pandemic. By doing so, we aim to prepare the AI community for the development of AI tools tailored to address public health emergencies effectively.

Journal article

Tang B, Niu Z, Wang X, Huang J, Ma C, Peng J, Jiang Y, Ge R, Hu H, Lin L, Yang G et al., 2024, Automated molecular structure segmentation from documents using ChemSAM., J Cheminform, Vol: 16, ISSN: 1758-2946

Chemical structure segmentation constitutes a pivotal task in cheminformatics, involving the extraction and abstraction of structural information of chemical compounds from text-based sources, including patents and scientific articles. This study introduces a deep learning approach to chemical structure segmentation, employing a Vision Transformer (ViT) to discern the structural patterns of chemical compounds from their graphical representations. The Chemistry-Segment Anything Model (ChemSAM) achieves state-of-the-art results on publicly available benchmark datasets and real-world tasks, underscoring its effectiveness in accurately segmenting chemical structures from text-based sources. Moreover, this deep learning-based approach obviates the need for handcrafted features and demonstrates robustness against variations in image quality and style. During the detection phase, a ViT-based encoder-decoder model is used to identify and locate chemical structure depictions on the input page. This model generates masks to ascertain whether each pixel belongs to a chemical structure, thereby offering a pixel-level classification and indicating the presence or absence of chemical structures at each position. Subsequently, the generated masks are clustered based on their connectivity, and each mask cluster is updated to encapsulate a single structure in the post-processing workflow. This two-step process facilitates the effective automatic extraction of chemical structure depictions from documents. By utilizing the deep learning approach described herein, it is demonstrated that effective performance on low-resolution and densely arranged molecular structural layouts in journal articles and patents is achievable.
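The connectivity-based mask clustering in the post-processing step above can be illustrated with a plain connected-component pass over a binary mask. This is a hypothetical simplification of ChemSAM's actual pipeline, shown only to make the "cluster masks by connectivity" idea concrete:

```python
import numpy as np

def connected_components(mask):
    """Group mask pixels into 4-connected clusters: the post-processing
    idea that turns a pixel-level mask into one region per structure."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for seed in zip(*np.nonzero(mask)):
        if labels[seed]:
            continue                      # pixel already assigned
        current += 1
        stack = [seed]                    # depth-first flood fill
        while stack:
            r, c = stack.pop()
            if labels[r, c] or not mask[r, c]:
                continue
            labels[r, c] = current
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < mask.shape[0] and 0 <= cc < mask.shape[1]:
                    stack.append((rr, cc))
    return labels, current

mask = np.array([[1, 1, 0, 0],
                 [0, 1, 0, 1],
                 [0, 0, 0, 1]])
labels, n = connected_components(mask)    # two separate "structures"
```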

Journal article

Huang J, Ferreira P, Wang L, Wu Y, Aviles-Rivero A, Schonlieb C-B, Scott A, Khalique Z, Dwornik D, Rajakulasingam R, De Silva R, Pennell D, Nielles-Vallespin S, Yang G et al., 2024, Deep Learning-based Diffusion Tensor Cardiac Magnetic Resonance Reconstruction: A Comparison Study, Scientific Reports, ISSN: 2045-2322

Journal article

Zhao X, Wang T, Chen J, Jiang B, Li H, Zhang N, Yang G, Chai S et al., 2024, GLRP: Global and local contrastive learning based on relative position for medical image segmentation on cardiac MRI, International Journal of Imaging Systems and Technology, Vol: 34, ISSN: 0899-9457

Contrastive learning, as an unsupervised technique, is widely employed in image segmentation to enhance segmentation performance even when working with small labeled datasets. However, generating positive and negative data pairs for medical image segmentation poses a challenge due to the presence of similar tissues and organs across different slices in datasets. To tackle this issue, we propose a novel contrastive learning strategy that leverages the relative position differences between image slices. Additionally, we combine global and local features to address this problem effectively. In order to enhance segmentation accuracy and reduce isolated mis-segmented regions, we employ a two-dimensional fully connected conditional random field for iterative optimization of the segmentation results. With only 10 labeled samples, our proposed method is able to achieve average dice scores of 0.876 and 0.899 on the public and private dataset heart segmentation tasks, surpassing the PCL method's 0.801 and 0.852. Experimental results on both public and private MRI datasets demonstrate that our proposed method yields significant improvements in medical segmentation tasks with limited annotated samples, outperforming existing semi-supervised and self-supervised techniques.
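The position-based pairing strategy above can be sketched with a standard InfoNCE-style contrastive loss, where the positive is the candidate chosen by slice-position proximity. This is an illustrative stand-in, not the paper's implementation; the embeddings and the choice of positive are hypothetical:

```python
import numpy as np

def info_nce(anchor, candidates, pos_idx, tau=0.1):
    """Contrastive loss for one anchor embedding: the candidate at
    pos_idx is the positive (here assumed chosen by slice-position
    proximity); all other candidates act as negatives."""
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)
    sims = unit(candidates) @ unit(anchor)   # cosine similarities
    logits = sims / tau
    logits -= logits.max()                   # numerical stability
    probs = np.exp(logits) / np.exp(logits).sum()
    return -np.log(probs[pos_idx])

anchor = np.array([1.0, 0.0])
candidates = np.array([[0.9, 0.1],    # nearby slice -> the positive
                       [-1.0, 0.2],   # distant slice -> negative
                       [0.0, 1.0]])   # distant slice -> negative
loss = info_nce(anchor, candidates, pos_idx=0)   # small: positive is close
```

Minimising this pulls embeddings of positionally close slices together while pushing distant slices apart.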

Journal article

Papanastasiou G, Dikaios N, Huang J, Wang C, Yang G et al., 2024, Is Attention all You Need in Medical Image Analysis? A Review., IEEE J Biomed Health Inform, Vol: 28, Pages: 1398-1411

Medical imaging is a key component in clinical diagnosis, treatment planning and clinical trial design, accounting for almost 90% of all healthcare data. CNNs have achieved substantial performance gains in medical image analysis (MIA) in recent years. CNNs can efficiently model local pixel interactions and can be trained on small-scale MI data. Despite these important advances, typical CNNs have relatively limited capabilities in modelling "global" pixel interactions, which restricts their generalisation ability to understand out-of-distribution data with different "global" information. The recent progress of Artificial Intelligence gave rise to Transformers, which can learn global relationships from data. However, full Transformer models need to be trained on large-scale data and involve tremendous computational complexity. Attention and Transformer compartments ("Transf/Attention"), which can well maintain properties for modelling global relationships, have been proposed as lighter alternatives to full Transformers. Recently, there has been an increasing trend to cross-pollinate complementary local-global properties from CNN and Transf/Attention architectures, which has led to a new era of hybrid models. The past years have witnessed substantial growth in hybrid CNN-Transf/Attention models across diverse MIA problems. In this systematic review, we survey existing hybrid CNN-Transf/Attention models, review and unravel key architectural designs, analyse breakthroughs, and evaluate current and future opportunities as well as challenges. We also introduce an analysis framework on generalisation opportunities of scientific and clinical impact, based on which new data-driven domain generalisation and adaptation methods can be stimulated.
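The local-versus-global distinction the review draws can be made concrete with a toy hybrid block: a small convolution models neighbourhood interactions, a single-head self-attention models all-pairs interactions, and the two are fused residually. This is illustrative only; real hybrid models use learned projections and many layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def local_conv(x, kernel):
    # CNN-style local interaction: each token mixes only its neighbours.
    pad = len(kernel) // 2
    xp = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([sum(k * xp[i + j] for j, k in enumerate(kernel))
                     for i in range(len(x))])

def global_attention(x):
    # Transformer-style global interaction: each token attends to all tokens.
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores) @ x

tokens = rng.normal(size=(6, 4))                  # 6 tokens, 4 features
local = local_conv(tokens, [0.25, 0.5, 0.25])     # local context
hybrid = local + global_attention(local)          # residual local+global fusion
```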

Journal article

Lu S, Yan Z, Chen W, Cheng T, Zhang Z, Yang G et al., 2024, Dual consistency regularization with subjective logic for semi-supervised medical image segmentation., Comput Biol Med, Vol: 170

Semi-supervised learning plays a vital role in computer vision tasks, particularly in medical image analysis, as it significantly reduces the time and cost involved in labeling data. Current methods primarily focus on consistency regularization and the generation of pseudo labels. However, due to the model's poor awareness of unlabeled data, these methods may misguide the model. To alleviate this problem, we propose a dual consistency regularization with subjective logic for semi-supervised medical image segmentation. Specifically, we introduce subjective logic into our semi-supervised medical image segmentation task to estimate uncertainty, and, based on the consistency hypothesis, we construct dual consistency regularization under weak and strong perturbations to guide the model's learning from unlabeled data. To evaluate the performance of the proposed method, we performed experiments on three widely used datasets: ACDC, LA, and Pancreas. Experiments show that the proposed method achieved improvements over other state-of-the-art (SOTA) methods.

Journal article

Wu Y, Jewell S, Xing X, Nan Y, Strong AJ, Yang G, Boutelle MG et al., 2024, Real-time non-invasive imaging and detection of spreading depolarizations through EEG: an ultra-light explainable deep learning approach, IEEE Journal of Biomedical and Health Informatics, Pages: 1-12, ISSN: 2168-2208

A core aim of neurocritical care is to prevent secondary brain injury. Spreading depolarizations (SDs) have been identified as an important independent cause of secondary brain injury. SDs are usually detected using invasive electrocorticography recorded at high sampling frequency. Recent pilot studies suggest a possible utility of electroencephalograms (EEG) recorded from scalp electrodes for non-invasive SD detection. However, the noise and attenuation of EEG signals make this detection task extremely challenging. Previous methods focus on detecting temporal power changes of EEG over a fixed high-density map of scalp electrodes, which is not always clinically feasible. Using a specialized spectrogram as the input to the automatic SD detection model, this study is the first to transform the SD identification problem from a detection task on a 1-D time-series wave to a task on sequential 2-D rendered images. This study presents a novel ultra-light-weight multi-modal deep-learning network that fuses EEG spectrogram images and temporal power vectors to enhance SD identification accuracy over each single electrode, allowing a flexible EEG map and paving the way for SD detection on ultra-low-density EEG with variable electrode positioning. Our proposed model has an ultra-fast processing speed (<0.3 s). Compared to conventional methods (2 hours), this is a major advance towards early SD detection and instant brain injury prognosis. Seeing SDs along a new dimension, frequency, on spectrograms, we demonstrated that this additional dimension could improve SD detection accuracy, providing preliminary evidence to support the hypothesis that SDs may show implicit features over the frequency profile.
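The spectrogram input described above is, in essence, a magnitude short-time Fourier transform of the EEG trace. A minimal NumPy sketch follows; the window and hop sizes and the sampling rate are assumptions for illustration, not the paper's settings:

```python
import numpy as np

def spectrogram(sig, win=64, hop=32):
    """Magnitude short-time Fourier transform: renders a 1-D trace as a
    2-D time-frequency image (rows = frequency bins, columns = frames)."""
    frames = [sig[i:i + win] * np.hanning(win)          # windowed segments
              for i in range(0, len(sig) - win + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1)).T

fs = 256                                 # assumed sampling rate (Hz)
t = np.arange(2 * fs) / fs
sig = np.sin(2 * np.pi * 16 * t)         # 16 Hz test tone
S = spectrogram(sig)                     # frequency resolution fs/win = 4 Hz
```

For this pure 16 Hz tone, the energy concentrates in frequency bin 4 (16 Hz / 4 Hz per bin) across all time frames.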

Journal article

Liu Y, Shah P, Yu Y, Horsey J, Ouyang J, Jiang B, Yang G, Heit JJ, McCullough-Hicks ME, Hugdal SM, Wintermark M, Michel P, Liebeskind DS, Lansberg MG, Albers GW, Zaharchuk G et al., 2024, A Clinical and Imaging Fused Deep Learning Model Matches Expert Clinician Prediction of 90-Day Stroke Outcomes., AJNR Am J Neuroradiol

BACKGROUND AND PURPOSE: Predicting long-term clinical outcome in acute ischemic stroke is beneficial for prognosis, clinical trial design, resource management, and patient expectations. This study used a deep learning-based predictive model (DLPD) to predict 90-day mRS outcomes and compared its predictions with those made by physicians. MATERIALS AND METHODS: A previously developed DLPD that incorporated DWI and clinical data from the acute period was used to predict 90-day mRS outcomes in 80 consecutive patients with acute ischemic stroke from a single-center registry. We assessed the predictions of the model alongside those of 5 physicians (2 stroke neurologists and 3 neuroradiologists provided with the same imaging and clinical information). The primary analysis was the agreement between the ordinal mRS predictions of the model or physician and the ground truth using the Gwet Agreement Coefficient. We also evaluated the ability to identify unfavorable outcomes (mRS >2) using the area under the curve, sensitivity, and specificity. Noninferiority analyses were undertaken using limits of 0.1 for the Gwet Agreement Coefficient and 0.05 for the area under the curve analysis. The accuracy of prediction was also assessed using the mean absolute error for prediction, percentage of predictions ±1 categories away from the ground truth (±1 accuracy [ACC]), and percentage of exact predictions (ACC). RESULTS: To predict the specific mRS score, the DLPD yielded a Gwet Agreement Coefficient score of 0.79 (95% CI, 0.71-0.86), surpassing the physicians' score of 0.76 (95% CI, 0.67-0.84), and was noninferior to the readers (P < .001). For identifying unfavorable outcome, the model achieved an area under the curve of 0.81 (95% CI, 0.72-0.89), again noninferior to the readers' area under the curve of 0.79 (95% CI, 0.69-0.87) (P < .005). The mean absolute error, ±1ACC, and ACC were 0.89, 81%, and 36% for the DLPD. CONCL

Journal article

Ding W, Sun Y, Huang J, Ju H, Zhang C, Yang G, Lin CT et al., 2024, RCAR-UNet: Retinal vessel segmentation network algorithm via novel rough attention mechanism, Information Sciences, Vol: 657, ISSN: 0020-0255

The health status of the retinal blood vessels is a significant reference for rapid and non-invasive diagnosis of various ophthalmological, diabetic, and cardio-cerebrovascular diseases. However, retinal vessels are characterized by ambiguous boundaries, multiple thicknesses, and obscured lesion areas. These phenomena cause deep neural networks to face uncertainty in feature channels when segmenting retinal blood vessels. This channel uncertainty affects the channel attention coefficients, making a deep neural network incapable of attending to the detailed features of retinal vessels. This study proposes a retinal vessel segmentation method based on a rough channel attention mechanism. First, the method integrates deep neural networks, to learn complex features, with rough sets, to handle uncertainty, in the design of rough neurons. Second, a rough channel attention module is constructed from rough neurons and embedded in the U-Net skip connections to integrate high-level and low-level features. Then, residual connections are added to transmit low-level features to the high-level layers, enriching feature extraction and helping back-propagate the gradient when training the model. Finally, multiple comparison experiments were carried out on three public fundus retinal image datasets to verify the validity of the Rough Channel Attention Residual U-Net (RCAR-UNet) model. The results show that the RCAR-UNet model achieves superior accuracy, sensitivity, F1, and Jaccard similarity, especially for the precise segmentation of fragile blood vessels, guaranteeing vessel continuity.
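Channel attention of the kind described above builds on the squeeze-excitation pattern: squeeze each channel to a scalar, pass it through a small bottleneck, and rescale the channels with the resulting weights. A plain NumPy sketch with random weights (illustrative only; the rough-neuron uncertainty handling of RCAR-UNet is omitted):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def squeeze_excitation(F, W1, W2):
    """Squeeze-excitation style channel attention: global-average
    'squeeze' per channel, bottleneck 'excitation', channel rescaling."""
    s = F.mean(axis=(1, 2))                  # squeeze: one scalar per channel
    w = sigmoid(np.maximum(s @ W1, 0) @ W2)  # excitation: weights in (0, 1)
    return F * w[:, None, None], w           # rescale each channel

rng = np.random.default_rng(1)
C = 8
F = rng.normal(size=(C, 16, 16))             # feature map: C channels
W1 = rng.normal(size=(C, C // 2))            # bottleneck down-projection
W2 = rng.normal(size=(C // 2, C))            # bottleneck up-projection
F_out, w = squeeze_excitation(F, W1, W2)
```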

Journal article

Dayarathna S, Islam KT, Uribe S, Yang G, Hayat M, Chen Z et al., 2024, Deep learning based synthesis of MRI, CT and PET: Review and analysis., Med Image Anal, Vol: 92

Medical image synthesis represents a critical area of research in clinical decision-making, aiming to overcome the challenges associated with acquiring multiple image modalities for an accurate clinical workflow. This approach proves beneficial in estimating an image of a desired modality from a given source modality among the most common medical imaging contrasts, such as Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and Positron Emission Tomography (PET). However, translating between two image modalities presents difficulties due to the complex and non-linear domain mappings. Deep learning-based generative modelling has exhibited superior performance in synthetic image contrast applications compared to conventional image synthesis methods. This survey comprehensively reviews deep learning-based medical imaging translation from 2018 to 2023 on pseudo-CT, synthetic MR, and synthetic PET. We provide an overview of synthetic contrasts in medical imaging and the most frequently employed deep learning networks for medical image synthesis. Additionally, we conduct a detailed analysis of each synthesis method, focusing on their diverse model designs based on input domains and network architectures. We also analyse novel network architectures, ranging from conventional CNNs to the recent Transformer and Diffusion models. This analysis includes comparing loss functions, available datasets and anatomical regions, and image quality assessments and performance in other downstream tasks. Finally, we discuss the challenges and identify solutions within the literature, suggesting possible future directions. We hope that the insights offered in this survey paper will serve as a valuable roadmap for researchers in the field of medical image synthesis.

Journal article

Tänzer M, Wang F, Qiao M, Bai W, Rueckert D, Yang G, Nielles-Vallespin S et al., 2024, T1/T2 Relaxation Temporal Modelling from Accelerated Acquisitions Using a Latent Transformer, Pages: 293-302, ISBN: 9783031524479

Quantitative cardiac magnetic resonance T1 and T2 mapping enable myocardial tissue characterisation but the lengthy scan times restrict their widespread clinical application. We propose a deep learning method that incorporates a time dependency Latent Transformer module to model relationships between parameterised time frames for improved reconstruction from undersampled data. The module, implemented as a multi-resolution sequence-to-sequence transformer, is integrated into an encoder-decoder architecture to leverage the inherent temporal correlations in relaxation processes. The presented results for accelerated T1 and T2 mapping show the model recovers maps with higher fidelity by explicit incorporation of time dynamics. This work demonstrates the importance of temporal modelling for artifact-free reconstruction in quantitative MRI.

Book chapter

Rani S, Jain A, Kumar A, Yang G et al., 2024, CCheXR-Attention: Clinical concept extraction and chest x-ray reports classification using modified Mogrifier and bidirectional LSTM with multihead attention, International Journal of Imaging Systems and Technology, Vol: 34, ISSN: 0899-9457

Radiology reports cover different aspects from radiological observation to the diagnosis of an imaging examination, such as x-rays, magnetic resonance imaging, and computed tomography scans. Abundant patient information presented in radiology reports poses a few major challenges. First, radiology reports follow a free-text reporting format, which causes the loss of a large amount of information in unstructured text. Second, the extraction of important features from these reports is a huge bottleneck for machine learning models. These challenges are important, particularly the extraction of key features such as symptoms, comparison/priors, technique, finding, and impression because they facilitate the decision-making on patients' health. To alleviate this issue, a novel architecture CCheXR-Attention is proposed to extract the clinical features from the radiological reports and classify each report into normal and abnormal categories based on the extracted information. We have proposed a modified Mogrifier long short-term memory model and integrated a multihead attention method to extract the more relevant features. Experimental outcomes on two benchmark datasets demonstrated that the proposed model surpassed state-of-the-art models.
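The Mogrifier modification referenced above interleaves mutual gating between the input and the previous hidden state before the usual LSTM update. A minimal NumPy sketch of that gating step; the shapes and round count are illustrative, and (unlike the original Mogrifier) the gate matrices are shared across rounds for brevity:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mogrify(x, h, Q, R, rounds=4):
    """Mogrifier-style interaction: input x and previous hidden state h
    alternately gate each other; the factor 2*sigmoid(.) keeps the
    expected scale of the gated vectors near 1."""
    for i in range(1, rounds + 1):
        if i % 2:                 # odd round: h modulates x
            x = 2 * sigmoid(Q @ h) * x
        else:                     # even round: x modulates h
            h = 2 * sigmoid(R @ x) * h
    return x, h                   # then fed into the standard LSTM cell

rng = np.random.default_rng(0)
dx, dh = 5, 3
x, h = rng.normal(size=dx), rng.normal(size=dh)
Q, R = rng.normal(size=(dx, dh)), rng.normal(size=(dh, dx))
x2, h2 = mogrify(x, h, Q, R)
```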

Journal article

Tänzer M, Ferreira P, Scott A, Khalique Z, Dwornik M, Rajakulasingam R, de Silva R, Pennell D, Yang G, Rueckert D, Nielles-Vallespin S et al., 2024, Correction to: Faster Diffusion Cardiac MRI with Deep Learning-Based Breath Hold Reduction, Medical Image Understanding and Analysis, Publisher: Springer International Publishing, Pages: C1-C1, ISBN: 9783031120527

Book chapter

Papanastasiou G, Yang G, Fotiadis DI, Dikaios N, Wang C, Huda A, Sobolevsky L, Raasch J, Perez E, Sidhu G, Palumbo D et al., 2023, Large-scale deep learning analysis to identify adult patients at risk for combined and common variable immunodeficiencies., Commun Med (Lond), Vol: 3

BACKGROUND: Primary immunodeficiency (PI) is a group of heterogeneous disorders resulting from immune system defects. Over 70% of PI is undiagnosed, leading to increased mortality, co-morbidity and healthcare costs. Among PI disorders, combined immunodeficiencies (CID) are characterized by complex immune defects. Common variable immunodeficiency (CVID) is among the most common types of PI. In light of available treatments, it is critical to identify adult patients at risk for CID and CVID, before the development of serious morbidity and mortality. METHODS: We developed a deep learning-based method (named "TabMLPNet") to analyze clinical history from nationally representative medical claims from electronic health records (Optum® data, covering all US), evaluated in the setting of identifying CID/CVID in adults. Further, we revealed the most important CID/CVID-associated antecedent phenotype combinations. Four large cohorts were generated: a total of 47,660 PI cases and (1:1 matched) controls. RESULTS: The sensitivity/specificity of TabMLPNet modeling ranges from 0.82-0.88/0.82-0.85 across cohorts. Distinctive combinations of antecedent phenotypes associated with CID/CVID are identified, consisting of respiratory infections/conditions, genetic anomalies, cardiac defects, autoimmune diseases, blood disorders and malignancies, which can possibly be useful to systematize the identification of CID and CVID. CONCLUSIONS: We demonstrated an accurate method in terms of CID and CVID detection evaluated on large-scale medical claims data. Our predictive scheme can potentially lead to the development of new clinical insights and expanded guidelines for identification of adult patients at risk for CID and CVID as well as be used to improve patient outcomes on population level.

Journal article

Luo H, Gong Y, Chen S, Yu C, Yang G, Yu F, Hu Z, Tian X et al., 2023, Prediction of Global Ionospheric Total Electron Content (TEC) Based on SAM-ConvLSTM Model, Space Weather, Vol: 21

This paper is the first to apply a prediction model based on self-attention memory ConvLSTM (SAM-ConvLSTM) to predict global ionospheric total electron content (TEC) maps with up to 1 day of lead time. We choose the global ionospheric TEC maps released by the Center for Orbit Determination in Europe (CODE) as the training data set, covering the period from 1999 to 2022. In addition, we feed several space-environment variables into the prediction framework as extra multivariate features to enhance its forecasting ability. To confirm the efficiency of the proposed model, two other prediction models based on convolutional long short-term memory (LSTM) are used for comparison. The three models are trained and evaluated on the same data set. Results show that the proposed SAM-ConvLSTM prediction model performs more accurately than the other two models, and more stably under space weather events. To assess the generalization capabilities of the proposed model amidst severe space weather occurrences, we selected the period of 22-25 April 2023, characterized by a potent geomagnetic storm, for experimental validation. We then used the 1-day predicted global TEC products from the Center for Operational Products and Services (COPG) and the SAM-ConvLSTM model to evaluate their respective forecasting performance. The results show that the SAM-ConvLSTM prediction model achieves lower prediction error. In summary, the ionospheric TEC prediction model proposed in this paper can capture long-range spatio-temporal associations in TEC data and achieve high prediction accuracy.

Journal article

Zhao B, Jin W, Del Ser J, Yang G et al., 2023, ChatAgri: Exploring potentials of ChatGPT on cross-linguistic agricultural text classification, Neurocomputing, Vol: 557, ISSN: 0925-2312

Journal article

Zhao B, Jin W, Zhang Y, Huang S, Yang G et al., 2023, Prompt learning for metonymy resolution: Enhancing performance with internal prior knowledge of pre-trained language models, Knowledge-Based Systems, Vol: 279, ISSN: 0950-7051

Journal article

Zhou Z, Gao Y, Zhang W, Zhang N, Wang H, Wang R, Gao Z, Huang X, Zhou S, Dai X, Yang G, Zhang H, Nieman K, Xu L et al., 2023, Deep Learning-based Prediction of Percutaneous Recanalization in Chronic Total Occlusion Using Coronary CT Angiography., Radiology, Vol: 309

BACKGROUND: CT is helpful in guiding the revascularization of chronic total occlusion (CTO), but manual prediction scores of percutaneous coronary intervention (PCI) success have challenges. Deep learning (DL) is expected to predict success of PCI for CTO lesions more efficiently. PURPOSE: To develop a DL model to predict guidewire crossing and PCI outcomes for CTO using coronary CT angiography (CCTA) and evaluate its performance compared with manual prediction scores. MATERIALS AND METHODS: Participants with CTO lesions were prospectively identified from one tertiary hospital between January 2018 and December 2021 as the training set to develop the DL prediction model for PCI of CTO, with fivefold cross validation. The algorithm was tested using an external test set prospectively enrolled from three tertiary hospitals between January 2021 and June 2022 with the same eligibility criteria. All participants underwent preprocedural CCTA within 1 month before PCI. The end points were guidewire crossing within 30 minutes and PCI success of CTO. RESULTS: A total of 534 participants (mean age, 57.7 years ± 10.8 [SD]; 417 [78.1%] men) with 565 CTO lesions were included. In the external test set (186 participants with 189 CTOs), the DL model saved 85.0% of the reconstruction and analysis time of manual scores (mean, 73.7 seconds vs 418.2-466.9 seconds) and had higher accuracy than manual scores in predicting guidewire crossing within 30 minutes (DL, 91.0%; CT Registry of Chronic Total Occlusion Revascularization, 61.9%; Korean Multicenter CTO CT Registry [KCCT], 68.3%; CCTA-derived Multicenter CTO Registry of Japan [J-CTO], 68.8%; P < .05) and PCI success (DL, 93.7%; KCCT, 74.6%; J-CTO, 75.1%; P < .05). For DL, the area under the receiver operating characteristic curve was 0.97 (95% CI: 0.89, 0.99) for the training test set and 0.96 (95% CI: 0.90, 0.98) for the external test set. CONCLUSION: The DL prediction model accurately predicted the pe

Journal article

Xing X, Del Ser J, Wu Y, Li Y, Xia J, Xu L, Firmin D, Gatehouse P, Yang G et al., 2023, HDL: hybrid deep learning for the synthesis of myocardial velocity maps in digital twins for cardiac analysis, IEEE Journal of Biomedical and Health Informatics, Vol: 27, Pages: 5134-5142, ISSN: 2168-2194

Synthetic digital twins based on medical data accelerate the acquisition, labelling and decision making procedure in digital healthcare. A core part of digital healthcare twins is model-based data synthesis, which permits the generation of realistic medical signals without requiring to cope with the modelling complexity of anatomical and biochemical phenomena producing them in reality. Unfortunately, algorithms for cardiac data synthesis have been so far scarcely studied in the literature. An important imaging modality in the cardiac examination is three-directional CINE multi-slice myocardial velocity mapping (3Dir MVM), which provides a quantitative assessment of cardiac motion in three orthogonal directions of the left ventricle. The long acquisition time and complex acquisition procedure make it more urgent to produce synthetic digital twins of this imaging modality. In this study, we propose a hybrid deep learning (HDL) network, especially for synthetic 3Dir MVM data. Our algorithm is featured by a hybrid UNet and a Generative Adversarial Network with a foreground-background generation scheme. The experimental results show that from temporally down-sampled magnitude CINE images (six times), our proposed algorithm can still successfully synthesise high temporal resolution 3Dir MVM CMR data (PSNR=42.32) with precise left ventricle segmentation (DICE=0.92). These performance scores indicate that our proposed HDL algorithm can be implemented in real-world digital twins for myocardial velocity mapping data simulation. To the best of our knowledge, this work is the first one in the literature investigating digital twins of the 3Dir MVM CMR, which has shown great potential for improving the efficiency of clinical studies via synthesised cardiac data.

Journal article

Li H, Nan Y, Del Ser J, Yang G et al., 2023, Region-based evidential deep learning to quantify uncertainty and improve robustness of brain tumor segmentation, Neural Computing and Applications, Vol: 35, Pages: 22071-22085, ISSN: 0941-0643

Despite recent advances in the accuracy of brain tumor segmentation, the results still suffer from low reliability and robustness. Uncertainty estimation is an efficient solution to this problem, as it provides a measure of confidence in the segmentation results. The current uncertainty estimation methods based on quantile regression, Bayesian neural network, ensemble, and Monte Carlo dropout are limited by their high computational cost and inconsistency. In order to overcome these challenges, Evidential Deep Learning (EDL) was developed in recent work but primarily for natural image classification and showed inferior segmentation results. In this paper, we proposed a region-based EDL segmentation framework that can generate reliable uncertainty maps and accurate segmentation results, which is robust to noise and image corruption. We used the Theory of Evidence to interpret the output of a neural network as evidence values gathered from input features. Following Subjective Logic, evidence was parameterized as a Dirichlet distribution, and predicted probabilities were treated as subjective opinions. To evaluate the performance of our model on segmentation and uncertainty estimation, we conducted quantitative and qualitative experiments on the BraTS 2020 dataset. The results demonstrated the top performance of the proposed method in quantifying segmentation uncertainty and robustly segmenting tumors. Furthermore, our proposed new framework maintained the advantages of low computational cost and easy implementation and showed the potential for clinical application.
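The subjective-logic parameterization described above maps non-negative per-class evidence to a Dirichlet distribution with a closed-form uncertainty mass. A minimal NumPy sketch using the standard evidential deep learning formulas (not the paper's code; the evidence values are made up for illustration):

```python
import numpy as np

def subjective_opinion(evidence):
    """Subjective-logic reading of per-class evidence e_k >= 0:
    alpha_k = e_k + 1 parameterises a Dirichlet; belief b_k = e_k / S and
    uncertainty u = K / S with S = sum(alpha), so sum(b) + u = 1."""
    e = np.asarray(evidence, dtype=float)
    alpha = e + 1.0
    S = alpha.sum()
    belief = e / S
    u = len(e) / S
    prob = alpha / S                  # expected class probabilities
    return belief, u, prob

b_hi, u_hi, _ = subjective_opinion([40.0, 1.0, 1.0])  # strong evidence
b_lo, u_lo, _ = subjective_opinion([0.5, 0.5, 0.5])   # weak evidence
```

More total evidence yields lower uncertainty mass, which is what lets such a model flag unreliable voxels in a segmentation map.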

Journal article

Wang C, Zhang H, Papanastasiou G, Yang G et al., 2023, Advances in machine learning methods facilitating collaborative image-based decision making for neuroscience, Frontiers in Computational Neuroscience, Vol: 17

Journal article

Li Y, Zhang Y, Liu J-Y, Wang K, Zhang K, Zhang G-S, Liao X-F, Yang G et al., 2023, Global transformer and dual local attention network via deep-shallow hierarchical feature fusion for retinal vessel segmentation, IEEE Transactions on Cybernetics, Vol: 53, Pages: 5826-5839, ISSN: 2168-2275

Clinically, retinal vessel segmentation is a significant step in the diagnosis of fundus diseases. However, recent methods generally neglect the difference of semantic information between deep and shallow features and fail to capture the global and local characterizations in fundus images simultaneously, resulting in limited segmentation performance for fine vessels. In this paper, a global transformer and dual local attention network via deep-shallow hierarchical feature fusion (GT-DLA-dsHFF) is investigated to address the above limitations. First, the global transformer (GT) is developed to integrate the global information in the retinal image, which effectively captures the long-distance dependence between pixels, alleviating the discontinuity of blood vessels in the segmentation results. Second, the dual local attention (DLA), which is constructed using dilated convolutions with varied dilation rates, unsupervised edge detection, and a squeeze-excitation block, is proposed to extract local vessel information, consolidating the edge details in the segmentation result. Finally, a novel deep-shallow hierarchical feature fusion (dsHFF) algorithm is studied to fuse features at different scales in the deep learning framework, mitigating the attenuation of valid information in the process of feature fusion. We verified GT-DLA-dsHFF on four typical fundus image datasets. The experimental results demonstrate that GT-DLA-dsHFF achieves superior performance against current methods, and detailed discussions verify the efficacy of the proposed three modules. Segmentation results on diseased images show the robustness of our proposed GT-DLA-dsHFF. Our codes will be available on https://github.com/YangLibuaa/GT-DLA-dsHFF.
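The reason varied dilation rates help capture fine vessels is that they grow the receptive field much faster than stacked plain convolutions. The standard receptive-field recurrence makes this concrete; the dilation rates below are hypothetical, chosen only for illustration.

```python
def receptive_field(layers):
    """Cumulative receptive field of a stack of convolutions, where
    each layer is (kernel_size, dilation, stride). Standard recurrence:
    rf += (kernel - 1) * dilation * jump; jump *= stride."""
    rf, jump = 1, 1
    for kernel, dilation, stride in layers:
        rf += (kernel - 1) * dilation * jump
        jump *= stride
    return rf

# Three 3x3, stride-1 convolutions with dilation rates 1, 2, 4:
print(receptive_field([(3, 1, 1), (3, 2, 1), (3, 4, 1)]))  # → 15

# The same three layers without dilation:
print(receptive_field([(3, 1, 1), (3, 1, 1), (3, 1, 1)]))  # → 7
```

With the same parameter count, the dilated stack sees a 15-pixel context instead of 7, which is why mixing dilation rates lets a local-attention branch consolidate both thin vessel edges and their wider surroundings.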

Journal article

Lu S, Zhang Z, Yan Z, Wang Y, Cheng T, Zhou R, Yang G et al., 2023, Mutually aided uncertainty incorporated dual consistency regularization with pseudo label for semi-supervised medical image segmentation, Neurocomputing, Vol: 548, ISSN: 0925-2312

Semi-supervised learning has contributed plenty to promoting computer vision tasks. Concerning medical images in particular, semi-supervised image segmentation can significantly reduce the labor and time cost of labeling images. Among the existing semi-supervised methods, pseudo-labelling and consistency regularization prevail; however, the current related methods still fall short of satisfactory results due to the poor quality of the generated pseudo-labels and the models' limited uncertainty awareness. To address this problem, we propose a novel method that combines pseudo-labelling with dual consistency regularization based on a high capability of uncertainty awareness. This method leverages a cycle-loss regularization to obtain a more accurate uncertainty estimate. Following the uncertainty estimation, the certain region with its pseudo-label is further trained in a supervised manner, while the uncertain region is used to promote the dual consistency between the student and teacher networks. The developed approach was tested on three public datasets and showed that: 1) the proposed method achieves excellent performance improvement by leveraging unlabeled data; 2) compared with several state-of-the-art (SOTA) semi-supervised segmentation methods, ours achieved better or comparable performance.
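The split between certain and uncertain regions described above can be sketched as a per-pixel gate on the training signal. This is a simplified toy version of the idea, not the paper's implementation: squared error stands in for the actual segmentation and consistency losses, and the threshold is a hypothetical choice.

```python
def uncertainty_gated_losses(student, teacher, pseudo_labels, uncertainty,
                             threshold=0.5):
    """Split the per-pixel training signal by estimated uncertainty:
    pixels below the threshold are supervised with their pseudo-labels,
    pixels above it contribute a student-teacher consistency term.
    All inputs are flat lists of per-pixel values."""
    supervised, consistency = 0.0, 0.0
    for s, t, y, u in zip(student, teacher, pseudo_labels, uncertainty):
        if u < threshold:                   # confident region
            supervised += (s - y) ** 2      # stand-in for a CE/Dice loss
        else:                               # uncertain region
            consistency += (s - t) ** 2     # student-teacher agreement
    return supervised, consistency
```

Gating this way keeps low-quality pseudo-labels out of the supervised term while the uncertain pixels still provide a useful consistency signal between the student and teacher networks.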

Journal article

Xing X, Papanastasiou G, Walsh S, Yang G et al., 2023, Less is more: unsupervised mask-guided annotated CT image synthesis with minimum manual segmentations, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 2566-2576, ISSN: 0278-0062

As a pragmatic data augmentation tool, data synthesis has generally returned dividends in performance for deep learning based medical image analysis. However, generating corresponding segmentation masks for synthetic medical images is laborious and subjective. To obtain paired synthetic medical images and segmentations, conditional generative models that use segmentation masks as synthesis conditions were proposed. However, these segmentation mask-conditioned generative models still relied on large, varied, and labeled training datasets, and they could only provide limited constraints on human anatomical structures, leading to unrealistic image features. Moreover, the invariant pixel-level conditions could reduce the variety of synthetic lesions and thus reduce the efficacy of data augmentation. To address these issues, in this work, we propose a novel strategy for medical image synthesis, namely Unsupervised Mask (UM)-guided synthesis, to obtain both synthetic images and segmentations using limited manual segmentation labels. We first develop a superpixel based algorithm to generate unsupervised structural guidance and then design a conditional generative model to synthesize images and annotations simultaneously from those unsupervised masks in a semi-supervised multi-task setting. In addition, we devise a multi-scale multi-task Fréchet Inception Distance (MM-FID) and multi-scale multi-task standard deviation (MM-STD) to harness both fidelity and variety evaluations of synthetic CT images. With multiple analyses on different scales, we could produce stable image quality measurements with high reproducibility. Compared with the segmentation mask guided synthesis, our UM-guided synthesis provided high-quality synthetic images with significantly higher fidelity, variety, and utility (p < 0.05 by Wilcoxon Signed Ranked test).
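The core of the unsupervised guidance step is deriving a coarse structural mask from the image itself rather than from manual labels. Below is a deliberately naive stand-in that partitions the image into square blocks and thresholds each block's mean intensity; the paper uses a proper superpixel algorithm, so this only illustrates the concept of annotation-free structural guidance.

```python
def grid_superpixel_mask(image, rows, cols, block):
    """Toy unsupervised mask: partition a flat row-major image into
    square blocks and label each pixel 1 if its block's mean intensity
    exceeds the global mean, else 0."""
    global_mean = sum(image) / len(image)
    mask = [0] * len(image)
    for br in range(0, rows, block):
        for bc in range(0, cols, block):
            pixels = [image[r * cols + c]
                      for r in range(br, min(br + block, rows))
                      for c in range(bc, min(bc + block, cols))]
            label = 1 if sum(pixels) / len(pixels) > global_mean else 0
            for r in range(br, min(br + block, rows)):
                for c in range(bc, min(bc + block, cols)):
                    mask[r * cols + c] = label
    return mask
```

Because such masks are computed from intensities alone, they can condition a generative model without consuming any of the scarce manual segmentation budget.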

Journal article

Liu Y, Yu Y, Ouyang J, Jiang B, Yang G, Ostmeier S, Wintermark M, Michel P, Liebeskind DS, Lansberg MG, Albers GW, Zaharchuk G et al., 2023, Functional outcome prediction in acute ischemic stroke using a fused imaging and clinical deep learning model, Stroke, Vol: 54, Pages: 2316-2327, ISSN: 0039-2499

Journal article

Li H, Tang Z, Nan Y, Yang G et al., 2023, Human treelike tubular structure segmentation: A comprehensive review and future perspectives (vol 151, 106241, 2022), Computers in Biology and Medicine, Vol: 163, ISSN: 0010-4825

Journal article

Deng F, Liu Z, Fang W, Niu L, Chu X, Cheng Q, Zhang Z, Zhou R, Yang G et al., 2023, MRI radiomics for brain metastasis sub-pathology classification from non-small cell lung cancer: a machine learning, multicenter study, Phys Eng Sci Med, Vol: 46, Pages: 1309-1320

The objective of this study is to develop a machine-learning model that can accurately distinguish between different histologic types of brain lesions in patients with non-small cell lung cancer (NSCLC) when it is not safe or feasible to perform a biopsy. To achieve this goal, the study utilized data from two patient cohorts: 116 patients from Xiangya Hospital and 35 patients from Yueyang Central Hospital. A total of eight machine learning algorithms, including Xgboost, were compared. Additionally, a 3-dimensional convolutional neural network was trained using transfer learning to further evaluate the performance of these models. The SHapley Additive exPlanations (SHAP) method was developed to determine the most important features in the best-performing model after hyperparameter optimization. The results showed that the area under the curve (AUC) for the classification of brain lesions as either lung adenocarcinoma or squamous carcinoma ranged from 0.60 to 0.87. The model based on single radiomics features extracted from contrast-enhanced T1 MRI and utilizing the Xgboost algorithm demonstrated the highest performance (AUC: 0.85) in the internal validation set and adequate performance (AUC: 0.80) in the independent external validation set. The SHAP values also revealed the impact of individual features on the classification results. In conclusion, the use of a radiomics model incorporating contrast-enhanced T1 MRI, Xgboost, and SHAP algorithms shows promise in accurately and interpretably identifying brain lesions in patients with NSCLC.
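The AUC values reported above (0.60 to 0.87) have a useful rank-based reading: the AUC equals the probability that a randomly chosen positive case is scored higher than a randomly chosen negative case (the Mann-Whitney U interpretation). A minimal computation from that definition, for illustration only:

```python
def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney U statistic:
    the fraction of (positive, negative) pairs the classifier ranks
    correctly, counting ties as half a win."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A perfect ranking of adenocarcinoma (1) vs squamous (0) scores 1.0:
print(auc([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0]))  # → 1.0
```

Under this reading, an AUC of 0.85 means the model ranks the correct sub-pathology above the incorrect one for 85% of case pairs, which is the basis of the internal-validation result quoted above.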

Journal article

Cho Y, Park S, Hwang SH, Ko M, Lim D-S, Yu CW, Park S-M, Kim M-N, Oh Y-W, Yang G et al., 2023, Aortic annulus detection based on deep learning for transcatheter aortic valve replacement using cardiac computed tomography, Journal of Korean Medical Science, Vol: 38, ISSN: 1011-8934

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
