Publications
272 results found
Yang G, Alcazar C, Hu C, et al., 2022, Prolonging the Viability of Induced Pluripotent Stem Cell-Derived Endothelial Cells for Treatment of Peripheral Arterial Disease Using Engineered Biomaterials, Scientific Sessions of the American-Heart-Association on Vascular Discovery - From Genes to Medicine, Publisher: LIPPINCOTT WILLIAMS & WILKINS, ISSN: 1079-5642
Yang X, Li H, He W, et al., 2022, Quantification of changes in white matter tract fibers in idiopathic normal pressure hydrocephalus based on diffusion spectrum imaging, European Journal of Radiology, Vol: 149, ISSN: 0720-048X
Purpose: Patients with idiopathic normal pressure hydrocephalus (iNPH) present white-matter abnormalities. The analytical methods described to date measure only mean diffusion-parameter alterations in iNPH-specific brain regions or in a given fasciculus. This study quantitatively analyzed whether iNPH tract abnormalities are confined to specific sections or involve entire fibers, based on diffusion spectrum imaging (DSI). Method: Twenty-two patients with iNPH and 20 normally aging subjects were included. The 18 main tracts in the brain of each subject were extracted, and the diffusion parameters of 100 equidistant nodes on each fiber were calculated to quantitatively evaluate integrity changes in different regions along these tracts. Two diffusion metrics were measured: generalized fractional anisotropy (GFA) and fractional anisotropy (FA). Results: Compared with normal aging (P < 0.05), in iNPH the GFA and FA of the left uncinate fasciculus and the FA of the bilateral superior longitudinal fasciculus 1 were reduced across nearly the entire fiber (> 90% of nodes showing significant differences). Most other fasciculi presented GFA or FA alterations limited to specific regions. Increased and decreased GFA or FA co-occurred in different sections of the same fibers, including the corticospinal tract and the left posterior thalamic radiation, in iNPH. Conclusions: Few iNPH fibers presented diffusion abnormalities involving nearly the entire tract. Most fiber abnormalities in iNPH were confined to specific areas, and in a few cases different parts of the same fasciculus showed diverse diffusion alterations. This DSI-based tract analysis provides detailed information on iNPH white-matter changes.
Huang N, Yang G, Alcazar C, et al., 2022, Spatially Nanopatterned Scaffolds Promote the Survival of Induced Pluripotent Stem Cell-Derived Endothelial Cells in the Ischemic Limb, 6th World Congress of the Tissue-Engineering-and-Regenerative-Medicine-International-Society (TERMIS), Publisher: MARY ANN LIEBERT, INC, Pages: S583-S583, ISSN: 1937-3341
Wang C, Yang G, Papanastasiou G, 2022, Unsupervised image registration towards enhancing performance and explainability in cardiac and brain image analysis, Sensors, Vol: 22, ISSN: 1424-8220
Magnetic Resonance Imaging (MRI) typically recruits multiple sequences (defined here as “modalities”). As each modality is designed to offer different anatomical and functional clinical information, there are evident disparities in the imaging content across modalities. Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging, for example before imaging biomarkers can be derived and clinically evaluated across different MRI modalities, time phases and slices. Although commonly needed in real clinical scenarios, affine and non-rigid image registration has not been extensively investigated using a single unsupervised model architecture. In our work, we present an unsupervised deep learning registration methodology that can accurately model affine and non-rigid transformations simultaneously. Moreover, inverse-consistency is a fundamental inter-modality registration property that is not considered in deep learning registration algorithms. To address inverse consistency, our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations, and involves two factorised transformation networks (one per encoder-decoder channel) and an inverse-consistency loss to learn topology-preserving anatomical transformations. Overall, our model (named “FIRE”) shows improved performance against the reference standard baseline method (i.e., Symmetric Normalization implemented using the ANTs toolbox) on multi-modality brain 2D and 3D MRI and intra-modality cardiac 4D MRI data experiments. We focus on explaining model-data components to enhance model explainability in medical image registration. In computational time experiments, we show that the FIRE model performs in a memory-saving mode, as it can inherently learn topology-preserving image registration directly in the training phase. We therefore demonstrate an efficient and ve
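The inverse-consistency property described above (composing the forward and backward transformations should recover the identity map) can be illustrated independently of the FIRE architecture. The following is a minimal 1-D numpy sketch, not the paper's implementation; the helper names and the linear-interpolation composition are illustrative assumptions.

```python
import numpy as np

def compose_displacements(u_fwd, u_bwd):
    """Compose two 1-D displacement fields.

    Returns the displacement of the map x -> (x + u_bwd(x)) + u_fwd(x + u_bwd(x)),
    sampling u_fwd at the warped coordinates by linear interpolation.
    """
    n = len(u_fwd)
    grid = np.arange(n, dtype=float)
    warped = np.clip(grid + u_bwd, 0, n - 1)
    u_fwd_at_warped = np.interp(warped, grid, u_fwd)
    return u_bwd + u_fwd_at_warped

def inverse_consistency_loss(u_fwd, u_bwd):
    """Mean squared deviation of the composed transform from the identity."""
    return float(np.mean(compose_displacements(u_fwd, u_bwd) ** 2))

# A field composed with its inverse yields (near-)zero loss...
u = np.full(32, 1.5)                          # constant +1.5 pixel shift
loss_good = inverse_consistency_loss(u, -u)   # transforms cancel out
# ...while two non-inverse fields are penalised.
loss_bad = inverse_consistency_loss(u, u)
```

In a registration network this loss term is minimised jointly with the similarity loss, pushing the learned forward and backward fields towards being mutual inverses.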
Xing X, Wu Y, Firmin D, et al., 2022, Synthetic velocity mapping cardiac MRI coupled with automated left ventricle segmentation, Publisher: ArXiv
Temporal patterns of cardiac motion provide important information for cardiac disease diagnosis. This pattern can be obtained by three-directional CINE multi-slice left ventricular myocardial velocity mapping (3Dir MVM), which is a cardiac MR technique providing magnitude and phase information of the myocardial motion simultaneously. However, the long acquisition time limits the usage of this technique by causing breathing artifacts, while shortening the time lowers the temporal resolution and may provide an inaccurate assessment of cardiac motion. In this study, we propose a frame synthesis algorithm to increase the temporal resolution of 3Dir MVM data. Our algorithm features 1) three attention-based encoders that accept magnitude images, phase images, and myocardium segmentation masks, respectively, as inputs; 2) three decoders that output the interpolated frames and the corresponding myocardium segmentation results; and 3) loss functions highlighting myocardium pixels. Our algorithm can not only increase the temporal resolution of 3Dir MVMs but also generate the myocardium segmentation results at the same time.
Bonmatí LM, Blanco AM, Suárez A, et al., 2022, CHAIMELEON project: creation of a pan-European repository of health imaging data for the development of AI-powered cancer management tools, Frontiers in Oncology, Vol: 12, ISSN: 2234-943X
The CHAIMELEON project aims to set up a pan-European repository of health imaging data to be openly reused in AI experimentation for cancer management. This EU-funded project involves some of the most ambitious research in the fields of biomedical imaging, artificial intelligence and cancer treatment, addressing the four currently most prevalent types of cancer worldwide: lung, breast, prostate and colorectal. To allow this, clinical partners and external collaborators will populate the repository with multimodality (MR, CT, PET/CT) imaging and related clinical data for historic and newly diagnosed patients. Subsequently, AI developers will enable a multimodal analytical data engine facilitating the interpretation, extraction and exploitation of the information stored in the repository. The development and implementation of AI-powered pipelines will enable advancement towards automating data deidentification, curation, annotation, integrity securing and image harmonization. By the end of the project, the usability and performance of the repository as a tool fostering AI experimentation will be technically validated, including a validation subphase by world-class European AI developers participating in Open Challenges to the AI Community. Upon successful validation of the repository, a set of selected AI tools will undergo early in-silico validation in observational clinical studies coordinated by leading experts in the partner hospitals. Tool performance will be assessed, including external independent validation, on hallmark clinical decisions in response to some of the currently most important clinical end points in cancer. The project brings together a consortium of 18 European partners including hospitals, universities, R&D centers and private research companies, constituting an ecosystem of infrastructures, biobanks, AI/in-silico experimentation and cloud computing technologies in oncology.
Zhou X, Ye Q, Yang X, et al., 2022, AI-based medical e-diagnosis for fast and automatic ventricular volume measurement in patients with normal pressure hydrocephalus, Neural Computing and Applications, Vol: 35, Pages: 16011-16020, ISSN: 0941-0643
Based on CT and MRI images acquired from normal pressure hydrocephalus (NPH) patients, we aim to establish a multimodal, high-performance automatic ventricle segmentation method using machine learning to achieve efficient and accurate automatic measurement of the ventricular volume. First, we extract the brain CT and MRI images of 143 definite NPH patients. Second, we manually label the ventricular volume (VV) and intracranial volume (ICV). Then, we use machine learning methods to extract features and establish an automatic ventricle segmentation model. Finally, we verify the reliability of the model and achieve automatic measurement of VV and ICV. In CT images, the Dice similarity coefficient (DSC), intraclass correlation coefficient (ICC), Pearson correlation, and Bland–Altman analysis of the automatic versus manual segmentation of the VV were 0.95, 0.99, 0.99, and 4.2 ± 2.6, respectively. The results for the ICV were 0.96, 0.99, 0.99, and 6.0 ± 3.8, respectively. The whole process took 3.4 ± 0.3 s. In MRI images, the DSC, ICC, Pearson correlation, and Bland–Altman analysis of the automatic versus manual segmentation of the VV were 0.94, 0.99, 0.99, and 2.0 ± 0.6, respectively. The results for the ICV were 0.93, 0.99, 0.99, and 7.9 ± 3.8, respectively. The whole process took 1.9 ± 0.1 s. We have established a multimodal, high-performance automatic ventricle segmentation method that achieves efficient and accurate automatic measurement of the ventricular volume of NPH patients. This can help clinicians quickly and accurately assess the ventricles of NPH patients.
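The Dice similarity coefficient reported above measures the overlap between the automatic and manual segmentations. As a minimal illustration (not the authors' code; the toy masks are invented), it can be computed from two binary masks as follows:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)

# toy "ventricle" masks: automatic vs. manual segmentation
auto = np.zeros((8, 8), dtype=bool)
auto[2:6, 2:6] = True      # 16 foreground pixels
manual = np.zeros((8, 8), dtype=bool)
manual[3:7, 2:6] = True    # same size, shifted down one row
dsc = dice_coefficient(auto, manual)   # overlap = 12 px -> DSC = 24/32 = 0.75
```

A DSC of 1.0 means perfect overlap, so the reported values of 0.93-0.96 indicate near-complete agreement with the manual labels.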
Wang M, Zhang H, He Y, et al., 2022, Association Between Ischemic Stroke and COVID-19 in China: A Population-Based Retrospective Study, Frontiers in Medicine, Vol: 8
Cui X, Zhang P, Li Y, et al., 2022, MCAL: an anatomical knowledge learning model for myocardial segmentation in 2D echocardiography, IEEE Transactions on Ultrasonics, Ferroelectrics and Frequency Control, Vol: 69, Pages: 1277-1287, ISSN: 0885-3010
Segmentation of the left ventricular (LV) myocardium in 2D echocardiography is essential for clinical decision making, especially in geometry measurement and index computation. However, segmenting the myocardium is a time-consuming process and is challenging due to the fuzzy boundary caused by low image quality. Previous methods based on deep Convolutional Neural Networks (CNNs) employ the ground-truth label as class associations in pixel-level segmentation, or use label information to regulate the shape of the predicted outputs; such approaches limit effective feature enhancement for 2D echocardiography. We propose a training strategy named multi-constrained aggregate learning (referred to as MCAL), which leverages anatomical knowledge learned through ground-truth labels to infer segmented parts and discriminate boundary pixels. The new framework encourages the model to focus on features in accordance with the learned anatomical representations, and the training objectives incorporate a Boundary Distance Transform Weight (BDTW) to enforce a higher weight value on the boundary region, which helps to improve segmentation accuracy. The proposed method is built as an end-to-end framework with a top-down, bottom-up architecture and skip convolution fusion blocks, and is evaluated on two datasets (our dataset and the public CAMUS dataset). The comparison study shows that the proposed network outperforms the other segmentation baseline models, indicating that our method is beneficial for boundary pixel discrimination in segmentation.
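The idea of a boundary-distance-based weight map can be sketched as follows. This is an illustrative reconstruction, not the authors' BDTW implementation: it computes a city-block distance to the mask boundary with a BFS and maps it to a weight that decays away from the boundary; the function name, the exponential weighting, and the parameter values are all assumptions.

```python
import numpy as np
from collections import deque

def boundary_weight_map(mask, alpha=5.0, sigma=2.0):
    """Weight map emphasising pixels near the mask boundary.

    A boundary pixel is any pixel with a 4-neighbour of the opposite class
    (so both sides of the contour get distance 0). Distances are computed
    by BFS (city-block metric), then mapped to w = 1 + alpha * exp(-d / sigma),
    so boundary pixels get weight 1 + alpha and far pixels approach 1.
    """
    mask = np.asarray(mask, dtype=bool)
    h, w = mask.shape
    dist = np.full((h, w), np.inf)
    queue = deque()
    # seed the BFS with all boundary pixels
    for y in range(h):
        for x in range(w):
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and mask[ny, nx] != mask[y, x]:
                    dist[y, x] = 0.0
                    queue.append((y, x))
                    break
    # breadth-first expansion gives exact city-block distances
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny, nx] > dist[y, x] + 1:
                dist[ny, nx] = dist[y, x] + 1
                queue.append((ny, nx))
    return 1.0 + alpha * np.exp(-dist / sigma)
```

Multiplying a per-pixel cross-entropy or Dice loss by such a map makes errors on the fuzzy boundary cost more than errors deep inside the myocardium or background.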
Chen Y, Schönlieb C-B, Liò P, et al., 2022, AI-based reconstruction for fast MRI – a systematic review and meta-analysis, Proceedings of the IEEE, Vol: 110, Pages: 224-245, ISSN: 0018-9219
Compressed sensing (CS) has been playing a key role in accelerating the magnetic resonance imaging (MRI) acquisition process. With the resurgence of artificial intelligence, deep neural networks and CS algorithms are being integrated to redefine the state of the art of fast MRI. The past several years have witnessed substantial growth in the complexity, diversity, and performance of deep learning-based CS techniques dedicated to fast MRI. In this meta-analysis, we systematically review the deep learning-based CS techniques for fast MRI, describe key model designs, highlight breakthroughs, and discuss promising directions. We also introduce a comprehensive analysis framework and a classification system to assess the pivotal role of deep learning in CS-based acceleration for MRI.
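As context for why CS and deep-learning reconstruction are needed, the zero-filled baseline they improve upon can be sketched in a few lines of numpy. This is an illustrative toy, not a method from any of the reviewed papers: k-space is retrospectively undersampled and reconstructed by inverse FFT, which leaves aliasing artifacts that CS or learned priors must remove.

```python
import numpy as np

def zero_filled_recon(image, acceleration=4):
    """Naive reconstruction from retrospectively undersampled k-space.

    Keeps every `acceleration`-th phase-encode line (plus a few fully
    sampled centre lines for image contrast) and zero-fills the rest,
    i.e. the baseline that CS/deep-learning methods aim to beat.
    Returns the magnitude reconstruction and the sampled fraction.
    """
    kspace = np.fft.fftshift(np.fft.fft2(image))
    mask = np.zeros_like(kspace, dtype=bool)
    mask[::acceleration, :] = True           # regular undersampling
    centre = image.shape[0] // 2
    mask[centre - 2: centre + 3, :] = True   # fully sampled low frequencies
    recon = np.fft.ifft2(np.fft.ifftshift(kspace * mask))
    return np.abs(recon), mask.mean()

# square phantom; ~31% of k-space retained at 4x nominal acceleration
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0
recon, fraction = zero_filled_recon(phantom)
```

The residual error of `recon` against `phantom` is exactly the aliasing that a CS prior (sparsity) or a trained network is asked to suppress.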
Ye Q, Gao Y, Ding W, et al., 2022, Robust weakly supervised learning for COVID-19 recognition using multi-center CT images, Applied Soft Computing, Vol: 116, ISSN: 1568-4946
The world is currently experiencing an ongoing pandemic of an infectious disease named coronavirus disease 2019 (i.e., COVID-19), which is caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Computed Tomography (CT) plays an important role in assessing the severity of the infection and can also be used to identify symptomatic and asymptomatic COVID-19 carriers. With a surge in the cumulative number of COVID-19 patients, radiologists are increasingly stressed to examine the CT scans manually. Therefore, an automated 3D CT scan recognition tool is highly in demand, since manual analysis is time-consuming for radiologists and their fatigue can cause possible misjudgment. However, due to the various technical specifications of CT scanners located in different hospitals, the appearance of CT images can differ significantly, leading to the failure of many automated image recognition approaches. The multi-domain shift problem in multi-center and multi-scanner studies is therefore nontrivial; it is also crucial for dependable recognition and critical for reproducible and objective diagnosis and prognosis. In this paper, we propose a COVID-19 CT scan recognition model, namely the coronavirus information fusion and diagnosis network (CIFD-Net), that can efficiently handle the multi-domain shift problem via a new robust weakly supervised learning paradigm. Our model can resolve the problem of differing appearance in CT scan images reliably and efficiently while attaining higher accuracy compared to other state-of-the-art methods.
Jun C, Zhang H, Mohiaddin R, et al., 2022, Adaptive hierarchical dual consistency for semi-supervised left atrium segmentation on cross-domain data, IEEE Transactions on Medical Imaging, Vol: 41, Pages: 420-433, ISSN: 0278-0062
Semi-supervised learning is of great significance for left atrium (LA) segmentation model learning with insufficient labelled data. Generalising semi-supervised learning to cross-domain data is of high importance for further improving model robustness. However, the widely existing distribution difference and sample mismatch between different data domains hinder the generalisation of semi-supervised learning. In this study, we alleviate these problems by proposing an Adaptive Hierarchical Dual Consistency (AHDC) framework for semi-supervised LA segmentation on cross-domain data. The AHDC mainly consists of a Bidirectional Adversarial Inference module (BAI) and a Hierarchical Dual Consistency learning module (HDC). The BAI overcomes the difference of distributions and the sample mismatch between two different domains. It mainly learns two mapping networks adversarially to obtain two matched domains through mutual adaptation. The HDC investigates a hierarchical dual learning paradigm for cross-domain semi-supervised segmentation based on the obtained matched domains. It mainly builds two dual-modelling networks for mining the complementary information in both intra-domain and inter-domain learning. For the intra-domain learning, a consistency constraint is applied to the dual-modelling targets to exploit the complementary modelling information. For the inter-domain learning, a consistency constraint is applied to the LAs modelled by the two dual-modelling networks to exploit the complementary knowledge among different data domains. We demonstrated the performance of our proposed AHDC on four 3D late gadolinium enhancement cardiac MR (LGE-CMR) datasets from different centres and a 3D CT dataset. Compared to other state-of-the-art methods, our proposed AHDC achieved higher segmentation accuracy, which indicated its capability in cross-domain semi-supervised LA segmentation.
Zhang H, Wu Y, He Y, et al., 2022, Age-Related Risk Factors and Complications of Patients With COVID-19: A Population-Based Retrospective Study, Frontiers in Medicine, Vol: 8
Guan X, Yang G, Ye J, et al., 2022, 3D AGSE-VNet: an automatic brain tumor MRI data segmentation framework, BMC Medical Imaging, Vol: 22, ISSN: 1471-2342
Background: Glioma is the most common malignant brain tumor, with a high morbidity rate and a mortality rate of more than three percent, seriously endangering human health. MRI is the main method of imaging brain tumors in the clinic. Segmentation of brain tumor regions from multi-modal MRI scans is helpful for treatment inspection, post-diagnosis monitoring, and evaluation of patient outcomes. However, brain tumor segmentation in clinical practice is still performed manually, which is time-consuming and shows large performance differences between operators; a consistent and accurate automatic segmentation method is therefore urgently needed. With the continuous development of deep learning, researchers have designed many automatic segmentation algorithms; however, some problems remain: 1) most segmentation research stays on the 2D plane, which reduces the accuracy of 3D image feature extraction to a certain extent; 2) MRI images have gray-scale offset fields that make it difficult to delineate contours accurately. Methods: To meet the above challenges, we propose an automatic brain tumor MRI data segmentation framework called AGSE-VNet. In our study, a Squeeze-and-Excite (SE) module is added to each encoder and an Attention Guide Filter (AG) module to each decoder, using the channel relationship to automatically enhance the useful information in the channels and suppress the useless information, and using the attention mechanism to guide the edge information and remove the influence of irrelevant information such as noise. Results: We used the BraTS2020 challenge online verification tool to evaluate our approach. The Dice scores of the whole tumor (WT), tumor core (TC) and enhancing tumor (ET) are 0.68, 0.85 and 0.70, respectively. Conclusion: Although MRI images have different intensities, AGSE-VNet is not affected by the size of the tumor, and can more accurately extract t
Hao J, Wang C, Yang G, et al., 2022, Annealing genetic GAN for imbalanced web data learning, IEEE Transactions on Multimedia, Vol: 24, Pages: 1164-1174, ISSN: 1520-9210
Class imbalance is one of the most basic and important problems of web data. The key to overcoming class imbalance is to increase the effective instances of the minority class, that is, data augmentation. Generative Adversarial Networks (GANs), which have recently been successfully applied in the field of image generation, can be used for data augmentation because they can learn the data distribution given ample training instances and generate more data. However, learning the distribution from imbalanced data can make GANs easily get stuck in a local optimum. In this work, we propose a new training strategy called Annealing Genetic GAN (AGGAN), which incorporates a simulated annealing genetic algorithm into the training process of GANs. This helps GANs avoid the local-optimum trapping problem, which easily occurs when the training set is imbalanced. Unlike existing GANs, which use a fixed adversarial learning objective to alternately train a generator, we use multiple adversarial learning objectives to train a set of generators and use the Metropolis criterion from simulated annealing to decide whether a generator should be updated. More specifically, the Metropolis criterion accepts worse solutions with a certain probability, so it can make AGGAN escape from a local optimum and find a better solution. Theory and mathematical analysis provide strong support for the proposed training strategy, and experiments on several datasets demonstrate that AGGAN convincingly solves the class imbalance problem and reduces the training problems inherent in existing GANs.
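The Metropolis acceptance rule that AGGAN borrows from simulated annealing can be sketched in isolation. This is an illustrative sketch, not the authors' implementation, and the function names are hypothetical: a worse candidate is accepted with probability exp(-Δ/T), so at high temperature early in training the search can escape local optima, while a decaying temperature makes it increasingly greedy.

```python
import math
import random

def metropolis_accept(loss_old, loss_new, temperature, rng=None):
    """Metropolis criterion: always accept an improvement; accept a
    worse candidate with probability exp(-(loss_new - loss_old) / T)."""
    rng = rng or random.Random()
    if loss_new <= loss_old:
        return True
    return rng.random() < math.exp(-(loss_new - loss_old) / temperature)

# a geometric annealing schedule for the temperature over update steps
T0, decay = 1.0, 0.9
temps = [T0 * decay ** k for k in range(5)]   # 1.0, 0.9, 0.81, ...
```

At temperature 1.0 a candidate that is worse by 0.5 is accepted with probability exp(-0.5), roughly 61% of the time; as the temperature decays towards zero, the same candidate is almost never accepted.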
Chen J, Yang G, Khan H, et al., 2022, JAS-GAN: generative adversarial network based joint atrium and scar segmentations on unbalanced atrial targets, IEEE Journal of Biomedical and Health Informatics, Vol: 26, Pages: 103-114, ISSN: 2168-2194
Automated and accurate segmentation of the left atrium (LA) and atrial scars from late gadolinium-enhanced cardiac magnetic resonance (LGE CMR) images is in high demand for quantifying atrial scars. The previous quantification of atrial scars relies on a two-phase segmentation for LA and atrial scars due to their large volume difference (unbalanced atrial targets). In this paper, we propose an inter-cascade generative adversarial network, namely JAS-GAN, to segment the unbalanced atrial targets from LGE CMR images automatically and accurately in an end-to-end way. Firstly, JAS-GAN investigates an adaptive attention cascade to automatically correlate the segmentation tasks of the unbalanced atrial targets. The adaptive attention cascade mainly models the inclusion relationship of the two unbalanced atrial targets, where the estimated LA acts as the attention map to adaptively focus on the small atrial scars roughly. Then, an adversarial regularization is applied to the segmentation tasks of the unbalanced atrial targets to make a consistent optimization. It mainly forces the estimated joint distribution of LA and atrial scars to match the real one. We evaluated the performance of our JAS-GAN on a 3D LGE CMR dataset with 192 scans. Compared with state-of-the-art methods, our proposed approach yielded better segmentation performance (average Dice Similarity Coefficient (DSC) values of 0.946 and 0.821 for LA and atrial scars, respectively), which indicated the effectiveness of our proposed approach for segmenting unbalanced atrial targets.
Yang G, Aviles-Rivero A, Roberts M, et al., 2022, Preface, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 13413 LNCS, Pages: v-vi, ISSN: 0302-9743
Ai R, Jin X, Tang B, et al., 2022, Aging and Alzheimer’s Disease: Application of Artificial Intelligence in Mechanistic Studies, Diagnosis, and Drug Development, Artificial Intelligence in Medicine, Pages: 1057-1072, ISBN: 9783030645724
Artificial intelligence (AI) implies the use of a machine with limited human interference to model intelligent actions. It covers a broad range of research, from machine intelligence for computer vision, robotics, and natural language processing to more theoretical machine learning algorithm design and, recently, “deep learning” development. The application of AI in medical fields is booming, from data collection, analysis, and mechanistic prediction to clinical disease diagnosis and drug development. In this chapter, we focus on the challenges in the study of aging and the age-predisposed Alzheimer’s disease (AD) and summarize how AI can help address these questions. We finally provide future perspectives on the use of AI in aging research and AD.
Xing X, Huang J, Nan Y, et al., 2022, CS²: A Controllable and Simultaneous Synthesizer of Images and Annotations with Minimal Human Intervention, Pages: 3-12, ISSN: 0302-9743
The scarcity of image data and corresponding expert annotations limits the training capacity of AI diagnostic models and can inhibit their performance. To address this problem of data and label scarcity, generative models have been developed to augment training datasets. Previously proposed generative models usually require manually adjusted annotations (e.g., segmentation masks) or need pre-labeling. However, studies have found that these pre-labeling based methods can induce hallucinating artifacts, which might mislead downstream clinical tasks, while manual adjustment can be onerous and subjective. To avoid manual adjustment and pre-labeling, we propose a novel controllable and simultaneous synthesizer (dubbed CS²) in this study to generate both realistic images and corresponding annotations at the same time. Our CS² model is trained and validated using high resolution CT (HRCT) data collected from COVID-19 patients to realize efficient infection segmentation with minimal human intervention. Our contributions include 1) a conditional image synthesis network that receives both style information from reference CT images and structural information from unsupervised segmentation masks, and 2) a corresponding segmentation mask synthesis network to automatically segment these synthesized images simultaneously. Our experimental studies on HRCT scans collected from COVID-19 patients demonstrate that our CS² model can lead to realistic synthesized datasets and promising segmentation results for COVID infections compared to the state-of-the-art nnUNet trained and fine-tuned in a fully supervised manner.
Yang G, Lv J, Chen Y, et al., 2022, Generative Adversarial Network Powered Fast Magnetic Resonance Imaging—Comparative Study and New Perspectives, Intelligent Systems Reference Library, Pages: 305-339
Magnetic Resonance Imaging (MRI) is a vital component of medical imaging. When compared to other image modalities, it has advantages such as the absence of radiation, superior soft tissue contrast, and complementary multiple sequence information. However, one drawback of MRI is its comparatively slow scanning and reconstruction, limiting its usage in some clinical applications when imaging time is critical. Traditional compressive sensing based MRI (CS-MRI) reconstruction can speed up MRI acquisition, but suffers from a long iterative process and noise-induced artefacts. Recently, Deep Neural Networks (DNNs) have been used in sparse MRI reconstruction models to recreate relatively high-quality images from heavily undersampled k-space data, allowing for much faster MRI scanning. However, there are still some hurdles to tackle. For example, directly training DNNs based on L1/L2 distance to the target fully sampled images can result in blurry reconstruction, because L1/L2 loss can only enforce overall image or patch similarity and does not take into account local information such as anatomical sharpness. It is also hard to preserve fine image details while maintaining a natural appearance. More recently, Generative Adversarial Network (GAN) based methods have been proposed to solve fast MRI with enhanced image perceptual quality. The encoder obtains a latent space for the undersampled image, and the image is reconstructed by the decoder using the GAN loss. In this chapter, we review the GAN powered fast MRI methods with a comparative study on various anatomical datasets to demonstrate the generalisability and robustness of this kind of fast MRI while providing future perspectives.
Liu X, Xu C, Rao S, et al., 2022, Physiologically personalized coronary blood flow model to improve the estimation of noninvasive fractional flow reserve, Medical Physics, Vol: 49, Pages: 583-597, ISSN: 0094-2405
- Citations: 9
Xie C, Zhuang X-X, Niu Z, et al., 2022, Amelioration of Alzheimer's disease pathology by mitophagy inducers identified via machine learning and a cross-species workflow, Nature Biomedical Engineering, Vol: 6, Pages: 76-+, ISSN: 2157-846X
- Citations: 79
Yang G, Ye Q, Xia J, 2022, Unbox the black-box for the medical explainable AI via multi-modal and multi-centre data fusion: a mini-review, two showcases and beyond, Information Fusion, Vol: 77, Pages: 29-52, ISSN: 1566-2535
Explainable Artificial Intelligence (XAI) is an emerging research topic of machine learning aimed at unboxing how AI systems' black-box choices are made. This research field inspects the measures and models involved in decision-making and seeks solutions to explain them explicitly. Many machine learning algorithms cannot manifest how and why a decision has been cast. This is particularly true of the most popular deep neural network approaches currently in use. Consequently, our confidence in AI systems can be hindered by the lack of explainability in these black-box models. XAI is becoming more and more crucial for deep learning powered applications, especially for medical and healthcare studies, even though in general these deep neural networks can return an arresting dividend in performance. The insufficient explainability and transparency in most existing AI systems can be one of the major reasons that successful implementation and integration of AI tools into routine clinical practice are uncommon. In this study, we first survey the current progress of XAI and in particular its advances in healthcare applications. We then introduce our solutions for XAI leveraging multi-modal and multi-centre data fusion, subsequently validated in two showcases following real clinical scenarios. Comprehensive quantitative and qualitative analyses prove the efficacy of our proposed XAI solutions, from which we can envisage successful applications in a broader range of clinical questions.
Wang S, Yang G, 2022, A Novel Automated Classification and Segmentation for COVID-19 using 3D CT Scans, 2022 IEEE 5th International Conference on Image Processing Applications and Systems (IPAS)
Yeung M, Watts T, Yang G, 2022, From Astronomy to Histology: Adapting the FellWalker Algorithm to Deep Nuclear Instance Segmentation, 26th Annual Conference on Medical Image Understanding and Analysis (MIUA), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 547-561, ISSN: 0302-9743
Li H, Nan Y, Yang G, 2022, LKAU-Net: 3D Large-Kernel Attention-Based U-Net for Automatic MRI Brain Tumor Segmentation, 26th Annual Conference on Medical Image Understanding and Analysis (MIUA), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 313-327, ISSN: 0302-9743
- Citations: 4
Yeung M, Rundo L, Sala E, et al., 2022, Focal Attention Networks: Optimising Attention for Biomedical Image Segmentation, 2022 IEEE International Symposium on Biomedical Imaging (IEEE ISBI 2022), ISSN: 1945-7928
Tanzer M, Ferreira P, Scott A, et al., 2022, Faster Diffusion Cardiac MRI with Deep Learning-Based Breath Hold Reduction, Medical Image Understanding and Analysis (MIUA 2022), Vol: 13413, Pages: 101-115, ISSN: 0302-9743
- Citations: 1
Tang Z, Yang N, Walsh S, et al., 2022, Adversarial Transformer for Repairing Human Airway Segmentation, arXiv preprint arXiv:2210.12029
Yang G, Rao A, Fernandez-Maloigne C, et al., 2022, Explainable AI (XAI) in Biomedical Signal and Image Processing: Promises and Challenges, 2022 IEEE International Conference on Image Processing (ICIP), Pages: 1531-1535, ISSN: 1522-4880
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.