Imperial College London

Dr Wenjia Bai

Faculty of Medicine, Department of Brain Sciences

Senior Lecturer
 
 
 

Contact

 

+44 (0)20 7594 8291
w.bai

 
 

Location

 

Room 212, Data Science Institute, William Penney Laboratory, South Kensington Campus



 

Publications


165 results found

Meng Q, Bai W, O'Regan DP, Rueckert D et al., 2023, DeepMesh: mesh-based cardiac motion tracking using deep learning, IEEE Transactions on Medical Imaging, ISSN: 0278-0062

3D motion estimation from cine cardiac magnetic resonance (CMR) images is important for the assessment of cardiac function and the diagnosis of cardiovascular diseases. Current state-of-the-art methods focus on estimating dense pixel-/voxel-wise motion fields in image space, which ignores the fact that motion estimation is only relevant and useful within the anatomical objects of interest, e.g., the heart. In this work, we model the heart as a 3D mesh consisting of epi- and endocardial surfaces. We propose a novel learning framework, DeepMesh, which propagates a template heart mesh to a subject space and estimates the 3D motion of the heart mesh from CMR images for individual subjects. In DeepMesh, the heart mesh of the end-diastolic frame of an individual subject is first reconstructed from the template mesh. Mesh-based 3D motion fields with respect to the end-diastolic frame are then estimated from 2D short- and long-axis CMR images. By developing a differentiable mesh-to-image rasterizer, DeepMesh is able to leverage 2D shape information from multiple anatomical views for 3D mesh reconstruction and mesh motion estimation. The proposed method estimates vertex-wise displacement and thus maintains vertex correspondences between time frames, which is important for the quantitative assessment of cardiac function across different subjects and populations. We evaluate DeepMesh on CMR images acquired from the UK Biobank. We focus on 3D motion estimation of the left ventricle in this work. Experimental results show that the proposed method quantitatively and qualitatively outperforms other image-based and mesh-based cardiac motion tracking methods.

Journal article

Curran L, Simoes Monteiro de Marvao A, Inglese P, McGurk K, Schiratti P-R, Clement A, Zheng S, Li S, Pua CJ, Shah M, Jafari M, Theotokis P, Buchan R, Jurgens S, Raphael C, Baksi A, Pantazis A, Halliday B, Pennell D, Bai W, Chin C, Tadros R, Bezzina C, Watkins H, Cook S, Prasad S, Ware J, O'Regan D et al., 2023, Genotype-phenotype taxonomy of hypertrophic cardiomyopathy, Circulation: Genomic and Precision Medicine, Vol: 16, Pages: 559-570, ISSN: 2574-8300

Background: Hypertrophic cardiomyopathy (HCM) is an important cause of sudden cardiac death associated with heterogeneous phenotypes, but there is no systematic framework for classifying morphology or assessing associated risks. Here we quantitatively survey genotype-phenotype associations in HCM to derive a data-driven taxonomy of disease expression. Methods: We enrolled 436 HCM patients (median age 60 years; 28.8% women) with clinical, genetic and imaging data. An independent cohort of 60 HCM patients from Singapore (median age 59 years; 11% women) and a reference population from UK Biobank (n = 16,691; mean age 55 years; 52.5% women) were also recruited. We used machine learning to analyse the three-dimensional structure of the left ventricle from cardiac magnetic resonance imaging and build a tree-based classification of HCM phenotypes. Genotype and mortality risk distributions were projected on the tree. Results: Carriers of pathogenic or likely pathogenic (P/LP) variants for HCM had lower left ventricular mass, but greater basal septal hypertrophy, with reduced lifespan (mean follow-up 9.9 years) compared to genotype-negative individuals (hazard ratio: 2.66; 95% confidence interval [CI]: 1.42-4.96; P < 0.002). Four main phenotypic branches were identified using unsupervised learning of three-dimensional shape: 1) non-sarcomeric hypertrophy with co-existing hypertension; 2) diffuse and basal asymmetric hypertrophy associated with outflow tract obstruction; 3) isolated basal hypertrophy; 4) milder non-obstructive hypertrophy enriched for familial sarcomeric HCM (odds ratio for P/LP variants: 2.18 [95% CI: 1.93-2.28, P = 0.0001]). Polygenic risk for HCM was also associated with different patterns and degrees of disease expression. The model was generalisable to an independent cohort (trustworthiness M1: 0.86-0.88). Conclusions: We report a data-driven taxonomy of HCM for identifying groups of patients with similar morphology while preserving a continuum of disease severity.

Journal article

Qiao M, Wang S, Qiu H, Marvao AD, O'Regan D, Rueckert D, Bai W et al., 2023, CHeart: a conditional spatio-temporal generative model for cardiac anatomy, IEEE Transactions on Medical Imaging, ISSN: 0278-0062

Two key questions in cardiac image analysis are to assess the anatomy and motion of the heart from images; and to understand how they are associated with non-imaging clinical factors such as gender, age and diseases. While the first question can often be addressed by image segmentation and motion tracking algorithms, our capability to model and answer the second question is still limited. In this work, we propose a novel conditional generative model to describe the 4D spatio-temporal anatomy of the heart and its interaction with non-imaging clinical factors. The clinical factors are integrated as the conditions of the generative modelling, which allows us to investigate how these factors influence the cardiac anatomy. We evaluate the model performance mainly in two tasks: anatomical sequence completion and sequence generation. The model achieves high performance in anatomical sequence completion, comparable to or outperforming other state-of-the-art generative models. In terms of sequence generation, given clinical conditions, the model can generate realistic synthetic 4D sequential anatomies that share similar distributions with the real data. We will share the code and the trained generative model at https://github.com/MengyunQ/CHeart.

Journal article

Metzler AB, Nathvani R, Sharmanska V, Bai W, Muller E, Moulds S, Agyei-Asabere C, Adjei-Boadih D, Kyere-Gyeabour E, Tetteh JD, Owusu G, Agyei-Mensah S, Baumgartner J, Robinson BE, Arku RE, Ezzati M et al., 2023, Phenotyping urban built and natural environments with high-resolution satellite images and unsupervised deep learning, Science of the Total Environment, Vol: 893, Pages: 1-14, ISSN: 0048-9697

Cities in the developing world are expanding rapidly, and undergoing changes to their roads, buildings, vegetation, and other land use characteristics. Timely data are needed to ensure that urban change enhances health, wellbeing and sustainability. We present and evaluate a novel unsupervised deep clustering method to classify and characterise the complex and multidimensional built and natural environments of cities into interpretable clusters using high-resolution satellite images. We applied our approach to a high-resolution (0.3 m/pixel) satellite image of Accra, Ghana, one of the fastest growing cities in sub-Saharan Africa, and contextualised the results with demographic and environmental data that were not used for clustering. We show that clusters obtained solely from images capture distinct interpretable phenotypes of the urban natural (vegetation and water) and built (building count, size, density, and orientation; length and arrangement of roads) environment, and population, either as a unique defining characteristic (e.g., bodies of water or dense vegetation) or in combination (e.g., buildings surrounded by vegetation or sparsely populated areas intermixed with roads). Clusters that were based on a single defining characteristic were robust to the spatial scale of analysis and the choice of cluster number, whereas those based on a combination of characteristics changed based on scale and number of clusters. The results demonstrate that satellite data and unsupervised deep learning provide a cost-effective, interpretable and scalable approach for real-time tracking of sustainable urban development, especially where traditional environmental and demographic data are limited and infrequent.

Journal article

Shah M, Inacio M, Lu C, Schiratti P-R, Zheng S, Clement A, Simoes Monteiro de Marvao A, Bai W, King A, Ware J, Wilkins M, Mielke J, Elci E, Kryukov I, McGurk K, Bender C, Freitag D, O'Regan D et al., 2023, Environmental and genetic predictors of human cardiovascular ageing, Nature Communications, Vol: 14, Pages: 1-15, ISSN: 2041-1723

Cardiovascular ageing is a process that begins early in life and leads to a progressive change in structure and decline in function due to accumulated damage across diverse cell types, tissues and organs, contributing to multi-morbidity. Damaging biophysical, metabolic and immunological factors exceed endogenous repair mechanisms, resulting in a pro-fibrotic state, cellular senescence and end-organ damage; however, the genetic architecture of cardiovascular ageing is not known. Here we use machine learning approaches to quantify cardiovascular age from image-derived traits of vascular function, cardiac motion and myocardial fibrosis, as well as conduction traits from electrocardiograms, in 39,559 participants of UK Biobank. Cardiovascular ageing is found to be significantly associated with common or rare variants in genes regulating sarcomere homeostasis, myocardial immunomodulation, and tissue responses to biophysical stress. Ageing is accelerated by cardiometabolic risk factors and we also identify prescribed medications that are potential modifiers of ageing. Through large-scale modelling of ageing across multiple traits, our results reveal insights into the mechanisms driving premature cardiovascular ageing and reveal potential molecular targets to attenuate age-related processes.

Journal article

Matthews PM, Gupta D, Mittal D, Bai W, Scalfari A, Pollock KG, Sharma V, Hill N et al., 2023, The association between brain volume loss and disability in multiple sclerosis: A systematic review, Multiple Sclerosis and Related Disorders, Vol: 74, ISSN: 2211-0348

Journal article

Liu C, Cheng S, Chen C, Qiao M, Zhang W, Shah A, Bai W, Arcucci R et al., 2023, M-FLAG: Medical Vision-Language Pre-training with Frozen Language Models and Latent Space Geometry Optimization, Pages: 637-647, ISSN: 0302-9743

Medical vision-language models enable co-learning and integrating features from medical imaging and clinical text. However, these models are not easy to train and the latent representation space can be complex. Here we propose a novel way for pre-training and regularising medical vision-language models. The proposed method, named Medical vision-language pre-training with Frozen language models and Latent spAce Geometry optimization (M-FLAG), leverages a frozen language model for training stability and efficiency and introduces a novel orthogonality loss to harmonize the latent space geometry. We demonstrate the potential of the pre-trained model on three downstream tasks: medical image classification, segmentation, and object detection. Extensive experiments across five public datasets demonstrate that M-FLAG significantly outperforms existing medical vision-language pre-training approaches and reduces the number of parameters by 78%. Notably, M-FLAG achieves outstanding performance on the segmentation task while using only 1% of the RSNA dataset, even outperforming ImageNet pre-trained models that have been fine-tuned using 100% of the data. The code can be found at https://github.com/cheliu-computation/M-FLAG-MICCAI2023.

Conference paper

Zhang W, Basaran B, Meng Q, Baugh M, Stelter J, Lung P, Patel U, Bai W, Karampinos D, Kainz B et al., 2023, MoCoSR: Respiratory Motion Correction and Super-Resolution for 3D Abdominal MRI, Pages: 121-131, ISSN: 0302-9743

Abdominal MRI is critical for diagnosing a wide variety of diseases. However, due to respiratory motion and other organ motions, it is challenging to obtain motion-free and isotropic MRI for clinical diagnosis. Imaging patients with inflammatory bowel disease (IBD) can be especially problematic, owing to involuntary bowel movements and difficulties with long breath-holds during acquisition. Therefore, this paper proposes a deep adversarial super-resolution (SR) reconstruction approach to address the problem of multi-task degradation by utilizing cycle consistency in a staged reconstruction model. We leverage a low-resolution (LR) latent space for motion correction (MC), followed by super-resolution reconstruction, compensating for imaging artefacts caused by respiratory motion and spontaneous bowel movements. This alleviates the need for semantic knowledge about the intestines and paired data. Both are examined through variations of our proposed approach and we compare them to conventional, model-based, and learning-based MC and SR methods. Learned image reconstruction approaches are believed to occasionally hide disease signs. We investigate this hypothesis by evaluating a downstream task, automatically scoring IBD in the area of the terminal ileum on the reconstructed images, and show evidence that our method does not suffer from a synthetic domain bias.

Conference paper

Qin C, Wang S, Chen C, Bai W, Rueckert D et al., 2023, Generative myocardial motion tracking via latent space exploration with biomechanics-informed prior, Medical Image Analysis, Vol: 83, ISSN: 1361-8415

Journal article

Gatidis S, Kart T, Fischer M, Winzeck S, Glocker B, Bai W, Bülow R, Emmel C, Friedrich L, Kauczor H-U, Keil T, Kröncke T, Mayer P, Niendorf T, Peters A, Pischon T, Schaarschmidt BM, Schmidt B, Schulze MB, Umutle L, Völzke H, Küstner T, Bamberg F, Schölkopf B, Rueckert D et al., 2022, Better together: data harmonization and cross-study analysis of abdominal MRI data from UK Biobank and the German National Cohort, Investigative Radiology, Vol: 58, Pages: 346-354, ISSN: 0020-9996

OBJECTIVES: The UK Biobank (UKBB) and German National Cohort (NAKO) are among the largest cohort studies, capturing a wide range of health-related data from the general population, including comprehensive magnetic resonance imaging (MRI) examinations. The purpose of this study was to demonstrate how MRI data from these large-scale studies can be jointly analyzed and to derive comprehensive quantitative image-based phenotypes across the general adult population. MATERIALS AND METHODS: Image-derived features of abdominal organs (volumes of liver, spleen, kidneys, and pancreas; volumes of kidney hilum adipose tissue; and fat fractions of liver and pancreas) were extracted from T1-weighted Dixon MRI data of 17,996 participants of UKBB and NAKO based on quality-controlled deep learning generated organ segmentations. To enable valid cross-study analysis, we first analyzed the data generating process using methods of causal discovery. We subsequently harmonized data from UKBB and NAKO using the ComBat approach for batch effect correction. We finally performed quantile regression on harmonized data across studies providing quantitative models for the variation of image-derived features stratified for sex and dependent on age, height, and weight. RESULTS: Data from 8791 UKBB participants (49.9% female; age, 63 ± 7.5 years) and 9205 NAKO participants (49.1% female, age: 51.8 ± 11.4 years) were analyzed. Analysis of the data generating process revealed direct effects of age, sex, height, weight, and the data source (UKBB vs NAKO) on image-derived features. Correction of data source-related effects resulted in markedly improved alignment of image-derived features between UKBB and NAKO. Cross-study analysis on harmonized data revealed comprehensive quantitative models for the phenotypic variation of abdominal organs across the general adult population. CONCLUSIONS: Cross-study analysis of MRI data from UKBB and NAKO as proposed in this work can be helpful for futur

Journal article

Ouyang C, Chen C, Li S, Li Z, Qin C, Bai W, Rueckert D et al., 2022, Causality-inspired single-source domain generalization for medical image segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 1095-1106, ISSN: 0278-0062

Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. In this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.

Journal article

Kart T, Fischer M, Winzeck S, Glocker B, Bai W, Buelow R, Emmel C, Friedrich L, Kauczor H-U, Keil T, Kroencke T, Mayer P, Niendorf T, Peters A, Pischon T, Schaarschmidt BM, Schmidt B, Schulze MB, Umutle L, Voelzke H, Kuestner T, Bamberg F, Schoelkopf B, Rueckert D, Gatidis S et al., 2022, Automated imaging-based abdominal organ segmentation and quality control in 20,000 participants of the UK Biobank and German National Cohort Studies, Scientific Reports, Vol: 12, ISSN: 2045-2322

Journal article

Chen C, Qin C, Ouyang C, Li Z, Wang S, Qiu H, Chen L, Tarroni G, Bai W, Rueckert D et al., 2022, Enhancing MR image segmentation with realistic adversarial data augmentation, Medical Image Analysis, Vol: 82, Pages: 1-15, ISSN: 1361-8415

The success of neural networks on medical image segmentation tasks typically relies on large labeled datasets for model training. However, acquiring and manually labeling a large medical image set is resource-intensive, expensive, and sometimes impractical due to data sharing and privacy issues. To address this challenge, we propose AdvChain, a generic adversarial data augmentation framework, aiming at improving both the diversity and effectiveness of training data for medical image segmentation tasks. AdvChain augments data with dynamic data augmentation, generating randomly chained photo-metric and geometric transformations to resemble realistic yet challenging imaging variations to expand training data. By jointly optimizing the data augmentation model and a segmentation network during training, challenging examples are generated to enhance network generalizability for the downstream task. The proposed adversarial data augmentation does not rely on generative networks and can be used as a plug-in module in general segmentation networks. It is computationally efficient and applicable for both low-shot supervised and semi-supervised learning. We analyze and evaluate the method on two MR image segmentation tasks: cardiac segmentation and prostate segmentation with limited labeled data. Results show that the proposed approach can alleviate the need for labeled data while improving model generalization ability, indicating its practical value in medical imaging applications.

Journal article

Basaran B, Matthews PM, Bai W, 2022, New lesion segmentation for multiple sclerosis brain images with imaging and lesion-aware augmentation, Frontiers in Neuroscience, Vol: 16, ISSN: 1662-453X

Multiple sclerosis (MS) is an inflammatory and demyelinating neurological disease of the central nervous system. Image-based biomarkers, such as lesions defined on magnetic resonance imaging (MRI), play an important role in MS diagnosis and patient monitoring. The detection of newly formed lesions provides crucial information for assessing disease progression and treatment outcome. Here, we propose a deep learning-based pipeline for new MS lesion detection and segmentation, which is built upon the nnU-Net framework. In addition to conventional data augmentation, we employ imaging and lesion-aware data augmentation methods, axial subsampling and CarveMix, to generate diverse samples and improve segmentation performance. The proposed pipeline is evaluated on the MICCAI 2021 MS new lesion segmentation challenge (MSSEG-2) dataset. It achieves an average Dice score of 0.510 and F1 score of 0.552 on cases with new lesions, and an average false positive lesion number nFP of 0.036 and false positive lesion volume VFP of 0.192 mm3 on cases with no new lesions. Our method outperforms other participating methods in the challenge and several state-of-the-art network architectures.

Journal article

Basaran BD, Qiao M, Matthews P, Bai W et al., 2022, Subject-specific lesion generation and pseudo-healthy synthesis for multiple sclerosis brain images, SASHIMI: Simulation and Synthesis in Medical Imaging, Publisher: Springer, Pages: 1-11, ISSN: 0302-9743

Understanding the intensity characteristics of brain lesions is key for defining image-based biomarkers in neurological studies and for predicting disease burden and outcome. In this work, we present a novel foreground-based generative method for modelling the local lesion characteristics that can both generate synthetic lesions on healthy images and synthesize subject-specific pseudo-healthy images from pathological images. Furthermore, the proposed method can be used as a data augmentation module to generate synthetic images for training brain image segmentation networks. Experiments on multiple sclerosis (MS) brain images acquired on magnetic resonance imaging (MRI) demonstrate that the proposed method can generate highly realistic pseudo-healthy and pseudo-pathological brain images. Data augmentation using the synthetic images improves the brain image segmentation performance compared to traditional data augmentation methods as well as a recent lesion-aware data augmentation technique, CarveMix. The code will be released at https://github.com/dogabasaran/lesion-synthesis.

Conference paper

Chen C, Li Z, Ouyang C, Sinclair M, Bai W, Rueckert D et al., 2022, MaxStyle: adversarial style composition for robust medical image segmentation, Medical Image Computing and Computer Assisted Interventions (MICCAI) 2022, Publisher: Springer, Pages: 151-161

Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework called MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness, by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. With extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle leads to significantly improved out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple, different, unseen sites and unknown image sequences under both low- and high-training data settings. The code can be found at https://github.com/cherise215/MaxStyle.

Conference paper

Francis C, Futschik M, Huang J, Bai W, Sargurupremraj M, Teumer A, Breteler M, Petretto E, SR HO A, Amouyel P, Engelter S, Bülow R, Völker U, Völzke H, Dörr M, Imtiaz M-A, Aziz A, Lohner V, Ware J, Debette S, Elliott P, Dehghan A, Matthews P et al., 2022, Genome-wide associations of aortic distensibility suggest causality for aortic aneurysms and brain white matter hyperintensities, Nature Communications, Vol: 13, ISSN: 2041-1723

Aortic dimensions and distensibility are key risk factors for aortic aneurysms and dissections, as well as for other cardiovascular and cerebrovascular diseases. We present genome-wide associations of ascending and descending aortic distensibility and area derived from cardiac magnetic resonance imaging (MRI) data of up to 32,590 Caucasian individuals in UK Biobank. We identify 102 loci (including 27 novel associations) tagging genes related to cardiovascular development, extracellular matrix production, smooth muscle cell contraction and heritable aortic diseases. Functional analyses highlight four signalling pathways associated with aortic distensibility (TGF-β, IGF, VEGF and PDGF). We identify distinct sex-specific associations with aortic traits. We develop co-expression networks associated with aortic traits and apply phenome-wide Mendelian randomization (MR-PheWAS), generating evidence for a causal role for aortic distensibility in development of aortic aneurysms. Multivariable MR suggests a causal relationship between aortic distensibility and cerebral white matter hyperintensities, mechanistically linking aortic traits and brain small vessel disease.

Journal article

Meng Q, Bai W, Liu T, Simoes Monteiro de Marvao A, O'Regan D, Rueckert D et al., 2022, MulViMotion: shape-aware 3D myocardial motion tracking from multi-view cardiac MRI, IEEE Transactions on Medical Imaging, Vol: 41, Pages: 1961-1974, ISSN: 0278-0062

Recovering the 3D motion of the heart from cine cardiac magnetic resonance (CMR) imaging enables the assessment of regional myocardial function and is important for understanding and analyzing cardiovascular disease. However, 3D cardiac motion estimation is challenging because the acquired cine CMR images are usually 2D slices which limit the accurate estimation of through-plane motion. To address this problem, we propose a novel multi-view motion estimation network (MulViMotion), which integrates 2D cine CMR images acquired in short-axis and long-axis planes to learn a consistent 3D motion field of the heart. In the proposed method, a hybrid 2D/3D network is built to generate dense 3D motion fields by learning fused representations from multi-view images. To ensure that the motion estimation is consistent in 3D, a shape regularization module is introduced during training, where shape information from multi-view images is exploited to provide weak supervision to 3D motion estimation. We extensively evaluate the proposed method on 2D cine CMR images from 580 subjects of the UK Biobank study for 3D motion tracking of the left ventricular myocardium. Experimental results show that the proposed method quantitatively and qualitatively outperforms competing methods.

Journal article

Mehta R, Filos A, Baid U, Sako C, McKinley R, Rebsamen M, Dätwyler K, Meier R, Radojewski P, Murugesan GK, Nalawade S, Ganesh C, Wagner B, Yu FF, Fei B, Madhuranthakam AJ, Maldjian JA, Daza L, Gómez C, Arbeláez P, Dai C, Wang S, Reynaud H, Mo Y, Angelini E, Guo Y, Bai W, Banerjee S, Pei L, Ak M, Rosas-González S, Zemmoura I, Tauber C, Vu MH, Nyholm T, Löfstedt T, Ballestar LM, Vilaplana V, McHugh H, Maso Talou G, Wang A, Patel J, Chang K, Hoebel K, Gidwani M, Arun N, Gupta S, Aggarwal M, Singh P, Gerstner ER, Kalpathy-Cramer J, Boutry N, Huard A, Vidyaratne L, Rahman MM, Iftekharuddin KM, Chazalon J, Puybareau E, Tochon G, Ma J, Cabezas M, Llado X, Oliver A, Valencia L, Valverde S, Amian M, Soltaninejad M, Myronenko A, Hatamizadeh A, Feng X, Dou Q, Tustison N, Meyer C, Shah NA, Talbar S, Weber M-A, Mahajan A, Jakab A, Wiest R, Fathallah-Shaykh HM, Nazeri A, Milchenko M, Marcus D, Kotrotsou A, Colen R, Freymann J, Kirby J, Davatzikos C, Menze B, Bakas S, Gal Y, Arbel T et al., 2022, QU-BraTS: MICCAI BraTS 2020 Challenge on Quantifying Uncertainty in Brain Tumor Segmentation - Analysis of Ranking Scores and Benchmarking Results, J Mach Learn Biomed Imaging, Vol: 2022

Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at https://github.com/RagMeh11/QU-BraTS.

Journal article

Wang Y, Blackie L, Miguel-Aliaga I, Bai W et al., 2022, Memory-efficient segmentation of high-resolution volumetric MicroCT images, Publisher: arXiv

In recent years, 3D convolutional neural networks have become the dominant approach for volumetric medical image segmentation. However, compared to their 2D counterparts, 3D networks introduce substantially more training parameters and a higher requirement for GPU memory. This has become a major limiting factor for designing and training 3D networks for high-resolution volumetric images. In this work, we propose a novel memory-efficient network architecture for 3D high-resolution image segmentation. The network incorporates both global and local features via a two-stage U-net-based cascaded framework and, at the first stage, a memory-efficient U-net (meU-net) is developed. The features learnt at the two stages are connected via post-concatenation, which further improves the information flow. The proposed segmentation method is evaluated on an ultra high-resolution microCT dataset with typically 250 million voxels per volume. Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.

Working paper

Thanaj M, Mielke J, McGurk K, Bai W, Savioli N, Simoes Monteiro de Marvao A, Meyer H, Zeng L, Sohler F, Lumbers T, Wilkins M, Ware J, Bender C, Rueckert D, MacNamara A, Freitag D, O'Regan D et al., 2022, Genetic and environmental determinants of diastolic heart function, Nature Cardiovascular Research, Vol: 1, Pages: 361-371, ISSN: 2731-0590

Diastole is the sequence of physiological events that occur in the heart during ventricular filling and principally depends on myocardial relaxation and chamber stiffness. Abnormal diastolic function is related to many cardiovascular disease processes and is predictive of health outcomes, but its genetic architecture is largely unknown. Here, we use machine learning cardiac motion analysis to measure diastolic functional traits in 39,559 participants of the UK Biobank and perform a genome-wide association study. We identified 9 significant, independent loci near genes that are associated with maintaining sarcomeric function under biomechanical stress and genes implicated in the development of cardiomyopathy. Age, sex and diabetes were independent predictors of diastolic function and we found a causal relationship between genetically-determined ventricular stiffness and incident heart failure. Our results provide insights into the genetic and environmental factors influencing diastolic function that are relevant for identifying causal relationships and potential tractable targets.

Journal article

Zhang D, Barbot A, Seichepine F, Lo FP-W, Bai W, Yang G-Z, Lo B et al., 2022, Micro-object pose estimation with sim-to-real transfer learning using small dataset, Communications Physics, Vol: 5, ISSN: 2399-3650

Journal article

Davies RH, Augusto JB, Bhuva A, Xue H, Treibel TA, Ye Y, Hughes RK, Bai W, Lau C, Shiwani H, Fontana M, Kozor R, Herrey A, Lopes LR, Maestrini V, Rosmini S, Petersen SE, Kellman P, Rueckert D, Greenwood JP, Captur G, Manisty C, Schelbert E, Moon JC et al., 2022, Precision measurement of cardiac structure and function in cardiovascular magnetic resonance using machine learning, Journal of Cardiovascular Magnetic Resonance, Vol: 24, ISSN: 1097-6647

Background: Measurement of cardiac structure and function from images (e.g. volumes, mass and derived parameters such as left ventricular (LV) ejection fraction [LVEF]) guides care for millions. This is best assessed using cardiovascular magnetic resonance (CMR), but image analysis is currently performed by individual clinicians, which introduces error. We sought to develop a machine learning algorithm for volumetric analysis of CMR images with demonstrably better precision than human analysis.

Methods: A fully automated machine learning algorithm was trained on 1923 scans (10 scanner models, 13 institutions, 9 clinical conditions, 60,000 contours) and used to segment the LV blood volume and myocardium. Performance was quantified by measuring precision on an independent multi-site validation dataset with multiple pathologies with n = 109 patients, scanned twice. This dataset was augmented with a further 1277 patients scanned as part of routine clinical care to allow qualitative assessment of generalization ability by identifying mis-segmentations. Machine learning algorithm (‘machine’) performance was compared to three clinicians (‘human’) and a commercial tool (cvi42, Circle Cardiovascular Imaging).

Findings: Machine analysis was quicker (20 s per patient) than human (13 min). Overall machine mis-segmentation rate was 1 in 479 images for the combined dataset, occurring mostly in rare pathologies not encountered in training. Without correcting these mis-segmentations, machine analysis had superior precision to three clinicians (e.g. scan-rescan coefficients of variation of human vs machine: LVEF 6.0% vs 4.2%, LV mass 4.8% vs 3.6%; both P < 0.05), translating to a 46% reduction in required trial sample size using an LVEF endpoint.

Conclusion: We present a fully automated algorithm for measuring LV structure and global systolic function that betters human performance for speed and precision.
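The reported sample-size reduction follows from the fact that required trial sample size scales with measurement variance. A back-of-the-envelope sketch under that simplifying assumption (it ignores biological between-subject variance, which the study's full power calculation would also account for) gives a reduction of the same order as the 46% quoted above:

```python
def sample_size_ratio(cov_new, cov_old):
    # Required n scales with sigma^2, so the ratio of sample sizes is
    # the squared ratio of the scan-rescan coefficients of variation.
    return (cov_new / cov_old) ** 2

ratio = sample_size_ratio(4.2, 6.0)  # LVEF CoV: machine vs human
reduction = 1.0 - ratio              # roughly half, under this simple model
```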

Journal article

Meng Q, Bai W, Liu T, Simoes Monteiro de Marvao A, O'Regan D, Rueckert D et al., 2022, Multiview Motion Estimation for 3D cardiac motion tracking

Code for the paper "MulViMotion: Shape-aware 3D Myocardial Motion Tracking from Multi-View Cardiac MRI"

Software

Dai C, Wang S, Mo Y, Angelini E, Guo Y, Bai W et al., 2022, Suggestive annotation of brain MR images with gradient-guided sampling, Medical Image Analysis, Vol: 77, Pages: 1-12, ISSN: 1361-8415

Machine learning has been widely adopted for medical image analysis in recent years given its promising performance in image segmentation and classification tasks. The success of machine learning, in particular supervised learning, depends on the availability of manually annotated datasets. For medical imaging applications, such annotated datasets are not easy to acquire; it takes a substantial amount of time and resources to curate an annotated medical image set. In this paper, we propose an efficient annotation framework for brain MR images that can suggest informative sample images for human experts to annotate. We evaluate the framework on two different brain image analysis tasks, namely brain tumour segmentation and whole brain segmentation. Experiments show that for the brain tumour segmentation task on the BraTS 2019 dataset, training a segmentation model with only 7% suggestively annotated image samples can achieve performance comparable to that of training on the full dataset. For whole brain segmentation on the MALC dataset, training with 42% suggestively annotated image samples can achieve comparable performance to training on the full dataset. The proposed framework demonstrates a promising way to save manual annotation cost and improve data efficiency in medical imaging applications.
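The heart of such a framework is a scoring rule that ranks the unlabelled pool. The sketch below uses mean predictive entropy as the informativeness score; this is a simpler stand-in for the paper's gradient-guided sampling, but with the same goal of surfacing the samples a model is least sure about. The toy probability maps are illustrative.

```python
import numpy as np

def suggest_samples(prob_maps, k):
    # Rank unlabelled images by mean per-voxel predictive entropy and
    # return the indices of the k most uncertain ones for annotation.
    eps = 1e-12
    scores = [(-(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))).mean()
              for p in prob_maps]
    return np.argsort(scores)[::-1][:k]

rng = np.random.default_rng(0)
# Toy pool: image 0 is confidently predicted, image 1 is maximally uncertain.
pool = [rng.choice([0.01, 0.99], size=(8, 8)), np.full((8, 8), 0.5)]
picked = suggest_samples(pool, k=1)  # selects the uncertain image
```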

Journal article

Wang Y, Blackie L, Miguel-Aliaga I, Bai W et al., 2022, Memory-efficient Segmentation of High-resolution Volumetric MicroCT Images, Pages: 1322-1335

In recent years, 3D convolutional neural networks have become the dominant approach for volumetric medical image segmentation. However, compared to their 2D counterparts, 3D networks introduce substantially more training parameters and higher requirement for the GPU memory. This has become a major limiting factor for designing and training 3D networks for high-resolution volumetric images. In this work, we propose a novel memory-efficient network architecture for 3D high-resolution image segmentation. The network incorporates both global and local features via a two-stage U-net-based cascaded framework and at the first stage, a memory-efficient U-net (meU-net) is developed. The features learnt at the two stages are connected via post-concatenation, which further improves the information flow. The proposed segmentation method is evaluated on an ultra high-resolution microCT dataset with typically 250 million voxels per volume. Experiments show that it outperforms state-of-the-art 3D segmentation methods in terms of both segmentation accuracy and memory efficiency.

Conference paper

Ouyang C, Wang S, Chen C, Li Z, Bai W, Kainz B, Rueckert D et al., 2022, Improved Post-hoc Probability Calibration for Out-of-Domain MRI Segmentation, 4th International Workshop on Uncertainty for Safe Utilization of Machine Learning in Medical Imaging (UNSURE), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 59-69, ISSN: 0302-9743

Conference paper

Zaydullin R, Bharath AA, Grisan E, Christensen-Jeffries K, Bai W, Tang M-X et al., 2022, Motion Correction Using Deep Learning Neural Networks - Effects of Data Representation, IEEE International Ultrasonics Symposium (IUS), Publisher: IEEE, ISSN: 1948-5719

Conference paper

Qiao M, Basaran BD, Qiu H, Wang S, Guo Y, Wang Y, Matthews PM, Rueckert D, Bai W et al., 2022, Generative Modelling of the Ageing Heart with Cross-Sectional Imaging and Clinical Data, 13th International Workshop on Statistical Atlases and Computational Modelling of the Heart (STACOM), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 3-12, ISSN: 0302-9743

Conference paper

Meng Q, Bai W, Liu T, O'Regan DP, Rueckert D et al., 2022, Mesh-Based 3D Motion Tracking in Cardiac MRI Using Deep Learning, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 248-258, ISSN: 0302-9743

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.