Rueckert D, Schnabel JA, 2020, Model-Based and Data-Driven Strategies in Medical Image Computing, Proceedings of the IEEE, Vol: 108, Pages: 110-124, ISSN: 0018-9219
© 2019 IEEE. Model-based approaches for image reconstruction, analysis, and interpretation have made significant progress over the past decades. Many of these approaches are based on either mathematical, physical, or biological models. A challenge for these approaches is the modeling of the underlying processes (e.g., the physics of image acquisition or the patho-physiology of a disease) with appropriate levels of detail and realism. With the availability of large amounts of imaging data and machine learning (in particular deep learning) techniques, data-driven approaches have become more widespread for use in different tasks in reconstruction, analysis, and interpretation. These approaches learn statistical models directly from labeled or unlabeled image data and have been shown to be very powerful for extracting clinically useful information from medical imaging. While these data-driven approaches often outperform traditional model-based approaches, their clinical deployment often poses challenges in terms of robustness, generalization ability, and interpretability. In this article, we discuss what developments have motivated the shift from model-based approaches toward data-driven strategies and what potential problems are associated with the move toward purely data-driven approaches, in particular deep learning. We also discuss some of the open challenges for data-driven approaches, e.g., generalization to new unseen data (e.g., transfer learning), robustness to adversarial attacks, and interpretability. Finally, we conclude with a discussion on how these approaches may lead to the development of more closely coupled imaging pipelines that are optimized in an end-to-end fashion.
Jokinen H, Koikkalainen J, Laakso HM, et al., 2020, Global Burden of Small Vessel Disease-Related Brain Changes on MRI Predicts Cognitive and Functional Decline, STROKE, Vol: 51, Pages: 170-178, ISSN: 0039-2499
Rachmadi MF, Valdés-Hernández MDC, Li H, et al., 2020, Limited One-time Sampling Irregularity Map (LOTS-IM) for Automatic Unsupervised Assessment of White Matter Hyperintensities and Multiple Sclerosis Lesions in Structural Brain Magnetic Resonance Images., Comput Med Imaging Graph, Vol: 79
We present the application of limited one-time sampling irregularity map (LOTS-IM): a fully automatic unsupervised approach to extract brain tissue irregularities in magnetic resonance images (MRI), for quantitatively assessing white matter hyperintensities (WMH) of presumed vascular origin, and multiple sclerosis (MS) lesions and their progression. LOTS-IM generates an irregularity map (IM) that represents all voxels as irregularity values with respect to the ones considered "normal". Unlike probability values, IM represents both regular and irregular regions in the brain based on the original MRI's texture information. We evaluated and compared the use of IM for WMH and MS lesion segmentation on T2-FLAIR MRI with the state-of-the-art unsupervised lesion segmentation method, Lesion Growth Algorithm from the public toolbox Lesion Segmentation Toolbox (LST-LGA), with several well-established conventional supervised machine learning schemes and with state-of-the-art supervised deep learning methods for WMH segmentation. In our experiments, LOTS-IM outperformed the unsupervised method LST-LGA on WMH segmentation, both in performance and processing speed, thanks to the limited one-time sampling scheme and its implementation on GPU. Our method also outperformed supervised conventional machine learning algorithms (i.e., support vector machine (SVM) and random forest (RF)) and deep learning algorithms (i.e., deep Boltzmann machine (DBM) and convolutional encoder network (CEN)), while yielding comparable results to the convolutional neural network schemes that rank top of the algorithms developed to date for this purpose (i.e., UResNet and UNet). LOTS-IM also performed well on MS lesion segmentation, performing similarly to LST-LGA. On the other hand, the high sensitivity of IM in depicting signal change makes it suitable for assessing MS progression, although care must be taken with signal changes that do not reflect true pathology.
Biffi C, Cerrolaza Martinez JJ, Tarroni G, et al., Explainable Anatomical Shape Analysis through Deep Hierarchical Generative Models, IEEE Transactions on Medical Imaging, ISSN: 0278-0062
Chen L, Lobotesis K, Rueckert D, et al., 2019, Timing an ischaemic stroke with just plain CT (and a little deep learning), Publisher: SAGE PUBLICATIONS LTD, Pages: 29-29, ISSN: 1747-4930
Chen L, Bentley P, Mori K, et al., 2019, Self-supervised learning for medical image analysis using image context restoration., Medical Image Analysis, Vol: 58, Pages: 1-12, ISSN: 1361-8415
Machine learning, particularly deep learning, has boosted medical image analysis over the past years. Training a good deep learning model requires a large amount of labelled data. However, it is often difficult to obtain a sufficient number of labelled images for training. In many scenarios the dataset in question consists of more unlabelled images than labelled ones. Therefore, boosting the performance of machine learning models by using unlabelled as well as labelled data is an important but challenging problem. Self-supervised learning presents one possible solution to this problem. However, existing self-supervised learning strategies applicable to medical images often lead to only marginal performance improvements. In this paper, we propose a novel self-supervised learning strategy based on context restoration in order to better exploit unlabelled images. The context restoration strategy has three major features: 1) it learns semantic image features; 2) these image features are useful for different types of subsequent image analysis tasks; and 3) its implementation is simple. We validate the context restoration strategy on three common problems in medical imaging: classification, localisation, and segmentation. Specifically, we apply it to scan plane detection in fetal 2D ultrasound images, to localising abdominal organs in CT images, and to segmenting brain tumours in multi-modal MR images. In all three cases, self-supervised learning based on context restoration learns useful semantic features and leads to improved machine learning models for the above tasks.
Jaubert O, Cruz G, Bustin A, et al., 2019, Water-fat Dixon cardiac magnetic resonance fingerprinting, MAGNETIC RESONANCE IN MEDICINE, ISSN: 0740-3194
Leiner T, Rueckert D, Suinesiaputra A, et al., 2019, Machine learning in cardiovascular magnetic resonance: basic concepts and applications, JOURNAL OF CARDIOVASCULAR MAGNETIC RESONANCE, Vol: 21, ISSN: 1097-6647
Ktena SI, Schirmer MD, Etherton MR, et al., 2019, Brain Connectivity Measures Improve Modeling of Functional Outcome After Acute Ischemic Stroke, STROKE, Vol: 50, Pages: 2761-2767, ISSN: 0039-2499
Balaban G, Halliday BP, Bai W, et al., 2019, Scar shape analysis and simulated electrical instabilities in a non-ischemic dilated cardiomyopathy patient cohort., PLoS Computational Biology, Vol: 15, Pages: 1-18, ISSN: 1553-734X
This paper presents a morphological analysis of fibrotic scarring in non-ischemic dilated cardiomyopathy, and its relationship to electrical instabilities which underlie reentrant arrhythmias. Two dimensional electrophysiological simulation models were constructed from a set of 699 late gadolinium enhanced cardiac magnetic resonance images originating from 157 patients. Areas of late gadolinium enhancement (LGE) in each image were assigned one of 10 possible microstructures, which modelled the details of fibrotic scarring an order of magnitude below the MRI scan resolution. A simulated programmed electrical stimulation protocol tested each model for the possibility of generating either a transmural block or a transmural reentry. The outcomes of the simulations were compared against morphological LGE features extracted from the images. Models which blocked or reentered, grouped by microstructure, were significantly different from one another in myocardial-LGE interface length, number of components and entropy, but not in relative area and transmurality. With an unknown microstructure, transmurality alone was the best predictor of block, whereas a combination of interface length, transmurality and number of components was the best predictor of reentry in linear discriminant analysis.
Bhuva A, Bai W, Lau C, et al., 2019, A Multicenter, Scan-Rescan, Human and Machine Learning CMR Study to Test Generalizability and Precision in Imaging Biomarker Analysis., Circ Cardiovasc Imaging, Vol: 12
BACKGROUND: Automated analysis of cardiac structure and function using machine learning (ML) has great potential, but is currently hindered by poor generalizability. Comparison is traditionally against clinicians as a reference, ignoring inherent human inter- and intraobserver error, and ensuring that ML cannot demonstrate superiority. Measuring precision (scan:rescan reproducibility) addresses this. We compared precision of ML and humans using a multicenter, multi-disease, scan:rescan cardiovascular magnetic resonance data set. METHODS: One hundred ten patients (5 disease categories, 5 institutions, 2 scanner manufacturers, and 2 field strengths) underwent scan:rescan cardiovascular magnetic resonance (96% within one week). After identification of the most precise human technique, left ventricular chamber volumes, mass, and ejection fraction were measured by an expert, a trained junior clinician, and a fully automated convolutional neural network trained on 599 independent multicenter disease cases. Scan:rescan coefficient of variation and 1000 bootstrapped 95% CIs were calculated and compared using mixed linear effects models. RESULTS: Clinicians can be confident in detecting a 9% change in left ventricular ejection fraction, with greater than half of coefficient of variation attributable to intraobserver variation. Expert, trained junior, and automated scan:rescan precision were similar (for left ventricular ejection fraction, coefficient of variation 6.1 [5.2%-7.1%], P=0.2581; 8.3 [5.6%-10.3%], P=0.3653; 8.8 [6.1%-11.1%], P=0.8620). Automated analysis was 186× faster than humans (0.07 versus 13 minutes). CONCLUSIONS: Automated ML analysis is faster with similar precision to the most precise human techniques, even when challenged with real-world scan:rescan data. Assessment of multicenter, multi-vendor, multi-field strength scan:rescan data (available at www.thevolumesresource.com) permits a generalizable assessment of ML precision and may facilitate direct
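The scan:rescan coefficient of variation with bootstrapped confidence intervals reported above can be sketched as follows. This is a minimal illustration under common definitions (root-mean-square within-subject CoV, percentile bootstrap), not the study's actual analysis code; all function names are hypothetical.

```python
import numpy as np

def scan_rescan_cov(scan1, scan2):
    """Within-subject coefficient of variation (%) between paired
    scan and rescan measurements (e.g. ejection fraction per patient)."""
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    # Per-subject within-pair standard deviation and mean
    within_sd = np.abs(scan1 - scan2) / np.sqrt(2)
    pair_mean = (scan1 + scan2) / 2
    # Root-mean-square CoV across subjects, expressed as a percentage
    return 100 * np.sqrt(np.mean((within_sd / pair_mean) ** 2))

def bootstrap_ci(scan1, scan2, n_boot=1000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the CoV,
    resampling subjects with replacement."""
    rng = np.random.default_rng(seed)
    scan1, scan2 = np.asarray(scan1, float), np.asarray(scan2, float)
    n = len(scan1)
    stats = [scan_rescan_cov(scan1[idx], scan2[idx])
             for idx in (rng.integers(0, n, n) for _ in range(n_boot))]
    return np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
```

A study would feed per-patient scan and rescan measurements of the same biomarker into these functions and compare the resulting intervals between human and automated analyses.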
Steyerberg EW, Wiegers E, Sewalt C, et al., 2019, Case-mix, care pathways, and outcomes in patients with traumatic brain injury in CENTER-TBI: a European prospective, multicentre, longitudinal, cohort study., Lancet Neurol, Vol: 18, Pages: 923-934
BACKGROUND: The burden of traumatic brain injury (TBI) poses a large public health and societal problem, but the characteristics of patients and their care pathways in Europe are poorly understood. We aimed to characterise patient case-mix, care pathways, and outcomes of TBI. METHODS: CENTER-TBI is a Europe-based, observational cohort study, consisting of a core study and a registry. Inclusion criteria for the core study were a clinical diagnosis of TBI, presentation fewer than 24 h after injury, and an indication for CT. Patients were differentiated by care pathway and assigned to the emergency room (ER) stratum (patients who were discharged from an emergency room), admission stratum (patients who were admitted to a hospital ward), or intensive care unit (ICU) stratum (patients who were admitted to the ICU). Neuroimages and biospecimens were stored in repositories and outcome was assessed at 6 months after injury. We used the IMPACT core model for estimating the expected mortality and proportion with unfavourable Glasgow Outcome Scale Extended (GOSE) outcomes in patients with moderate or severe TBI (Glasgow Coma Scale [GCS] score ≤12). The core study was registered with ClinicalTrials.gov, number NCT02210221, and with Resource Identification Portal (RRID: SCR_015582). FINDINGS: Data from 4509 patients from 18 countries, collected between Dec 9, 2014, and Dec 17, 2017, were analysed in the core study and from 22 782 patients in the registry. In the core study, 848 (19%) patients were in the ER stratum, 1523 (34%) in the admission stratum, and 2138 (47%) in the ICU stratum. In the ICU stratum, 720 (36%) patients had mild TBI (GCS score 13-15). Compared with the core cohort, the registry had a higher proportion of patients in the ER (9839 [43%]) and admission (8571 [38%]) strata, with more than 95% of patients classified as having mild TBI. Patients in the core study were older than those in previous studies (median age 50 years [IQR 30-66], 1254 [28%] aged >65
Monteiro M, Kamnitsas K, Ferrante E, et al., 2019, TBI lesion segmentation in head CT: impact of preprocessing and data augmentation, MICCAI Brain Lesion Workshop, Publisher: Springer Verlag, ISSN: 0302-9743
Automatic segmentation of lesions in head CT provides key information for patient management, prognosis and disease monitoring. Despite its clinical importance, method development has mostly focused on multi-parametric MRI. Analysis of the brain in CT is challenging due to limited soft tissue contrast and its mono-modal nature. We study the under-explored problem of fine-grained CT segmentation of multiple lesion types (core, blood, oedema) in traumatic brain injury (TBI). We observe that preprocessing and data augmentation choices greatly impact the segmentation accuracy of a neural network, yet these factors are rarely thoroughly assessed in prior work. We design an empirical study that extensively evaluates the impact of different data preprocessing and augmentation methods. We show that these choices can have an impact of up to 18% DSC. We conclude that resampling to isotropic resolution yields improved performance, skull-stripping can be replaced by using the right intensity window, and affine-to-atlas registration is not necessary if we use sufficient spatial augmentation. Since both skull-stripping and affine-to-atlas registration are susceptible to failure, we recommend their alternatives to be used in practice. We believe this is the first work to report results for fine-grained multi-class segmentation of TBI in CT. Our findings may inform further research in this under-explored yet clinically important task of automatic head CT lesion segmentation.
Duan J, Bello G, Schlemper J, et al., 2019, Automatic 3D bi-ventricular segmentation of cardiac images by a shape-refined multi-task deep learning approach, IEEE Transactions on Medical Imaging, Vol: 38, Pages: 2151-2164, ISSN: 0278-0062
Deep learning approaches have achieved state-of-the-art performance in cardiac magnetic resonance (CMR) image segmentation. However, most approaches have focused on learning image intensity features for segmentation, whereas the incorporation of anatomical shape priors has received less attention. In this paper, we combine a multi-task deep learning approach with atlas propagation to develop a shape-constrained bi-ventricular segmentation pipeline for short-axis CMR volumetric images. The pipeline first employs a fully convolutional network (FCN) that learns segmentation and landmark localisation tasks simultaneously. The architecture of the proposed FCN uses a 2.5D representation, thus combining the computational advantage of 2D FCNs and the capability of addressing 3D spatial consistency without compromising segmentation accuracy. Moreover, the refinement step is designed to explicitly enforce a shape constraint and improve segmentation quality. This step is effective for overcoming image artefacts (e.g. due to different breath-hold positions and large slice thickness), which preclude the creation of anatomically meaningful 3D cardiac shapes. The proposed pipeline is fully automated, due to the network's ability to infer landmarks, which are then used downstream in the pipeline to initialise atlas propagation. We validate the pipeline on 1831 healthy subjects and 649 subjects with pulmonary hypertension. Extensive numerical experiments on the two datasets demonstrate that our proposed method is robust and capable of producing accurate, high-resolution and anatomically smooth bi-ventricular 3D models, despite the artefacts in input CMR volumes.
Cerrolaza JJ, Lopez Picazo M, Humbert L, et al., 2019, Computational anatomy for multi-organ analysis in medical imaging: A review, MEDICAL IMAGE ANALYSIS, Vol: 56, Pages: 44-67, ISSN: 1361-8415
Xavier IRR, Giraldi GA, Gibson SJ, et al., 2019, Age-related craniofacial differences based on spatio-temporal face image atlases, IET IMAGE PROCESSING, Vol: 13, Pages: 1561-1568, ISSN: 1751-9659
Biffi C, Cerrolaza JJ, Tarroni G, et al., 2019, 3D high-resolution cardiac segmentation reconstruction from 2D views using conditional variational autoencoders, 16th IEEE International Symposium on Biomedical Imaging (ISBI), Publisher: IEEE, Pages: 1643-1646, ISSN: 1945-7928
Accurate segmentation of heart structures imaged by cardiac MR is key for the quantitative analysis of pathology. High-resolution 3D MR sequences enable whole-heart structural imaging but are time-consuming, expensive to acquire and they often require long breath holds that are not suitable for patients. Consequently, multiplanar breath-hold 2D cine sequences are standard practice but are disadvantaged by lack of whole-heart coverage and low through-plane resolution. To address this, we propose a conditional variational autoencoder architecture able to learn a generative model of 3D high-resolution left ventricular (LV) segmentations which is conditioned on three 2D LV segmentations of one short-axis and two long-axis images. By only employing these three 2D segmentations, our model can efficiently reconstruct the 3D high-resolution LV segmentation of a subject. When evaluated on 400 unseen healthy volunteers, our model yielded an average Dice score of 87.92 ± 0.15 and outperformed competing architectures (TL-net, Dice score = 82.60 ± 0.23, p = 2.2 × 10⁻¹⁶).
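The Dice score used throughout these segmentation papers measures voxel overlap between a predicted and a reference binary mask. A minimal sketch (illustrative only, not any paper's actual evaluation code):

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice similarity coefficient between two binary masks:
    2|A ∩ B| / (|A| + |B|). Ranges from 0 (no overlap) to 1 (identical)."""
    pred = np.asarray(pred, bool)
    target = np.asarray(target, bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return 2.0 * intersection / (pred.sum() + target.sum() + eps)
```

Reported scores such as 87.92 above are this quantity expressed as a percentage, averaged over subjects.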
Bai W, Chen C, Tarroni G, et al., 2019, Self-supervised learning for cardiac MR image segmentation by anatomical position prediction, Publisher: arXiv
In recent years, convolutional neural networks have transformed the field of medical image analysis due to their capacity to learn discriminative image features for a variety of classification and regression tasks. However, successfully learning these features requires a large amount of manually annotated data, which is expensive to acquire and limited by the available resources of expert image analysts. Therefore, unsupervised, weakly-supervised and self-supervised feature learning techniques receive a lot of attention, which aim to utilise the vast amount of available data, while at the same time avoiding or substantially reducing the effort of manual annotation. In this paper, we propose a novel way of training a cardiac MR image segmentation network, in which features are learnt in a self-supervised manner by predicting anatomical positions. The anatomical positions serve as a supervisory signal and do not require extra manual annotation. We demonstrate that this seemingly simple task provides a strong signal for feature learning and with self-supervised learning, we achieve a high segmentation accuracy that is better than or comparable to a U-net trained from scratch, especially in a small data setting. When only five annotated subjects are available, the proposed method improves the mean Dice metric from 0.811 to 0.852 for short-axis image segmentation, compared to the baseline U-net.
Bhuva AN, Treibel TA, De Marvao A, et al., 2019, Sex and regional differences in myocardial plasticity in aortic stenosis are revealed by 3D model machine learning., Eur Heart J Cardiovasc Imaging
AIMS: Left ventricular hypertrophy (LVH) in aortic stenosis (AS) varies widely before and after aortic valve replacement (AVR), and deeper phenotyping beyond traditional global measures may improve risk stratification. We hypothesized that machine learning-derived 3D LV models may provide a more sensitive assessment of remodelling and sex-related differences in AS than conventional measurements. METHODS AND RESULTS: One hundred and sixteen patients with severe, symptomatic AS (54% male, 70 ± 10 years) underwent cardiovascular magnetic resonance pre-AVR and 1 year post-AVR. Computational analysis produced co-registered 3D models of wall thickness, which were compared with 40 propensity-matched healthy controls. Preoperative regional wall thickness and post-operative percentage wall thickness regression were analysed, stratified by sex. AS hypertrophy and regression post-AVR were non-uniform: greatest in the septum, with more pronounced changes in males than females (wall thickness regression: -13 ± 3.6 vs. -6 ± 1.9%, respectively, P < 0.05). Even patients without LVH (16% with normal indexed LV mass, 79% female) had greater septal and inferior wall thickness compared with controls (8.8 ± 1.6 vs. 6.6 ± 1.2 mm, P < 0.05), which regressed post-AVR. These differences were not detectable by global measures of remodelling. Changes to clinical parameters post-AVR were also greater in males: N-terminal pro-brain natriuretic peptide (NT-proBNP) [-37 (interquartile range -88 to -2) vs. -1 (-24 to 11) ng/L, P = 0.008], and systolic blood pressure (12.9 ± 23 vs. 2.1 ± 17 mmHg, P = 0.009), with changes in NT-proBNP correlating with percentage LV mass regression in males only (β = 0.32, P = 0.02). CONCLUSION: In patients with severe AS, inc
Oksuz I, Ruijsink B, Puyol-Anton E, et al., 2019, Automatic CNN-based detection of cardiac MR motion artefacts using k-space data augmentation and curriculum learning, MEDICAL IMAGE ANALYSIS, Vol: 55, Pages: 136-147, ISSN: 1361-8415
Tournier J-D, Christiaens D, Hutter J, et al., 2019, A data-driven approach to optimising the encoding for multi-shell diffusion MRI with application to neonatal imaging
Abstract: Diffusion MRI has the potential to provide important information about the connectivity and microstructure of the human brain during normal and abnormal development, non-invasively and in vivo. Recent developments in MRI hardware and reconstruction methods now permit the acquisition of large amounts of data within relatively short scan times. This makes it possible to acquire more informative multi-shell data, with diffusion-sensitisation applied along many directions over multiple b-value shells. Such schemes are characterised by the number of shells acquired, and the specific b-value and number of directions sampled for each shell. However, there is currently no clear consensus as to how to optimise these parameters. In this work, we propose a means of optimising multi-shell acquisition schemes by estimating the information content of the diffusion MRI signal, and optimising the acquisition parameters for sensitivity to the observed effects, in a manner agnostic to any particular diffusion analysis method that might subsequently be applied to the data. This method was used to design the acquisition scheme for the neonatal diffusion MRI sequence used in the developing Human Connectome Project, which aims to acquire high quality data and make it freely available to the research community. The final protocol selected by the algorithm, and currently in use within the dHCP, consists of b = 0, 400, 1000, 2600 s/mm² with 20, 64, 88 & 128 DW directions per shell respectively. Highlights: A data-driven method is presented to design multi-shell diffusion MRI acquisition schemes.
Bhuva A, Bai W, Lau C, et al., 2019, Fully automated left ventricular analysis matches clinician precision: a multi-centre, multi-vendor, multi-field strength, multi-disease scan:rescan CMR study, Publisher: OXFORD UNIV PRESS, Pages: 255-256, ISSN: 2047-2404
Attard M, Dawes T, Simoes Monteiro de Marvao A, et al., 2019, Metabolic pathways associated with right ventricular adaptation to pulmonary hypertension: Three dimensional analysis of cardiac magnetic resonance imaging, EHJ Cardiovascular Imaging / European Heart Journal - Cardiovascular Imaging, Vol: 20, Pages: 668-676, ISSN: 2047-2412
Aims: We sought to identify metabolic pathways associated with right ventricular (RV) adaptation to pulmonary hypertension (PH). We evaluated candidate metabolites, previously associated with survival in pulmonary arterial hypertension, and used automated image segmentation and parametric mapping to model their relationship to adverse patterns of remodelling and wall stress. Methods and results: In 312 PH subjects (47.1% female, mean age 60.8 ± 15.9 years), of which 182 (50.5% female, mean age 58.6 ± 16.8 years) had metabolomics, we modelled the relationship between the RV phenotype, haemodynamic state, and metabolite levels. Atlas-based segmentation and co-registration of cardiac magnetic resonance imaging was used to create a quantitative 3D model of RV geometry and function, including maps of regional wall stress. Increasing mean pulmonary artery pressure was associated with hypertrophy of the basal free wall (β = 0.29) and reduced relative wall thickness (β = −0.38), indicative of eccentric remodelling. Wall stress was an independent predictor of all-cause mortality (hazard ratio = 1.27, P = 0.04). Six metabolites were significantly associated with elevated wall stress (β = 0.28–0.34), including increased levels of tRNA-specific modified nucleosides and fatty acid acylcarnitines, and decreased levels (β = −0.40) of sulfated androgen. Conclusion: Using computational image phenotyping, we identify metabolic profiles, reporting on energy metabolism and cellular stress-response, which are associated with adaptive RV mechanisms to PH.
Lavdas I, Glocker B, Rueckert D, et al., 2019, Machine learning in whole-body MRI: experiences and challenges from an applied study using multicentre data, CLINICAL RADIOLOGY, Vol: 74, Pages: 346-356, ISSN: 0009-9260
Howard J, Fisher L, Shun-Shin M, et al., 2019, Cardiac rhythm device identification using neural networks, JACC: Clinical Electrophysiology, Vol: 5, Pages: 576-586, ISSN: 2405-5018
Background: Medical staff often need to determine the model of a pacemaker or defibrillator (cardiac rhythm devices) quickly and accurately. Current approaches involve comparing a device's X-ray appearance with a manual flow chart. We aimed to see whether a neural network could be trained to perform this task more accurately. Methods and Results: We extracted X-ray images of 1676 devices, comprising 45 models from 5 manufacturers. We developed a convolutional neural network to classify the images, using a training set of 1451 images. The testing set was a further 225 images, consisting of 5 examples of each model. We compared the network's ability to identify the manufacturer of a device with those of cardiologists using a published flow chart. The neural network was 99.6% (95% CI 97.5 to 100) accurate in identifying the manufacturer of a device from an X-ray, and 96.4% (95% CI 93.1 to 98.5) accurate in identifying the model group. Amongst 5 cardiologists using the flow chart, median manufacturer accuracy was 72.0% (range 62.2% to 88.9%), and model group identification was not possible. The network was significantly superior to all of the cardiologists in identifying the manufacturer (p < 0.0001 against the median human; p < 0.0001 against the best human). Conclusions: A neural network can accurately identify the manufacturer and even model group of a cardiac rhythm device from an X-ray, and exceeds human performance. This system may speed up the diagnosis and treatment of patients with cardiac rhythm devices, and it is publicly accessible online.
Tarroni G, Oktay O, Bai W, et al., 2019, Learning-based quality control for cardiac MR images, IEEE Transactions on Medical Imaging, Vol: 38, Pages: 1127-1138, ISSN: 0278-0062
The effectiveness of a cardiovascular magnetic resonance (CMR) scan depends on the ability of the operator to correctly tune the acquisition parameters to the subject being scanned and on the potential occurrence of imaging artefacts such as cardiac and respiratory motion. In clinical practice, a quality control step is performed by visual assessment of the acquired images: however, this procedure is strongly operator-dependent, cumbersome and sometimes incompatible with the time constraints of clinical settings and large-scale studies. We propose a fast, fully-automated, learning-based quality control pipeline for CMR images, specifically for short-axis image stacks. Our pipeline performs three important quality checks: 1) heart coverage estimation, 2) inter-slice motion detection, 3) image contrast estimation in the cardiac region. The pipeline uses a hybrid decision forest method, integrating both regression and structured classification models, to extract landmarks as well as probabilistic segmentation maps from both long- and short-axis images as a basis to perform the quality checks. The technique was tested on up to 3000 cases from the UK Biobank as well as on 100 cases from the UK Digital Heart Project, and validated against manual annotations and visual inspections performed by expert interpreters. The results show the capability of the proposed pipeline to correctly detect incomplete or corrupted scans (e.g. on UK Biobank, sensitivity and specificity respectively 88% and 99% for heart coverage estimation, 85% and 95% for motion detection), allowing their exclusion from the analysed dataset or the triggering of a new acquisition.
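The sensitivity and specificity figures quoted for these quality checks follow the standard definitions over binary accept/reject decisions; a minimal sketch (hypothetical function name, not the paper's code):

```python
def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (true-positive rate) and specificity (true-negative
    rate) from paired binary labels, e.g. 1 = corrupted scan flagged."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    tn = sum(1 for t, p in zip(y_true, y_pred) if not t and not p)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t and not p)
    fp = sum(1 for t, p in zip(y_true, y_pred) if not t and p)
    return tp / (tp + fn), tn / (tn + fp)
```

For a QC detector, y_true would hold the expert verdicts (corrupted or not) and y_pred the pipeline's flags over the same scans.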
Cox DJ, Bai W, Price AN, et al., 2019, Ventricular remodeling in preterm infants: computational cardiac magnetic resonance atlasing shows significant early remodeling of the left ventricle, PEDIATRIC RESEARCH, Vol: 85, Pages: 807-815, ISSN: 0031-3998
Meng Q, Zimmer V, Hou B, et al., 2019, Weakly supervised estimation of shadow confidence maps in fetal ultrasound imaging, IEEE Transactions on Medical Imaging, ISSN: 0278-0062
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions. Our method is able to generate a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. Additionally, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This network is able to predict shadow confidence maps directly from input images during inference. We use evaluation metrics such as Dice and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation, and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
Schlemper J, Oktay O, Schaap M, et al., 2019, Attention gated networks: Learning to leverage salient regions in medical images, MEDICAL IMAGE ANALYSIS, Vol: 53, Pages: 197-207, ISSN: 1361-8415
Alansary A, Oktay O, Li Y, et al., 2019, Evaluating reinforcement learning agents for anatomical landmark detection, Medical Image Analysis, Vol: 53, Pages: 156-164, ISSN: 1361-8415
Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with a large field of view and noisy background, such as in cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the search process by a factor of 4–5.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.