Popescu SG, Whittington A, Gunn RN, et al., 2020, Nonlinear biomarker interactions in conversion from mild cognitive impairment to Alzheimer's disease, HUMAN BRAIN MAPPING, ISSN: 1065-9471
Matzkin F, Newcombe V, Stevenson S, et al., 2020, Self-supervised skull reconstruction in brain CT images with decompressive craniectomy, Publisher: arXiv
Decompressive craniectomy (DC) is a common surgical procedure consisting of the removal of a portion of the skull that is performed after incidents such as stroke, traumatic brain injury (TBI) or other events that could result in acute subdural hemorrhage and/or increasing intracranial pressure. In these cases, CT scans are obtained to diagnose and assess injuries, or guide a certain therapy and intervention. We propose a deep learning based method to reconstruct the skull defect removed during DC performed after TBI from post-operative CT images. This reconstruction is useful in multiple scenarios, e.g. to support the creation of cranioplasty plates, accurate measurements of bone flap volume and total intracranial volume, important for studies that aim to relate later atrophy to patient outcome. We propose and compare alternative self-supervised methods where an encoder-decoder convolutional neural network (CNN) estimates the missing bone flap on post-operative CTs. The self-supervised learning strategy only requires images with complete skulls and avoids the need for annotated DC images. For evaluation, we employ real and simulated images with DC, comparing the results with other state-of-the-art approaches. The experiments show that the proposed model outperforms current manual methods, enabling reconstruction even in highly challenging cases where big skull defects have been removed during surgery.
Robinson R, Dou Q, Castro DC, et al., 2020, Image-level harmonization of multi-site data using image-and-spatial transformer networks, 23rd International Conference on Medical Image Computing and Computer Assisted Intervention
We investigate the use of image-and-spatial transformer networks (ISTNs) to tackle domain shift in multi-site medical imaging data. Commonly, domain adaptation (DA) is performed with little regard for explainability of the inter-domain transformation and is often conducted at the feature-level in the latent space. We employ ISTNs for DA at the image-level, which constrains transformations to explainable appearance and shape changes. As proof-of-concept we demonstrate that ISTNs can be trained adversarially on a classification problem with simulated 2D data. For real-data validation, we construct two 3D brain MRI datasets from the Cam-CAN and UK Biobank studies to investigate domain shift due to acquisition and population differences. We show that age regression and sex classification models trained on ISTN output improve generalization when training on data from one site and testing on the other.
Dou Q, Liu Q, Heng PA, et al., 2020, Unpaired multi-modal segmentation via knowledge distillation, IEEE Transactions on Medical Imaging, Vol: 39, Pages: 2415-2425, ISSN: 0278-0062
Multi-modal learning is typically performed with network architectures containing modality-specific layers and shared layers, utilizing co-registered images of different modalities. We propose a novel learning scheme for unpaired cross-modality image segmentation, with a highly compact architecture achieving superior segmentation accuracy. In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI, and only employ modality-specific internal normalization layers which compute respective statistics. To effectively train such a highly compact model, we introduce a novel loss term inspired by knowledge distillation, by explicitly constraining the KL-divergence of our derived prediction distributions between modalities. We have extensively validated our approach on two multi-class segmentation problems: i) cardiac structure segmentation, and ii) abdominal organ segmentation. Different network settings, i.e., 2D dilated network and 3D U-net, are utilized to investigate our method's general efficacy. Experimental results on both tasks demonstrate that our novel multi-modal learning scheme consistently outperforms single-modal training and previous multi-modal approaches.
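The cross-modality constraint described in the abstract above can be illustrated with a small sketch: a symmetrised KL-divergence between the class distributions predicted from two modality branches. This is a minimal NumPy illustration of the loss term, not the authors' implementation; the toy logits are invented.

```python
import numpy as np

def softmax(logits, axis=-1):
    """Numerically stable softmax over the class axis."""
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def symmetric_kl(p, q, eps=1e-8):
    """Symmetrised KL-divergence between two class distributions,
    averaged over samples (pixels)."""
    p, q = p + eps, q + eps
    kl_pq = np.sum(p * np.log(p / q), axis=-1)
    kl_qp = np.sum(q * np.log(q / p), axis=-1)
    return float(0.5 * (kl_pq + kl_qp).mean())

# Toy logits for the same anatomy predicted by the CT and MRI branches
ct_logits = np.array([[2.0, 0.5, -1.0]])
mr_logits = np.array([[1.8, 0.7, -0.9]])

loss = symmetric_kl(softmax(ct_logits), softmax(mr_logits))
```

Minimising such a term pushes the two modality-specific prediction distributions towards each other, which is the distillation-style coupling the paper exploits.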
Coelho De Castro D, Walker I, Glocker B, 2020, Causality matters in medical imaging, Nature Communications, ISSN: 2041-1723
Causal reasoning can shed new light on the major challenges in machine learning for medical imaging: scarcity of high-quality annotated data and mismatch between the development dataset and the target environment. A causal perspective on these issues allows decisions about data collection, annotation, preprocessing, and learning strategies to be made and scrutinized more transparently, while providing a detailed categorisation of potential biases and mitigation techniques. Along with worked clinical examples, we highlight the importance of establishing the causal relationship between images and their annotations, and offer step-by-step recommendations for future studies.
Larrazabal AJ, Martínez C, Glocker B, et al., 2020, Post-DAE: anatomically plausible segmentation via post-processing with denoising autoencoders, IEEE Transactions on Medical Imaging, ISSN: 0278-0062
We introduce Post-DAE, a post-processing method based on denoising autoencoders (DAE) to improve the anatomical plausibility of arbitrary biomedical image segmentation algorithms. Some of the most popular segmentation methods (e.g. based on convolutional neural networks or random forest classifiers) incorporate additional post-processing steps to ensure that the resulting masks fulfill expected connectivity constraints. These methods operate under the hypothesis that contiguous pixels with similar aspect should belong to the same class. Even if valid in general, this assumption does not consider more complex priors like topological restrictions or convexity, which cannot be easily incorporated into these methods. Post-DAE leverages the latest developments in manifold learning via denoising autoencoders. First, we learn a compact and non-linear embedding that represents the space of anatomically plausible segmentations. Then, given a segmentation mask obtained with an arbitrary method, we reconstruct its anatomically plausible version by projecting it onto the learnt manifold. The proposed method is trained using unpaired segmentation masks, which makes it independent of intensity information and image modality. We performed experiments in binary and multi-label segmentation of chest X-ray and cardiac magnetic resonance images. We show how erroneous and noisy segmentation masks can be improved using Post-DAE. With almost no additional computation cost, our method brings erroneous segmentations back to a feasible space.
Folgoc LL, Baltatzis V, Alansary A, et al., 2020, Bayesian sampling bias correction: training with the right loss function, Publisher: arXiv
We derive a family of loss functions to train models in the presence of sampling bias. Examples are when the prevalence of a pathology differs from its sampling rate in the training dataset, or when a machine learning practitioner rebalances their training dataset. Sampling bias causes large discrepancies between model performance in the lab and in more realistic settings. It is omnipresent in medical imaging applications, yet is often overlooked at training time or addressed on an ad-hoc basis. Our approach is based on Bayesian risk minimization. For arbitrary likelihood models we derive the associated bias-corrected loss for training, exhibiting a direct connection to information gain. The approach integrates seamlessly in the current paradigm of (deep) learning using stochastic backpropagation and naturally with Bayesian models. We illustrate the methodology on case studies of lung nodule malignancy grading.
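One standard way to correct the kind of sampling bias described above is to reweight the per-sample loss by the ratio of deployment-time class prevalence to the prevalence in the (rebalanced) training sample. The sketch below shows this importance-weighting view in NumPy; it is an assumption-laden illustration of the problem setting, not the paper's Bayesian derivation, and the prevalence numbers are invented.

```python
import numpy as np

def bias_corrected_nll(probs, labels, true_prev, sample_prev, eps=1e-8):
    """Cross-entropy reweighted by true_prev[class] / sample_prev[class],
    so that training on a rebalanced dataset still targets the
    deployment-time risk. A sketch of the bias-correction idea."""
    w = np.array([true_prev[int(y)] / sample_prev[int(y)] for y in labels])
    nll = -np.log(probs[np.arange(len(labels)), labels] + eps)
    return float(np.mean(w * nll))

# Toy example: malignancy prevalence 5% in the clinic, 50% after rebalancing
probs = np.array([[0.9, 0.1],
                  [0.2, 0.8]])          # model's class probabilities
labels = np.array([0, 1])               # true classes
loss = bias_corrected_nll(probs, labels,
                          true_prev={0: 0.95, 1: 0.05},
                          sample_prev={0: 0.5, 1: 0.5})
```

Here errors on the (common in deployment, rare after rebalancing) benign class are up-weighted, and errors on the oversampled malignant class are down-weighted.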
Pawlowski N, Castro DC, Glocker B, 2020, Deep structural causal models for tractable counterfactual inference, Publisher: arXiv
We formulate a general framework for building structural causal models (SCMs) with deep learning components. The proposed approach employs normalising flows and variational inference to enable tractable inference of exogenous noise variables - a crucial step for counterfactual inference that is missing from existing deep causal learning methods. Our framework is validated on a synthetic dataset built on MNIST as well as on a real-world medical dataset of brain MRI scans. Our experimental results indicate that we can successfully train deep SCMs that are capable of all three levels of Pearl's ladder of causation: association, intervention, and counterfactuals, giving rise to a powerful new approach for answering causal questions in imaging applications and beyond. The code for all our experiments is available at https://github.com/biomedia-mira/deepscm.
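The counterfactual procedure the abstract refers to follows Pearl's three steps: abduction (recover the exogenous noise from the observation), action (intervene on a variable), and prediction (re-run the mechanisms). A minimal sketch on a toy linear SCM, with invertible mechanisms standing in for the paper's normalising flows:

```python
# Toy linear SCM:  x = u_x;  y = 2*x + u_y.
# Counterfactual query: given an observation (x, y), what would y have
# been had x been set to x_new?

def abduct(x, y):
    """Abduction: invert the mechanisms to recover exogenous noise."""
    u_x = x
    u_y = y - 2 * x
    return u_x, u_y

def counterfactual_y(x_obs, y_obs, x_new):
    _, u_y = abduct(x_obs, y_obs)   # abduction
    return 2 * x_new + u_y          # action do(x = x_new) + prediction

# Observed (x=1, y=2.5) implies u_y = 0.5; under do(x=3), y would be 6.5.
y_cf = counterfactual_y(1.0, 2.5, 3.0)
```

The deep SCM framework replaces these closed-form inversions with normalising flows and variational inference so that abduction remains tractable for high-dimensional variables such as images.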
Monteiro M, Folgoc LL, Castro DCD, et al., 2020, Stochastic segmentation networks: modelling spatially correlated aleatoric uncertainty, Publisher: arXiv
In image segmentation, there is often more than one plausible solution for a given input. In medical imaging, for example, experts will often disagree about the exact location of object boundaries. Estimating this inherent uncertainty and predicting multiple plausible hypotheses is of great interest in many applications, yet this ability is lacking in most current deep learning methods. In this paper, we introduce stochastic segmentation networks (SSNs), an efficient probabilistic method for modelling aleatoric uncertainty with any image segmentation network architecture. In contrast to approaches that produce pixel-wise estimates, SSNs model joint distributions over entire label maps and thus can generate multiple spatially coherent hypotheses for a single image. By using a low-rank multivariate normal distribution over the logit space to model the probability of the label map given the image, we obtain a spatially consistent probability distribution that can be efficiently computed by a neural network without any changes to the underlying architecture. We tested our method on the segmentation of real-world medical data, including lung nodules in 2D CT and brain tumours in 3D multimodal MRI scans. SSNs outperform state-of-the-art for modelling correlated uncertainty in ambiguous images while being much simpler, more flexible, and more efficient.
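The low-rank multivariate normal over logit space mentioned above can be sketched as logits = mu + P z + d * eps, where the shared low-rank factors z induce spatial correlation across pixels. The NumPy toy below (invented sizes, random parameters standing in for network outputs) shows how sampling from it yields multiple coherent label maps:

```python
import numpy as np

rng = np.random.default_rng(0)

# Low-rank MVN over flattened logit maps: logits = mu + P @ z + d * eps,
# with P of rank r << n_pixels, so samples are spatially correlated
# through the shared factors z.
n_pix, n_cls, rank = 16, 3, 2
mu = rng.normal(size=n_pix * n_cls)                # mean logits
P = 0.5 * rng.normal(size=(n_pix * n_cls, rank))   # covariance factors
d = 0.1 * np.ones(n_pix * n_cls)                   # independent diagonal part

def sample_label_map():
    z = rng.normal(size=rank)                      # shared latent -> correlation
    eps = rng.normal(size=n_pix * n_cls)
    logits = mu + P @ z + d * eps
    return logits.reshape(n_pix, n_cls).argmax(axis=1)

hypotheses = [sample_label_map() for _ in range(4)]  # multiple coherent maps
```

In the actual method mu, P and d are predicted by the segmentation network, so no architectural change is needed; only the output head grows.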
Mathieu F, Güting H, Gravesteijn B, et al., 2020, Impact of Antithrombotic Agents on Radiological Lesion Progression in Acute Traumatic Brain Injury: A CENTER-TBI Propensity-Matched Cohort Analysis., J Neurotrauma
An increasing number of elderly patients are being affected by traumatic brain injury (TBI) and a significant proportion are on pre-hospital antithrombotic therapy for cardio- or cerebrovascular indications. We have quantified the impact of antiplatelet/anticoagulant (APAC) agents on radiological lesion progression in acute TBI, using a novel, semi-automated approach to volumetric lesion measurement, and explored the impact of use on clinical outcomes in the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) study. We used a 1:1 propensity-matched cohort design, matching controls to APAC users based on demographics, baseline clinical status, pre-injury comorbidities, and injury severity. Subjects were selected from a pool of patients enrolled in CENTER-TBI with computed tomography (CT) scan at admission and repeated within 7 days of injury. We calculated absolute changes in volume of intraparenchymal, extra-axial, intraventricular, and total intracranial hemorrhage (ICH) between scans, and compared volume of hemorrhagic progression, proportion of patients with significant degree of progression (>25% of initial volume), proportion with new ICH on follow-up CT, as well as clinical course and outcomes. A total of 316 patients were included (158 APAC users; 158 controls). The mean volume of progression was significantly higher in the APAC group for extra-axial (3.1 vs. 1.3 mL, p = 0.01), but not intraparenchymal (3.8 vs. 4.6 mL, p = 0.65), intraventricular (0.2 vs. 0.0 mL, p = 0.79), or total intracranial hemorrhage (ICH; 7.0 vs. 6.0 mL, p = 0.08). More patients had significant hemorrhage growth (54.1 vs. 37.0%, p = 0.003) and delayed ICH (4 of 18 vs. none; p = 0.04) in the APAC group compared with controls, but this was not associated with differences in length of stay (LOS), rates of neurosurgical intervention
Wang S, Tarroni G, Qin C, et al., 2020, Deep generative model-based quality control for cardiac MRI segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI)
In recent years, convolutional neural networks have demonstrated promising performance in a variety of medical image segmentation tasks. However, when a trained segmentation model is deployed into the real clinical world, the model may not perform optimally. A major challenge is the potential poor-quality segmentations generated due to degraded image quality or domain shift issues. There is a timely need to develop an automated quality control method that can detect poor segmentations and feedback to clinicians. Here we propose a novel deep generative model-based framework for quality control of cardiac MRI segmentation. It first learns a manifold of good-quality image-segmentation pairs using a generative model. The quality of a given test segmentation is then assessed by evaluating the difference from its projection onto the good-quality manifold. In particular, the projection is refined through iterative search in the latent space. The proposed method achieves high prediction accuracy on two publicly available cardiac MRI datasets. Moreover, it shows better generalisation ability than traditional regression-based methods. Our approach provides a real-time and model-agnostic quality control for cardiac MRI segmentation, which has the potential to be integrated into clinical image analysis workflows.
Monteiro M, Newcombe VFJ, Mathieu F, et al., 2020, Multi-class semantic segmentation and quantification of traumatic brain injury lesions on head CT using deep learning – an algorithm development and multi-centre validation study, The Lancet. Digital Health, Vol: 2, Pages: e314-e322, ISSN: 2589-7500
Background: CT is the most common imaging modality in traumatic brain injury (TBI). However, its conventional use requires expert clinical interpretation and does not provide detailed quantitative outputs, which may have prognostic importance. We aimed to use deep learning to reliably and efficiently quantify and detect different lesion types. Methods: Patients were recruited between Dec 9, 2014, and Dec 17, 2017, in 60 centres across Europe. We trained and validated an initial convolutional neural network (CNN) on expert manual segmentations (dataset 1). This CNN was used to automatically segment a new dataset of scans, which we then corrected manually (dataset 2). From this dataset, we used a subset of scans to train a final CNN for multiclass, voxel-wise segmentation of lesion types. The performance of this CNN was evaluated on a test subset. Performance was measured for lesion volume quantification, lesion progression, and lesion detection and lesion volume classification. For lesion detection, external validation was done on an independent set of 500 patients from India. Findings: 98 scans from one centre were included in dataset 1. Dataset 2 comprised 839 scans from 38 centres: 184 scans were used in the training subset and 655 in the test subset. Compared with manual reference, CNN-derived lesion volumes showed a mean difference of 0·86 mL (95% CI –5·23 to 6·94) for intraparenchymal haemorrhage, 1·83 mL (–12·01 to 15·66) for extra-axial haemorrhage, 2·09 mL (–9·38 to 13·56) for perilesional oedema, and 0·07 mL (–1·00 to 1·13) for intraventricular haemorrhage. Interpretation: We show the ability of a CNN to separately segment, quantify, and detect multiclass haemorrhagic lesions and perilesional oedema. These volumetric lesion estimates allow clinically relevant quantification of lesion burden and progression, with potential applications for personalised treatment strategies.
van Wijk RPJ, van Dijck JTJM, Timmers M, et al., 2020, Informed consent procedures in patients with an acute inability to provide informed consent: Policy and practice in the CENTER-TBI study., J Crit Care, Vol: 59, Pages: 6-15
PURPOSE: Enrolling traumatic brain injury (TBI) patients with an inability to provide informed consent in research is challenging. Alternatives to patient consent are not sufficiently embedded in European and national legislation, which allows procedural variation and bias. We aimed to quantify variations in informed consent policy and practice. METHODS: Variation was explored in the CENTER-TBI study. Policies were reported by using a questionnaire and national legislation. Data on used informed consent procedures were available for 4498 patients from 57 centres across 17 European countries. RESULTS: Variation in the use of informed consent procedures was found between and within EU member states. Proxy informed consent (N = 1377;64%) was the most frequently used type of consent in the ICU, followed by patient informed consent (N = 426;20%) and deferred consent (N = 334;16%). Deferred consent was only actively used in 15 centres (26%), although it was considered valid in 47 centres (82%). CONCLUSIONS: Alternatives to patient consent are essential for TBI research. While there seems to be concordance amongst national legislations, there is regional variability in institutional practices with respect to the use of different informed consent procedures. Variation could be caused by several reasons, including inconsistencies in clear legislation or knowledge of such legislation amongst researchers.
Monteiro M, Kamnitsas K, Ferrante E, et al., 2019, TBI lesion segmentation in head CT: impact of preprocessing and data augmentation, MICCAI Brain Lesion Workshop, Publisher: Springer Verlag, ISSN: 0302-9743
Automatic segmentation of lesions in head CT provides key information for patient management, prognosis and disease monitoring. Despite its clinical importance, method development has mostly focused on multi-parametric MRI. Analysis of the brain in CT is challenging due to limited soft tissue contrast and its mono-modal nature. We study the under-explored problem of fine-grained CT segmentation of multiple lesion types (core, blood, oedema) in traumatic brain injury (TBI). We observe that preprocessing and data augmentation choices greatly impact the segmentation accuracy of a neural network, yet these factors are rarely thoroughly assessed in prior work. We design an empirical study that extensively evaluates the impact of different data preprocessing and augmentation methods. We show that these choices can have an impact of up to 18% DSC. We conclude that resampling to isotropic resolution yields improved performance, skull-stripping can be replaced by using the right intensity window, and affine-to-atlas registration is not necessary if we use sufficient spatial augmentation. Since both skull-stripping and affine-to-atlas registration are susceptible to failure, we recommend their alternatives to be used in practice. We believe this is the first work to report results for fine-grained multi-class segmentation of TBI in CT. Our findings may inform further research in this under-explored yet clinically important task of automatic head CT lesion segmentation.
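The intensity-window preprocessing recommended above as an alternative to skull-stripping amounts to clipping Hounsfield units to a brain window and rescaling. A minimal sketch, assuming a common brain window (centre 40 HU, width 80 HU) rather than the exact values used in the paper:

```python
import numpy as np

def window_ct(hu, center=40.0, width=80.0):
    """Clip Hounsfield units to a brain window and rescale to [0, 1].
    Bone (very high HU) saturates to 1 and air to 0, suppressing the
    skull without an explicit, failure-prone skull-stripping step."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

volume = np.array([-1000.0, 0.0, 40.0, 80.0, 2000.0])  # toy HU values
windowed = window_ct(volume)
```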
Serruys PW, Chichareon P, Modolo R, et al., 2020, The SYNTAX score on its way out or ... towards artificial intelligence: part II, EUROINTERVENTION, Vol: 16, Pages: 60-75, ISSN: 1774-024X
Serruys PW, Chichareon P, Modolo R, et al., 2020, The SYNTAX score on its way out or ... towards artificial intelligence: part I, EUROINTERVENTION, Vol: 16, Pages: 44-59, ISSN: 1774-024X
Zeiler FA, Mathieu F, Monteiro M, et al., 2020, Diffuse Intracranial Injury Patterns Are Associated with Impaired Cerebrovascular Reactivity in Adult Traumatic Brain Injury: A CENTER-TBI Validation Study, JOURNAL OF NEUROTRAUMA, Vol: 37, Pages: 1597-1608, ISSN: 0897-7151
Tarroni G, Bai W, Oktay O, et al., 2020, Large-scale quality control of cardiac imaging in population studies: application to UK Biobank, Scientific Reports, Vol: 10, ISSN: 2045-2322
In large population studies such as the UK Biobank (UKBB), quality control of the acquired images by visual assessment is unfeasible. In this paper, we apply a recently developed fully-automated quality control pipeline for cardiac MR (CMR) images to the first 19,265 short-axis (SA) cine stacks from the UKBB. We present the results for the three estimated quality metrics (heart coverage, inter-slice motion and image contrast in the cardiac region) as well as their potential associations with factors including acquisition details and subject-related phenotypes. Up to 14.2% of the analysed SA stacks had sub-optimal coverage (i.e. missing basal and/or apical slices), however most of them were limited to the first year of acquisition. Up to 16% of the stacks were affected by noticeable inter-slice motion (i.e. average inter-slice misalignment greater than 3.4 mm). Inter-slice motion was positively correlated with weight and body surface area. Only 2.1% of the stacks had an average end-diastolic cardiac image contrast below 30% of the dynamic range. These findings will be highly valuable for both the scientists involved in UKBB CMR acquisition and for the ones who use the dataset for research purposes.
Mathieu F, Zeiler FA, Ercole A, et al., 2020, Relationship between measures of cerebrovascular reactivity and intracranial lesion progression in acute TBI patients: a CENTER-TBI study, Journal of Neurotrauma, ISSN: 0897-7151
Jimenez-Pastor A, Alberich-Bayarri A, Fos-Guarinos B, et al., 2020, Automated vertebrae localization and identification by decision forests and image-based refinement on real-world CT data, RADIOLOGIA MEDICA, Vol: 125, Pages: 48-56, ISSN: 0033-8362
Dou Q, Coelho De Castro D, Kamnitsas K, et al., 2019, Domain generalization via model-agnostic learning of semantic features, Neural Information Processing Systems (NeurIPS), Publisher: Neural Information Processing Systems Foundation, Inc., ISSN: 1049-5258
Generalization capability to unseen domains is crucial for machine learning models when deploying to real-world conditions. We investigate the challenging problem of domain generalization, i.e., training a model on multi-domain source data such that it can directly generalize to target domains with unknown statistics. We adopt a model-agnostic learning paradigm with gradient-based meta-train and meta-test procedures to expose the optimization to domain shift. Further, we introduce two complementary losses which explicitly regularize the semantic structure of the feature space. Globally, we align a derived soft confusion matrix to preserve general knowledge about inter-class relationships. Locally, we promote domain-independent class-specific cohesion and separation of sample features with a metric-learning component. The effectiveness of our method is demonstrated with new state-of-the-art results on two common object recognition benchmarks. Our method also shows consistent improvement on a medical image segmentation task.
Lee M, Petersen K, Pawlowski N, et al., 2019, TeTrIS: template transformer networks for image segmentation with shape priors, IEEE Transactions on Medical Imaging, Vol: 38, Pages: 2596-2606, ISSN: 0278-0062
In this paper we introduce and compare different approaches for incorporating shape prior information into neural network based image segmentation. Specifically, we introduce the concept of template transformer networks where a shape template is deformed to match the underlying structure of interest through an end-to-end trained spatial transformer network. This has the advantage of explicitly enforcing shape priors and is free of discretisation artefacts by providing a soft partial volume segmentation. We also introduce a simple yet effective way of incorporating priors in state-of-the-art pixel-wise binary classification methods such as fully convolutional networks and U-net. Here, the template shape is given as an additional input channel, incorporating this information significantly reduces false positives. We report results on synthetic data and sub-voxel segmentation of coronary lumen structures in cardiac computed tomography showing the benefit of incorporating priors in neural network based image segmentation.
Lee M, Oktay O, Schuh A, et al., 2019, Image-and-spatial transformer networks for structure-guided image registration, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, ISSN: 0302-9743
Image registration with deep neural networks has become an active field of research and an exciting avenue for a long-standing problem in medical imaging. The goal is to learn a complex function that maps the appearance of input image pairs to parameters of a spatial transformation in order to align corresponding anatomical structures. We argue and show that the current direct, non-iterative approaches are sub-optimal, in particular if we seek accurate alignment of Structures-of-Interest (SoI). Information about SoI is often available at training time, for example, in form of segmentations or landmarks. We introduce a novel, generic framework, Image-and-Spatial Transformer Networks (ISTNs), to leverage SoI information allowing us to learn new image representations that are optimised for the downstream registration task. Thanks to these representations we can employ a test-specific, iterative refinement over the transformation parameters which yields highly accurate registration even with very limited training data. Performance is demonstrated on pairwise 3D brain registration and illustrative synthetic data.
Li Z, Kamnitsas K, Glocker B, 2019, Overfitting of neural nets under class imbalance: Analysis and improvements for segmentation, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, ISSN: 0302-9743
Overfitting in deep learning has been the focus of a number of recent works, yet its exact impact on the behaviour of neural networks is not well understood. This study analyzes overfitting by examining how the distribution of logits alters in relation to how much the model overfits. Specifically, we find that when training with few data samples, the distribution of logit activations when processing unseen test samples of an under-represented class tends to shift towards and even across the decision boundary, while the over-represented class seems unaffected. In image segmentation, foreground samples are often heavily under-represented. We observe that sensitivity of the model drops as a result of overfitting, while precision remains mostly stable. Based on our analysis, we derive asymmetric modifications of existing loss functions and regularizers including a large margin loss, focal loss, adversarial training and mixup, which specifically aim at reducing the shift observed when embedding unseen samples of the under-represented class. We study the case of binary segmentation of brain tumor core and show that our proposed simple modifications lead to significantly improved segmentation performance over the symmetric variants.
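To make the "asymmetric modification" idea concrete, one reading of the asymmetric focal loss is to apply the focal down-weighting term only to the over-represented background class, so easy background pixels are suppressed while the rare foreground class keeps its full cross-entropy gradient. The NumPy sketch below illustrates that idea under those assumptions; it is not the paper's exact formulation, and the toy predictions are invented.

```python
import numpy as np

def asymmetric_focal_loss(p_fg, y, gamma=2.0, eps=1e-8):
    """Binary focal-style loss: the focal term is applied only to the
    background (over-represented) class; the rare foreground class
    keeps plain cross-entropy so its gradients are not suppressed."""
    p_fg = np.clip(p_fg, eps, 1 - eps)
    fg = y * -np.log(p_fg)                            # full CE for rare class
    bg = (1 - y) * p_fg ** gamma * -np.log(1 - p_fg)  # focused background term
    return float(np.mean(fg + bg))

# Toy predictions: two foreground pixels, two background pixels
p = np.array([0.9, 0.6, 0.2, 0.05])
y = np.array([1, 1, 0, 0])
loss = asymmetric_focal_loss(p, y)
```

With gamma = 0 the background term reduces to standard cross-entropy, so the symmetric loss is recovered as a special case.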
Castro DC, Tan J, Kainz B, et al., 2019, Morpho-MNIST: quantitative assessment and diagnostics for representation learning, Journal of Machine Learning Research, Vol: 20, Pages: 1-29, ISSN: 1532-4435
Revealing latent structure in data is an active field of research, having introduced exciting technologies such as variational autoencoders and adversarial networks, and is essential to push machine learning towards unsupervised knowledge discovery. However, a major challenge is the lack of suitable benchmarks for an objective and quantitative evaluation of learned representations. To address this issue we introduce Morpho-MNIST, a framework that aims to answer: "to what extent has my model learned to represent specific factors of variation in the data?" We extend the popular MNIST dataset by adding a morphometric analysis enabling quantitative comparison of trained models, identification of the roles of latent variables, and characterisation of sample diversity. We further propose a set of quantifiable perturbations to assess the performance of unsupervised and supervised methods on challenging tasks such as outlier detection and domain adaptation. Data and code are available at https://github.com/dccastro/Morpho-MNIST.
Glocker B, Robinson R, Castro DC, et al., 2019, Machine learning with multi-site imaging data: an empirical study on the impact of scanner effects, Medical Imaging meets NeurIPS
This is an empirical study to investigate the impact of scanner effects when using machine learning on multi-site neuroimaging data. We utilize structural T1-weighted brain MRI obtained from two different studies, Cam-CAN and UK Biobank. For the purpose of our investigation, we construct a dataset consisting of brain scans from 592 age- and sex-matched individuals, 296 subjects from each original study. Our results demonstrate that even after careful pre-processing with state-of-the-art neuroimaging pipelines a classifier can easily distinguish between the origin of the data with very high accuracy. Our analysis on the example application of sex classification suggests that current approaches to harmonize data are unable to remove scanner-specific bias leading to overly optimistic performance estimates and poor generalization. We conclude that multi-site data harmonization remains an open challenge and particular care needs to be taken when using such data with advanced machine learning methods for predictive modelling.
Steyerberg EW, Wiegers E, Sewalt C, et al., 2019, Case-mix, care pathways, and outcomes in patients with traumatic brain injury in CENTER-TBI: a European prospective, multicentre, longitudinal, cohort study., Lancet Neurol, Vol: 18, Pages: 923-934
BACKGROUND: The burden of traumatic brain injury (TBI) poses a large public health and societal problem, but the characteristics of patients and their care pathways in Europe are poorly understood. We aimed to characterise patient case-mix, care pathways, and outcomes of TBI. METHODS: CENTER-TBI is a Europe-based, observational cohort study, consisting of a core study and a registry. Inclusion criteria for the core study were a clinical diagnosis of TBI, presentation fewer than 24 h after injury, and an indication for CT. Patients were differentiated by care pathway and assigned to the emergency room (ER) stratum (patients who were discharged from an emergency room), admission stratum (patients who were admitted to a hospital ward), or intensive care unit (ICU) stratum (patients who were admitted to the ICU). Neuroimages and biospecimens were stored in repositories and outcome was assessed at 6 months after injury. We used the IMPACT core model for estimating the expected mortality and proportion with unfavourable Glasgow Outcome Scale Extended (GOSE) outcomes in patients with moderate or severe TBI (Glasgow Coma Scale [GCS] score ≤12). The core study was registered with ClinicalTrials.gov, number NCT02210221, and with Resource Identification Portal (RRID: SCR_015582). FINDINGS: Data from 4509 patients from 18 countries, collected between Dec 9, 2014, and Dec 17, 2017, were analysed in the core study and from 22 782 patients in the registry. In the core study, 848 (19%) patients were in the ER stratum, 1523 (34%) in the admission stratum, and 2138 (47%) in the ICU stratum. In the ICU stratum, 720 (36%) patients had mild TBI (GCS score 13-15). Compared with the core cohort, the registry had a higher proportion of patients in the ER (9839 [43%]) and admission (8571 [38%]) strata, with more than 95% of patients classified as having mild TBI. Patients in the core study were older than those in previous studies (median age 50 years [IQR 30-66], 1254 [28%] aged >65
McCouat J, Glocker B, 2019, Vertebrae detection and localization in CT with two-stage CNNs and dense annotations, Computational Methods and Clinical Applications in Musculoskeletal Imaging (MSKI), Publisher: Springer Verlag, ISSN: 0302-9743
We propose a new, two-stage approach to the vertebrae centroid detection and localization problem. The first stage detects where the vertebrae appear in the scan using 3D samples, the second identifies the specific vertebrae within that region-of-interest using 2D slices. Our solution utilizes new techniques to improve the accuracy of the algorithm such as a revised approach to dense labelling from sparse centroid annotations and usage of large anisotropic kernels in the base level of a U-net architecture to maximize the receptive field. Our method improves the state-of-the-art's mean localization accuracy by 0.87 mm on a publicly available spine CT benchmark.
Pawlowski N, Bhooshan S, Ballas N, et al., 2019, Needles in haystacks: On classifying tiny objects in large images, Publisher: arXiv
In some computer vision domains, such as medical or hyperspectral imaging, we care about the classification of tiny objects in large images. However, most Convolutional Neural Networks (CNNs) for image classification were developed and analyzed using biased datasets that contain large objects, most often, in central image positions. To assess whether classical CNN architectures work well for tiny object classification we build a comprehensive testbed containing two datasets: one derived from MNIST digits and the other from histopathology images. This testbed allows us to perform controlled experiments to stress-test CNN architectures using a broad spectrum of signal-to-noise ratios. Our observations suggest that: (1) there exists a limit to signal-to-noise below which CNNs fail to generalize and that this limit is affected by dataset size - more data leading to better performances; however, the amount of training data required for the model to generalize scales rapidly with the inverse of the object-to-image ratio; (2) in general, higher capacity models exhibit better generalization; (3) when knowing the approximate object sizes, adapting receptive field is beneficial; and (4) for very small signal-to-noise ratio the choice of global pooling operation affects optimization, whereas for relatively large signal-to-noise values, all tested global pooling operations exhibit similar performance.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.