Jones C, Castro DC, De Sousa Ribeiro F, et al., 2024, A causal perspective on dataset bias in machine learning for medical imaging, Nature Machine Intelligence
Maier-Hein L, Reinke A, Godau P, et al., 2024, Metrics reloaded: recommendations for image analysis validation, Nature Methods, Vol: 21, Pages: 195-212, ISSN: 1548-7091
Reinke A, Tizabi MD, Baumgartner M, et al., 2024, Understanding metric-related pitfalls in image analysis validation, Nature Methods, Vol: 21, Pages: 182-194, ISSN: 1548-7091
Kori A, Locatello F, De Sousa Ribeiro F, et al., 2024, Grounded Object-Centric Learning, International Conference on Learning Representations (ICLR)
Santhirasekaram A, Winkler M, Rockall A, et al., 2024, Hierarchical Compositionality in Hyperbolic Space for Robust Medical Image Segmentation, Pages: 52-62, ISSN: 0302-9743
Deep learning based medical image segmentation models need to be robust to domain shifts and image distortion for the safe translation of these models into clinical practice. The most popular methods for improving robustness are centred around data augmentation and adversarial training. Many image segmentation tasks exhibit regular structures with only limited variability. We aim to exploit this notion by learning a set of base components in the latent space whose composition can account for the entire structural variability of a specific segmentation task. We enforce a hierarchical prior in the composition of the base components and consider the natural geometry in which to build our hierarchy. Specifically, we embed the base components on a hyperbolic manifold which we claim leads to a more natural composition. We demonstrate that our method improves model robustness under various perturbations and in the task of single domain generalisation.
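To illustrate the hyperbolic geometry this entry builds on (a hedged sketch, not the authors' implementation; the component vectors below are invented for illustration), distances on the Poincaré ball grow rapidly towards the boundary, which is what makes hyperbolic space a natural home for hierarchical compositions of base components:

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance on the Poincare ball model of hyperbolic space:
    d(u, v) = arccosh(1 + 2*||u - v||^2 / ((1 - ||u||^2) * (1 - ||v||^2)))."""
    sq_dist = np.sum((u - v) ** 2)
    alpha = 1.0 - np.sum(u ** 2)
    beta = 1.0 - np.sum(v ** 2)
    return float(np.arccosh(1.0 + 2.0 * sq_dist / (alpha * beta + eps)))

# A coarse (parent) component sits near the origin; a fine-grained (child)
# component sits near the boundary, where hyperbolic distances blow up.
coarse = np.array([0.0, 0.0])
fine = np.array([0.0, 0.9])
```

Because distance diverges at the boundary, a tree with exponentially many leaves embeds with low distortion, which Euclidean space cannot offer.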
Åkerlund CAI, Holst A, Bhattacharyay S, et al., 2024, Clinical descriptors of disease trajectories in patients with traumatic brain injury in the intensive care unit (CENTER-TBI): a multicentre observational cohort study., Lancet Neurol, Vol: 23, Pages: 71-80
BACKGROUND: Patients with traumatic brain injury are a heterogeneous population, and the most severely injured individuals are often treated in an intensive care unit (ICU). The primary injury at impact, and the harmful secondary events that can occur during the first week of the ICU stay, will affect outcome in this vulnerable group of patients. We aimed to identify clinical variables that might distinguish disease trajectories among patients with traumatic brain injury admitted to the ICU. METHODS: We used data from the Collaborative European NeuroTrauma Effectiveness Research in Traumatic Brain Injury (CENTER-TBI) prospective observational cohort study. We included patients aged 18 years or older with traumatic brain injury who were admitted to the ICU at one of the 65 CENTER-TBI participating centres, which range from large academic hospitals to small rural hospitals. For every patient, we obtained pre-injury data and injury features, clinical characteristics on admission, demographics, physiological parameters, laboratory features, brain biomarkers (ubiquitin carboxy-terminal hydrolase L1 [UCH-L1], S100 calcium-binding protein B [S100B], tau, neurofilament light [NFL], glial fibrillary acidic protein [GFAP], and neuron-specific enolase [NSE]), and information about intracranial pressure lowering treatments during the first 7 days of ICU stay. To identify clinical variables that might distinguish disease trajectories, we applied a novel clustering method to these data, which was based on a mixture of probabilistic graph models with a Markov chain extension. The relation of clusters to the extended Glasgow Outcome Scale (GOS-E) was investigated. FINDINGS: Between Dec 19, 2014, and Dec 17, 2017, 4509 patients with traumatic brain injury were recruited into the CENTER-TBI core dataset, of whom 1728 were eligible for this analysis. Glucose variation (defined as the difference between daily maximum and minimum glucose concentrations) and brain biomarkers (S100B, NSE
Ribeiro FDS, Glocker B, 2024, Demystifying Variational Diffusion Models., CoRR, Vol: abs/2401.06281
Rockall AG, Li X, Johnson N, et al., 2023, Development and evaluation of machine learning in whole-body magnetic resonance imaging for detecting metastases in patients with lung or colon cancer: a diagnostic test accuracy study, Investigative Radiology, Vol: 58, Pages: 823-831, ISSN: 0020-9996
OBJECTIVES: Whole-body magnetic resonance imaging (WB-MRI) has been demonstrated to be efficient and cost-effective for cancer staging. The study aim was to develop a machine learning (ML) algorithm to improve radiologists' sensitivity and specificity for metastasis detection and reduce reading times. MATERIALS AND METHODS: A retrospective analysis of 438 prospectively collected WB-MRI scans from multicenter Streamline studies (February 2013-September 2016) was undertaken. Disease sites were manually labeled using the Streamline reference standard. Whole-body MRI scans were randomly allocated to training and testing sets. A model for malignant lesion detection was developed based on convolutional neural networks and a 2-stage training strategy. The final algorithm generated lesion probability heat maps. Using a concurrent reader paradigm, 25 radiologists (18 experienced, 7 inexperienced in WB-/MRI) were randomly allocated WB-MRI scans with or without ML support to detect malignant lesions over 2 or 3 reading rounds. Reads were undertaken in the setting of a diagnostic radiology reading room between November 2019 and March 2020. Reading times were recorded by a scribe. Prespecified analysis included sensitivity, specificity, interobserver agreement, and reading time of radiology readers to detect metastases with or without ML support. Reader performance for detection of the primary tumor was also evaluated. RESULTS: Four hundred thirty-three evaluable WB-MRI scans were allocated to algorithm training (245) or radiology testing (50 patients with metastases, from primary colon [n = 117] or lung [n = 71] cancer). Among a total of 562 reads by experienced radiologists over 2 reading rounds, per-patient specificity was 86.2% (ML) and 87.7% (non-ML) (-1.5% difference; 95% confidence interval [CI], -6.4%, 3.5%; P = 0.39). Sensitivity was 66.0% (ML) and 70.0% (non-ML) (-4.0% difference; 95% CI, -13.5%, 5.5%; P = 0.344). Among 161 reads by inexperienced readers, per-patient spec
Ng AY, Oberije CJG, Ambrózay É, et al., 2023, Prospective implementation of AI-assisted screen reading to improve early detection of breast cancer., Nat Med, Vol: 29, Pages: 3044-3049
Artificial intelligence (AI) has the potential to improve breast cancer screening; however, prospective evidence of the safe implementation of AI into real clinical practice is limited. A commercially available AI system was implemented as an additional reader to standard double reading to flag cases for further arbitration review among screened women. Performance was assessed prospectively in three phases: a single-center pilot rollout, a wider multicenter pilot rollout and a full live rollout. The results showed that, compared to double reading, implementing the AI-assisted additional-reader process could achieve 0.7-1.6 additional cancers detected per 1,000 cases, with 0.16-0.30% additional recalls, 0-0.23% unnecessary recalls and a 0.1-1.9% increase in positive predictive value (PPV) after 7-11% additional human reads of AI-flagged cases (equating to 4-6% additional overall reading workload). The majority of cancerous cases detected by the AI-assisted additional-reader process were invasive (83.3%) and small-sized (≤10 mm, 47.0%). This evaluation suggests that using AI as an additional reader can improve the early detection of breast cancer with relevant prognostic features, with minimal to no unnecessary recalls. Although the AI-assisted additional-reader workflow requires additional reads, the higher PPV suggests that it can increase screening effectiveness.
Glocker B, Jones C, Bernhardt M, et al., 2023, Risk of bias in chest radiography deep learning foundation models, Radiology: Artificial Intelligence, Vol: 5, ISSN: 2638-6100
Purpose: To analyze a recently published chest radiography foundation model for the presence of biases that could lead to subgroup performance disparities across biologic sex and race. Materials and Methods: This Health Insurance Portability and Accountability Act–compliant retrospective study used 127 118 chest radiographs from 42 884 patients (mean age, 63 years ± 17 [SD]; 23 623 male, 19 261 female) from the CheXpert dataset that were collected between October 2002 and July 2017. To determine the presence of bias in features generated by a chest radiography foundation model and baseline deep learning model, dimensionality reduction methods together with two-sample Kolmogorov–Smirnov tests were used to detect distribution shifts across sex and race. A comprehensive disease detection performance analysis was then performed to associate any biases in the features to specific disparities in classification performance across patient subgroups. Results: Ten of 12 pairwise comparisons across biologic sex and race showed statistically significant differences in the studied foundation model, compared with four significant tests in the baseline model. Significant differences were found between male and female (P < .001) and Asian and Black (P < .001) patients in the feature projections that primarily capture disease. Compared with average model performance across all subgroups, classification performance on the “no finding” label decreased between 6.8% and 7.8% for female patients, and performance in detecting “pleural effusion” decreased between 10.7% and 11.6% for Black patients. Conclusion: The studied chest radiography foundation model demonstrated racial and sex-related bias, which led to disparate performance across patient subgroups; thus, this model may be unsafe for clinical applications.
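The two-sample Kolmogorov–Smirnov test used here to detect subgroup distribution shifts is straightforward to sketch. Below is a hedged, self-contained illustration (the feature projections and subgroup labels are simulated, not the CheXpert data): the statistic is the maximum gap between the empirical CDFs of two samples, and a large value flags that the representation differs between subgroups.

```python
import numpy as np

def ks_two_sample(a, b):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute gap
    between the empirical CDFs of the two samples."""
    a, b = np.sort(a), np.sort(b)
    grid = np.concatenate([a, b])
    cdf_a = np.searchsorted(a, grid, side="right") / len(a)
    cdf_b = np.searchsorted(b, grid, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
# Hypothetical 1-D feature projections (e.g. one PCA component of the
# foundation-model features) for three simulated patient subgroups.
group_a = rng.normal(0.0, 1.0, size=500)
group_b = rng.normal(0.5, 1.0, size=500)   # distribution-shifted subgroup
group_c = rng.normal(0.0, 1.0, size=500)   # same distribution as group_a
```

In practice one would use `scipy.stats.ks_2samp` to obtain a p-value as well; the hand-rolled statistic above only conveys the idea.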
Li Z, Kamnitsas K, Dou Q, et al., 2023, Joint optimization of class-specific training- and test-time data augmentation in segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 3323-3335, ISSN: 0278-0062
This paper presents an effective and general data augmentation framework for medical image segmentation. We adopt a computationally efficient and data-efficient gradient-based meta-learning scheme to explicitly align the distribution of training and validation data which is used as a proxy for unseen test data. We improve the current data augmentation strategies with two core designs. First, we learn class-specific training-time data augmentation (TRA) effectively increasing the heterogeneity within the training subsets and tackling the class imbalance common in segmentation. Second, we jointly optimize TRA and test-time data augmentation (TEA), which are closely connected as both aim to align the training and test data distribution but were so far considered separately in previous works. We demonstrate the effectiveness of our method on four medical image segmentation tasks across different scenarios with two state-of-the-art segmentation models, DeepMedic and nnU-Net. Extensive experimentation shows that the proposed data augmentation framework can significantly and consistently improve the segmentation performance when compared to existing solutions. Code is publicly available at https://github.com/ZerojumpLine/JCSAugment.
Roschewitz M, Khara G, Yearsley J, et al., 2023, Automatic correction of performance drift under acquisition shift in medical image classification., Nat Commun, Vol: 14
Image-based prediction models for disease detection are sensitive to changes in data acquisition such as the replacement of scanner hardware or updates to the image processing software. The resulting differences in image characteristics may lead to drifts in clinically relevant performance metrics which could cause harm in clinical decision making, even for models that generalise in terms of area under the receiver-operating characteristic curve. We propose Unsupervised Prediction Alignment, a generic automatic recalibration method that requires no ground truth annotations and only limited amounts of unlabelled example images from the shifted data distribution. We illustrate the effectiveness of the proposed method to detect and correct performance drift in mammography-based breast cancer screening and on publicly available histopathology data. We show that the proposed method can preserve the expected performance in terms of sensitivity/specificity under various realistic scenarios of image acquisition shift, thus offering an important safeguard for clinical deployment.
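The core idea of aligning prediction distributions without labels can be sketched with simple quantile mapping (a hedged illustration only; the published Unsupervised Prediction Alignment method is more elaborate, and all scores below are simulated): each score from the shifted distribution is mapped to the reference score at the same empirical quantile.

```python
import numpy as np

def align_scores(shifted_scores, reference_scores):
    """Map each score from the shifted distribution to the reference score
    at the same empirical quantile. Requires no ground-truth labels, only
    unlabelled example scores from both distributions."""
    order = np.searchsorted(np.sort(shifted_scores), shifted_scores, side="right")
    quantiles = order / len(shifted_scores)
    return np.quantile(reference_scores, quantiles)

rng = np.random.default_rng(1)
reference = rng.uniform(0.0, 1.0, size=2000)      # scores on original scanner
shifted = 0.5 * rng.uniform(0.0, 1.0, size=2000)  # compressed after acquisition shift
aligned = align_scores(shifted, reference)
# After alignment the score distribution matches the reference again, so an
# operating threshold tuned on the reference data keeps its sensitivity/specificity.
```

The mapping is monotone, so it preserves case ranking (and hence AUC) while restoring the calibration of any fixed decision threshold.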
Islam M, Seenivasan L, Sharan SP, et al., 2023, Paced-curriculum distillation with prediction and label uncertainty for image segmentation, International Journal of Computer Assisted Radiology and Surgery, Vol: 18, Pages: 1875-1883, ISSN: 1861-6410
PURPOSE: In curriculum learning, the idea is to train on easier samples first and gradually increase the difficulty, while in self-paced learning, a pacing function defines the speed to adapt the training progress. While both methods heavily rely on the ability to score the difficulty of data samples, an optimal scoring function is still under exploration. METHODOLOGY: Distillation is a knowledge transfer approach where a teacher network guides a student network by feeding a sequence of random samples. We argue that guiding student networks with an efficient curriculum strategy can improve model generalization and robustness. For this purpose, we design an uncertainty-based paced curriculum learning in self-distillation for medical image segmentation. We fuse the prediction uncertainty and annotation boundary uncertainty to develop a novel paced-curriculum distillation (P-CD). We utilize the teacher model to obtain prediction uncertainty and spatially varying label smoothing with Gaussian kernel to generate segmentation boundary uncertainty from the annotation. We also investigate the robustness of our method by applying various types and severity of image perturbation and corruption. RESULTS: The proposed technique is validated on two medical datasets of breast ultrasound image segmentation and robot-assisted surgical scene segmentation and achieved significantly better performance in terms of segmentation and robustness. CONCLUSION: P-CD improves the performance and obtains better generalization and robustness over the dataset shift. While curriculum learning requires extensive tuning of hyper-parameters for the pacing function, the level of performance improvement outweighs this limitation.
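The "spatially varying label smoothing with Gaussian kernel" can be made concrete with a minimal 1-D sketch (the paper works with 2-D masks and additionally fuses prediction uncertainty; the mask below is invented): convolving a binary annotation with a Gaussian leaves interior labels confident and assigns intermediate, uncertain values near the annotation boundary.

```python
import numpy as np

def soften_boundary(mask, sigma=1.5, radius=4):
    """Spatially varying label smoothing for a binary 1-D mask: convolve
    with a normalised Gaussian kernel so that labels stay close to 0/1 in
    the interior but take intermediate values near the boundary."""
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-0.5 * (x / sigma) ** 2)
    kernel /= kernel.sum()
    return np.convolve(mask.astype(float), kernel, mode="same")

mask = np.array([0, 0, 0, 1, 1, 1, 1, 1, 0, 0, 0])
soft = soften_boundary(mask)
# soft[5] (interior) stays near 1; soft[2] (just outside the boundary)
# takes an intermediate value encoding annotation uncertainty.
```

A student trained against `soft` instead of `mask` is penalised less for disagreeing exactly where human annotations are least reliable.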
Ribeiro FDS, Xia T, Monteiro M, et al., 2023, High fidelity image counterfactuals with probabilistic causal models, ICML 2023, Publisher: ML Research Press, Pages: 7390-7425
We present a general causal generative modelling framework for accurate estimation of high fidelity image counterfactuals with deep structural causal models. Estimation of interventional and counterfactual queries for high-dimensional structured variables, such as images, remains a challenging task. We leverage ideas from causal mediation analysis and advances in generative modelling to design new deep causal mechanisms for structured variables in causal models. Our experiments demonstrate that our proposed mechanisms are capable of accurate abduction and estimation of direct, indirect and total effects as measured by axiomatic soundness of counterfactuals.
Pinto MS, Winzeck S, Kornaropoulos EN, et al., 2023, Use of Support Vector Machines Approach via ComBat Harmonized Diffusion Tensor Imaging for the Diagnosis and Prognosis of Mild Traumatic Brain Injury: A CENTER-TBI Study, Journal of Neurotrauma, Vol: 40, Pages: 1317-1338, ISSN: 0897-7151
Li Z, Kamnitsas K, Ouyang C, et al., 2023, Context label learning: improving background class representations in semantic segmentation, IEEE Transactions on Medical Imaging, Vol: 42, Pages: 1885-1896, ISSN: 0278-0062
Background samples provide key contextual information for segmenting regions of interest (ROIs). However, they always cover a diverse set of structures, causing difficulties for the segmentation model to learn good decision boundaries with high sensitivity and precision. The issue concerns the highly heterogeneous nature of the background class, resulting in multi-modal distributions. Empirically, we find that neural networks trained with heterogeneous background struggle to map the corresponding contextual samples to compact clusters in feature space. As a result, the distribution over background logit activations may shift across the decision boundary, leading to systematic over-segmentation across different datasets and tasks. In this study, we propose context label learning (CoLab) to improve the context representations by decomposing the background class into several subclasses. Specifically, we train an auxiliary network as a task generator, along with the primary segmentation model, to automatically generate context labels that positively affect the ROI segmentation accuracy. Extensive experiments are conducted on several challenging segmentation tasks and datasets. The results demonstrate that CoLab can guide the segmentation model to map the logits of background samples away from the decision boundary, resulting in significantly improved segmentation accuracy. Code is available.
Mackay K, Bernstein D, Glocker B, et al., 2023, A review of the metrics used to assess auto-contouring systems in radiotherapy, Clinical Oncology, Vol: 35, Pages: 354-369, ISSN: 0936-6555
Auto-contouring could revolutionise future planning of radiotherapy treatment. The lack of consensus on how to assess and validate auto-contouring systems currently limits clinical use. This review formally quantifies the assessment metrics used in studies published during one calendar year and assesses the need for standardised practice. A PubMed literature search was undertaken for papers evaluating radiotherapy auto-contouring published during 2021. Papers were assessed for types of metric and the methodology used to generate ground-truth comparators. Our PubMed search identified 212 studies, of which 117 met the criteria for clinical review. Geometric assessment metrics were used in 116 of 117 studies (99.1%). This includes the Dice Similarity Coefficient used in 113 (96.6%) studies. Clinically relevant metrics, such as qualitative, dosimetric and time-saving metrics, were less frequently used in 22 (18.8%), 27 (23.1%) and 18 (15.4%) of 117 studies, respectively. There was heterogeneity within each category of metric. Over 90 different names for geometric measures were used. Methods for qualitative assessment were different in all but two papers. Variation existed in the methods used to generate radiotherapy plans for dosimetric assessment. Consideration of editing time was only given in 11 (9.4%) papers. A single manual contour as a ground-truth comparator was used in 65 (55.6%) studies. Only 31 (26.5%) studies compared auto-contours to usual inter- and/or intra-observer variation. In conclusion, significant variation exists in how research papers currently assess the accuracy of automatically generated contours. Geometric measures are the most popular, however their clinical utility is unknown. There is heterogeneity in the methods used to perform clinical assessment. Considering the different stages of system implementation may provide a framework to decide the most appropriate metrics. This analysis supports the need for a consensus on the clinical implement
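Since the Dice Similarity Coefficient dominates the reviewed literature (113 of 117 studies), a brief worked definition helps fix terms. This is a generic sketch with invented masks, not code from any reviewed system: Dice is twice the overlap divided by the total size of the two masks, giving 1 for identical contours and 0 for disjoint ones.

```python
import numpy as np

def dice_coefficient(pred, truth, eps=1e-9):
    """Dice similarity coefficient for binary masks: 2|A ∩ B| / (|A| + |B|).
    The small eps avoids division by zero when both masks are empty."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

auto_contour = np.array([1, 1, 1, 0, 0])    # hypothetical auto-contour
manual_contour = np.array([0, 1, 1, 1, 0])  # hypothetical ground truth
```

Note how purely geometric the score is: two contours with identical Dice can differ greatly in dosimetric impact, which is precisely the review's concern about clinical utility.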
Xu M, Islam M, Glocker B, et al., 2023, Confidence-Aware Paced-Curriculum Learning by Label Smoothing for Surgical Scene Understanding, IEEE Transactions on Automation Science and Engineering, ISSN: 1545-5955
Li L, Heselgrave A, Soreq E, et al., 2023, Investigating the characteristics and correlates of systemic inflammation after traumatic brain injury: the TBI-BraINFLAMM study, BMJ Open, Vol: 13, ISSN: 2044-6055
Introduction: A significant environmental risk factor for neurodegenerative disease is traumatic brain injury (TBI). However, it is not clear how TBI results in ongoing chronic neurodegeneration. Animal studies show that systemic inflammation is signalled to the brain. This can result in sustained and aggressive microglial activation, which in turn is associated with widespread neurodegeneration. We aim to evaluate systemic inflammation as a mediator of ongoing neurodegeneration after TBI. Methods and analysis: TBI-BraINFLAMM will combine data already collected from two large prospective TBI studies. The CREACTIVE study, a broad consortium which enrolled >8000 patients with TBI to have CT scans and blood samples in the hyperacute period, has data available from 854 patients. The BIO-AX-TBI study recruited 311 patients to have acute CT scans, longitudinal blood samples and longitudinal MRI brain scans. The BIO-AX-TBI study also has data from 102 healthy and 24 non-TBI trauma controls, comprising blood samples (both control groups) and MRI scans (healthy controls only). All blood samples from BIO-AX-TBI and CREACTIVE have already been tested for neuronal injury markers (GFAP, tau and NfL), and CREACTIVE blood samples have been tested for inflammatory cytokines. We will additionally test inflammatory cytokine levels from the already collected longitudinal blood samples in the BIO-AX-TBI study, as well as matched microdialysate and blood samples taken during the acute period from a subgroup of patients with TBI (n=18). We will use this unique dataset to characterise post-TBI systemic inflammation, and its relationships with injury severity and ongoing neurodegeneration. Ethics and dissemination: Ethical approval for this study has been granted by the London - Camberwell St Giles Research Ethics Committee (17/LO/2066). Results will be submitted for publication in peer-reviewed journals, presented at conferences and inform the design of larger observational and experime
Sharma N, Ng AY, James JJ, et al., 2023, Multi-vendor evaluation of artificial intelligence as an independent reader for double reading in breast cancer screening on 275,900 mammograms, BMC Cancer, Vol: 23
Ng AY, Glocker B, Oberije C, et al., 2023, Artificial intelligence as supporting reader in breast screening: a novel workflow to preserve quality and reduce workload, Journal of Breast Imaging, Vol: 5, Pages: 267-276, ISSN: 2631-6110
Objective: To evaluate the effectiveness of a new strategy for using artificial intelligence (AI) as supporting reader for the detection of breast cancer in mammography-based double reading screening practice. Methods: Large-scale multi-site, multi-vendor data were used to retrospectively evaluate a new paradigm of AI-supported reading. Here, the AI served as the second reader only if it agreed with the recall/no-recall decision of the first human reader. Otherwise, a second human reader made an assessment, followed by the standard clinical workflow. The data included 280 594 cases from 180 542 female participants screened for breast cancer at seven screening sites in two countries and using equipment from four hardware vendors. The statistical analysis included non-inferiority and superiority testing of cancer screening performance and evaluation of the reduction in workload, measured as arbitration rate and number of cases requiring second human reading. Results: Artificial intelligence as a supporting reader was found to be superior or non-inferior on all screening metrics compared with human double reading while reducing the number of cases requiring second human reading by up to 87% (245 395/280 594). Compared with AI as an independent reader, the number of cases referred to arbitration was reduced from 13% (35 199/280 594) to 2% (5056/280 594). Conclusion: The simulation indicates that the proposed workflow retains the screening performance of human double reading while substantially reducing the workload. Further research should study the impact on the second human reader, because they would only assess cases in which the AI prediction and first human reader disagree.
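The decision flow of this supporting-reader paradigm reduces to a few branches. A hedged, simplified reconstruction follows (function and argument names are invented; the real workflow includes further clinical steps after arbitration): the AI stands in as second reader only when it agrees with the first human; on disagreement a second human reads, and any remaining human disagreement goes to arbitration.

```python
def supporting_reader_decision(first_reader_recall, ai_recall, second_reader):
    """Simplified AI-as-supporting-reader flow. `second_reader` is a callable
    returning the second human's recall decision; it is only invoked when the
    AI disagrees with the first human reader."""
    if ai_recall == first_reader_recall:
        # AI confirms the first reader: no second human read needed.
        return {"decision": first_reader_recall,
                "second_human_read": False, "arbitration": False}
    second_recall = second_reader()
    if second_recall == first_reader_recall:
        return {"decision": first_reader_recall,
                "second_human_read": True, "arbitration": False}
    # The two human readers disagree: refer to arbitration.
    return {"decision": None, "second_human_read": True, "arbitration": True}
```

The workload saving in the abstract comes from the first branch: whenever the AI agrees with the first reader, the second human read is skipped entirely.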
Santhirasekaram A, Kori A, Winkler M, et al., 2023, Robust Hierarchical Symbolic Explanations in Hyperbolic Space for Image Classification, Computer Vision and Pattern Recognition
Glocker B, Jones C, Bernhardt M, et al., 2023, Algorithmic encoding of protected characteristics in chest X-ray disease detection models, EBioMedicine, Vol: 89, Pages: 1-19, ISSN: 2352-3964
Background: It has been rightfully emphasized that the use of AI for clinical decision making could amplify health disparities. An algorithm may encode protected characteristics, and then use this information for making predictions due to undesirable correlations in the (historical) training data. It remains unclear how we can establish whether such information is actually used. Besides the scarcity of data from underserved populations, very little is known about how dataset biases manifest in predictive models and how this may result in disparate performance. This article aims to shed some light on these issues by exploring methodology for subgroup analysis in image-based disease detection models. Methods: We utilize two publicly available chest X-ray datasets, CheXpert and MIMIC-CXR, to study performance disparities across race and biological sex in deep learning models. We explore test set resampling, transfer learning, multitask learning, and model inspection to assess the relationship between the encoding of protected characteristics and disease detection performance across subgroups. Findings: We confirm subgroup disparities in terms of shifted true and false positive rates which are partially removed after correcting for population and prevalence shifts in the test sets. We find that transfer learning alone is insufficient for establishing whether specific patient information is used for making predictions. The proposed combination of test-set resampling, multitask learning, and model inspection reveals valuable insights about the way protected characteristics are encoded in the feature representations of deep neural networks. Interpretation: Subgroup analysis is key for identifying performance disparities of AI models, but statistical differences across subgroups need to be taken into account when analyzing potential biases in disease detection. The proposed methodology provides a comprehensive framework for subgroup analysis enabling further research into the underlyi
Menten MJ, Holland R, Leingang O, et al., 2023, Exploring healthy retinal aging with deep learning, Ophthalmology Science, Vol: 3, Pages: 1-10, ISSN: 2666-9145
Purpose: To study the individual course of retinal changes caused by healthy aging using deep learning. Design: Retrospective analysis of a large data set of retinal OCT images. Participants: A total of 85 709 adults between the ages of 40 and 75 years whose OCT images were acquired in the scope of the UK Biobank population study. Methods: We created a counterfactual generative adversarial network (GAN), a type of neural network that learns from cross-sectional, retrospective data. It then synthesizes high-resolution counterfactual OCT images and longitudinal time series. These counterfactuals allow visualization and analysis of hypothetical scenarios in which certain characteristics of the imaged subject, such as age or sex, are altered, whereas other attributes, crucially the subject’s identity and image acquisition settings, remain fixed. Main Outcome Measures: Using our counterfactual GAN, we investigated subject-specific changes in the retinal layer structure as a function of age and sex. In particular, we measured changes in the retinal nerve fiber layer (RNFL), combined ganglion cell layer plus inner plexiform layer (GCIPL), inner nuclear layer to the inner boundary of the retinal pigment epithelium (INL-RPE), and retinal pigment epithelium (RPE). Results: Our counterfactual GAN is able to smoothly visualize the individual course of retinal aging. Across all counterfactual images, the RNFL, GCIPL, INL-RPE, and RPE changed by −0.1 μm ± 0.1 μm, −0.5 μm ± 0.2 μm, −0.2 μm ± 0.1 μm, and 0.1 μm ± 0.1 μm, respectively, per decade of age. These results agree well with previous studies based on the same cohort from the UK Biobank population study. Beyond population-wide average measures, our counterfactual GAN allows us to explore whether the retinal layers of a given eye will increase in thickness, decrease in thickness, or stagnate as a subject ages. Conclusion: This study demonstrates how counterfactual GANs
Monteiro M, De Sousa Ribeiro F, Pawlowski N, et al., 2023, Measuring axiomatic soundness of counterfactual image models, International Conference on Learning Representations (ICLR)
We use the axiomatic definition of counterfactuals to derive metrics that enable quantifying the correctness of approximate counterfactual inference models.

Abstract: We present a general framework for evaluating image counterfactuals. The power and flexibility of deep generative models make them valuable tools for learning mechanisms in structural causal models. However, their flexibility makes counterfactual identifiability impossible in the general case. Motivated by these issues, we revisit Pearl's axiomatic definition of counterfactuals to determine the necessary constraints of any counterfactual inference model: composition, reversibility, and effectiveness. We frame counterfactuals as functions of an input variable, its parents, and counterfactual parents and use the axiomatic constraints to restrict the set of functions that could represent the counterfactual, thus deriving distance metrics between the approximate and ideal functions. We demonstrate how these metrics can be used to compare and choose between different approximate counterfactual inference models and to provide insight into a model's shortcomings and trade-offs.
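The composition axiom translates directly into a distance-based metric: applying a *null* counterfactual (counterfactual parents equal to the observed parents) should return the observation unchanged, and any drift measures the violation. A hedged toy sketch with an invented additive-noise mechanism (not the paper's image models) makes this concrete:

```python
import numpy as np

def composition_error(counterfactual_fn, x, parents, steps=3):
    """Composition metric: apply a null counterfactual `steps` times and
    return the maximum deviation from the original observation x."""
    x_hat = np.array(x, dtype=float)
    for _ in range(steps):
        x_hat = counterfactual_fn(x_hat, parents, parents)
    return float(np.max(np.abs(x_hat - x)))

def ideal_mechanism(x, pa, pa_cf):
    noise = x - pa          # abduction: recover the exogenous noise exactly
    return pa_cf + noise    # action + prediction under counterfactual parents

def lossy_mechanism(x, pa, pa_cf):
    return pa_cf + 0.9 * (x - pa)   # imperfect abduction loses information

x_obs = np.array([1.0, 2.0])
parents = np.array([0.5, 0.5])
```

The ideal mechanism has zero composition error, while the lossy one drifts further with every null intervention, which is exactly the kind of shortcoming the proposed metrics expose.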
Pati S, Baid U, Edwards B, et al., 2023, Author Correction: Federated learning enables big data for rare cancer boundary detection., Nature Communications, Vol: 14, Pages: 436-436, ISSN: 2041-1723
Batten J, Sinclair M, Glocker B, et al., 2023, Image To Tree with Recursive Prompting
Extracting complex structures from grid-based data is a common key step in automated medical image analysis. The conventional solution to recovering tree-structured geometries typically involves computing the minimal cost path through intermediate representations derived from segmentation masks. However, this methodology has significant limitations in the context of projective imaging of tree-structured 3D anatomical data such as coronary arteries, since there are often overlapping branches in the 2D projection. In this work, we propose a novel approach to predicting tree connectivity structure which reformulates the task as an optimization problem over individual steps of a recursive process. We design and train a two-stage model which leverages the UNet and Transformer architectures and introduces an image-based prompting technique. Our proposed method achieves compelling results on a pair of synthetic datasets, and outperforms a shortest-path baseline.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.