Imperial College London

Dr Wenjia Bai

Faculty of Medicine, Department of Brain Sciences

Senior Lecturer

Contact

 

+44 (0)20 7594 8291 | w.bai | Website


Location

 

Room 212, Data Science Institute, William Penney Laboratory, South Kensington Campus



Publications

Citation

BibTeX format

@article{Ouyang:2022:10.1109/tmi.2022.3224067,
author = {Ouyang, C and Chen, C and Li, S and Li, Z and Qin, C and Bai, W and Rueckert, D},
doi = {10.1109/tmi.2022.3224067},
journal = {IEEE Transactions on Medical Imaging},
pages = {1095--1106},
title = {Causality-inspired single-source domain generalization for medical image segmentation},
url = {http://dx.doi.org/10.1109/tmi.2022.3224067},
volume = {42},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Deep learning models usually suffer from the domain shift issue, where models trained on one source domain do not generalize well to other unseen domains. In this work, we investigate the single-source domain generalization problem: training a deep network that is robust to unseen domains, under the condition that training data are only available from one source domain, which is common in medical imaging applications. We tackle this problem in the context of cross-domain medical image segmentation. In this scenario, domain shifts are mainly caused by different acquisition processes. We propose a simple causality-inspired data augmentation approach to expose a segmentation model to synthesized domain-shifted training examples. Specifically, 1) to make the deep model robust to discrepancies in image intensities and textures, we employ a family of randomly-weighted shallow networks. They augment training images using diverse appearance transformations. 2) Further we show that spurious correlations among objects in an image are detrimental to domain robustness. These correlations might be taken by the network as domain-specific clues for making predictions, and they may break on unseen domains. We remove these spurious correlations via causal intervention. This is achieved by resampling the appearances of potentially correlated objects independently. The proposed approach is validated on three cross-domain segmentation scenarios: cross-modality (CT-MRI) abdominal image segmentation, cross-sequence (bSSFP-LGE) cardiac MRI segmentation, and cross-site prostate MRI segmentation. The proposed approach yields consistent performance gains compared with competitive methods when tested on unseen domains.
AU - Ouyang,C
AU - Chen,C
AU - Li,S
AU - Li,Z
AU - Qin,C
AU - Bai,W
AU - Rueckert,D
DO - 10.1109/tmi.2022.3224067
EP - 1106
PY - 2022///
SN - 0278-0062
SP - 1095
TI - Causality-inspired single-source domain generalization for medical image segmentation
T2 - IEEE Transactions on Medical Imaging
UR - http://dx.doi.org/10.1109/tmi.2022.3224067
UR - https://ieeexplore.ieee.org/document/9961940
UR - http://hdl.handle.net/10044/1/101099
VL - 42
ER -
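
The abstract above describes two augmentation ideas: (1) randomly-weighted shallow networks that resample an image's intensity and texture appearance, and (2) a causal intervention that removes spurious correlations between objects by resampling their appearances independently. Purely as an illustration of the first idea, below is a minimal PyTorch sketch of appearance augmentation with a freshly randomised shallow convolutional network. The class name, layer sizes and blending range are assumptions chosen for clarity, not the authors' released implementation.

import torch
import torch.nn as nn

class RandomShallowAugment(nn.Module):
    """Illustrative sketch (not the paper's code): a shallow conv network
    with freshly randomised weights maps an image to a new intensity/texture
    appearance while preserving spatial structure, so segmentation labels
    remain valid for the augmented image."""

    def __init__(self, channels=1, hidden=8, kernel_size=3, alpha_range=(0.5, 1.0)):
        super().__init__()
        pad = kernel_size // 2
        self.net = nn.Sequential(
            nn.Conv2d(channels, hidden, kernel_size, padding=pad),
            nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, hidden, kernel_size, padding=pad),
            nn.LeakyReLU(0.2),
            nn.Conv2d(hidden, channels, kernel_size, padding=pad),
        )
        self.alpha_range = alpha_range  # assumed blending range

    def reinitialise(self):
        # Re-draw the weights so every call synthesises a different appearance.
        for m in self.net.modules():
            if isinstance(m, nn.Conv2d):
                nn.init.kaiming_normal_(m.weight)
                nn.init.zeros_(m.bias)

    @torch.no_grad()
    def forward(self, x):
        self.reinitialise()
        y = self.net(x)
        # Rescale the output to roughly match the input's intensity statistics.
        y = (y - y.mean()) / (y.std() + 1e-6) * x.std() + x.mean()
        # Blend with the original image to control augmentation strength.
        alpha = torch.empty(1).uniform_(*self.alpha_range).item()
        return alpha * y + (1 - alpha) * x

# Usage: augment a hypothetical batch of single-channel MR slices before training.
augment = RandomShallowAugment(channels=1)
images = torch.rand(4, 1, 192, 192)
augmented = augment(images)  # same shape, new appearance

Because the convolution weights are re-drawn on every call, each forward pass produces a different synthetic appearance while leaving the spatial layout, and hence the segmentation labels, unchanged, which is the property the abstract relies on for single-source domain generalization.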