Imperial College London

Professor Daniel Rueckert

Faculty of Engineering, Department of Computing

Professor of Visual Information Processing

Contact

+44 (0)20 7594 8333 | d.rueckert | Website

Location

568 Huxley Building, South Kensington Campus


Publications

Citation

BibTeX format

@inproceedings{Chen:2022:10.1007/978-3-031-16443-9_15,
author = {Chen, C and Li, Z and Ouyang, C and Sinclair, M and Bai, W and Rueckert, D},
booktitle = {Medical Image Computing and Computer Assisted Intervention -- MICCAI 2022},
doi = {10.1007/978-3-031-16443-9_15},
pages = {151--161},
publisher = {Springer},
title = {MaxStyle: adversarial style composition for robust medical image segmentation},
url = {http://dx.doi.org/10.1007/978-3-031-16443-9_15},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Convolutional neural networks (CNNs) have achieved remarkable segmentation accuracy on benchmark datasets where training and test sets are from the same domain, yet their performance can degrade significantly on unseen domains, which hinders the deployment of CNNs in many clinical scenarios. Most existing works improve model out-of-domain (OOD) robustness by collecting multi-domain datasets for training, which is expensive and may not always be feasible due to privacy and logistical issues. In this work, we focus on improving model robustness using a single-domain dataset only. We propose a novel data augmentation framework called MaxStyle, which maximizes the effectiveness of style augmentation for model OOD performance. It attaches an auxiliary style-augmented image decoder to a segmentation network for robust feature learning and data augmentation. Importantly, MaxStyle augments data with improved image style diversity and hardness, by expanding the style space with noise and searching for the worst-case style composition of latent features via adversarial training. With extensive experiments on multiple public cardiac and prostate MR datasets, we demonstrate that MaxStyle leads to significantly improved out-of-distribution robustness against unseen corruptions as well as common distribution shifts across multiple, different, unseen sites and unknown image sequences under both low- and high-training data settings. The code can be found at https://github.com/cherise215/MaxStyle.
AU - Chen,C
AU - Li,Z
AU - Ouyang,C
AU - Sinclair,M
AU - Bai,W
AU - Rueckert,D
DO - 10.1007/978-3-031-16443-9_15
EP - 161
PB - Springer
PY - 2022///
SP - 151
TI - MaxStyle: adversarial style composition for robust medical image segmentation
UR - http://dx.doi.org/10.1007/978-3-031-16443-9_15
UR - http://arxiv.org/abs/2206.01737v1
UR - http://hdl.handle.net/10044/1/97660
ER -
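
The abstract describes the mechanism at a high level: latent features are re-styled by composing per-channel feature statistics, the style space is expanded with additive noise, and the worst-case composition is found via adversarial training. Below is a minimal PyTorch sketch of that idea, assuming a MixStyle-like statistics mix; the names (restyle, maxstyle_augment, seg_head) and all details are illustrative assumptions, not the authors' implementation (the released code is at https://github.com/cherise215/MaxStyle).

import torch
import torch.nn.functional as F

def restyle(feat, perm, lam, noise_mu, noise_sig, eps=1e-6):
    # Re-normalise (B, C, H, W) features with a composed style: a convex
    # mix of each sample's channel statistics with a paired sample's,
    # plus additive noise that expands the style space beyond the batch.
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sig = feat.std(dim=(2, 3), keepdim=True) + eps
    normed = (feat - mu) / sig
    mix_mu = lam * mu + (1 - lam) * mu[perm] + noise_mu
    mix_sig = lam * sig + (1 - lam) * sig[perm] + noise_sig
    return normed * mix_sig + mix_mu

def maxstyle_augment(feat, seg_head, target, steps=5, lr=0.1):
    # Search for a worst-case style by gradient ascent on the
    # segmentation loss with respect to the style parameters only.
    feat = feat.detach()                  # freeze the network features
    B, C = feat.size(0), feat.size(1)
    perm = torch.randperm(B)              # fixed random style pairing
    lam = torch.rand(B, 1, 1, 1, requires_grad=True)
    noise_mu = torch.zeros(B, C, 1, 1, requires_grad=True)
    noise_sig = torch.zeros(B, C, 1, 1, requires_grad=True)
    opt = torch.optim.SGD([lam, noise_mu, noise_sig], lr=lr)
    for _ in range(steps):
        styled = restyle(feat, perm, lam.clamp(0, 1), noise_mu, noise_sig)
        loss = F.cross_entropy(seg_head(styled), target)
        opt.zero_grad()
        (-loss).backward()                # ascend the loss: harder styles
        opt.step()
    with torch.no_grad():
        return restyle(feat, perm, lam.clamp(0, 1), noise_mu, noise_sig)

The hardened features returned here would be passed through the rest of the network as an extra, adversarially styled training pass alongside the clean one; in the paper, this re-styling happens inside an auxiliary style-augmented image decoder attached to the segmentation network.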