Imperial College London

Professor Andrea Rockall

Faculty of Medicine, Department of Surgery & Cancer

Clinical Chair in Radiology

Contact

a.rockall

Location

ICTEM building, Hammersmith Campus

Publications

Citation

BibTeX format

@inproceedings{Valindria:2018:10.1109/WACV.2018.00066,
author = {Valindria, V and Pawlowski, N and Rajchl, M and Lavdas, I and Aboagye, EO and Rockall, A and Rueckert, D and Glocker, B},
doi = {10.1109/WACV.2018.00066},
publisher = {IEEE},
title = {Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI},
url = {http://dx.doi.org/10.1109/WACV.2018.00066},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - Convolutional neural networks have been widely used in medical image segmentation. The amount of training data strongly determines the overall performance. Most approaches are applied for a single imaging modality, e.g., brain MRI. In practice, it is often difficult to acquire sufficient training data of a certain imaging modality. The same anatomical structures, however, may be visible in different modalities such as major organs on abdominal CT and MRI. In this work, we investigate the effectiveness of learning from multiple modalities to improve the segmentation accuracy on each individual modality. We study the feasibility of using a dual-stream encoder-decoder architecture to learn modality-independent, and thus, generalisable and robust features. All of our MRI and CT data are unpaired, which means they are obtained from different subjects and not registered to each other. Experiments show that multi-modal learning can improve overall accuracy over modality-specific training. Results demonstrate that information across modalities can in particular improve performance on varying structures such as the spleen.
AU - Valindria,V
AU - Pawlowski,N
AU - Rajchl,M
AU - Lavdas,I
AU - Aboagye,EO
AU - Rockall,A
AU - Rueckert,D
AU - Glocker,B
DO - 10.1109/WACV.2018.00066
PB - IEEE
PY - 2018///
TI - Multi-modal learning from unpaired images: Application to multi-organ segmentation in CT and MRI
UR - http://dx.doi.org/10.1109/WACV.2018.00066
UR - http://hdl.handle.net/10044/1/56452
ER -
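
The abstract above describes a dual-stream encoder-decoder trained on unpaired CT and MRI so that the network learns modality-independent features. As a rough illustration of that idea only, the sketch below gives each modality its own encoder stream while a shared decoder produces the segmentation, so the decoder must operate on features from either modality. PyTorch, the layer sizes, and the choice of which parts are modality-specific versus shared are assumptions here, not the paper's exact configuration.

# Minimal sketch (not the authors' implementation) of a dual-stream
# encoder-decoder for unpaired CT/MRI multi-organ segmentation.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with ReLU, the usual encoder/decoder building block.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
    )

class DualStreamSegNet(nn.Module):
    def __init__(self, num_classes, base_ch=16):
        super().__init__()
        # One encoder stream per modality; the CT and MRI data are unpaired
        # (different subjects, not registered to each other).
        self.encoders = nn.ModuleDict({
            "ct":  nn.Sequential(conv_block(1, base_ch), nn.MaxPool2d(2),
                                 conv_block(base_ch, 2 * base_ch)),
            "mri": nn.Sequential(conv_block(1, base_ch), nn.MaxPool2d(2),
                                 conv_block(base_ch, 2 * base_ch)),
        })
        # Shared decoder: it must segment from features of either stream,
        # which encourages modality-independent representations.
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            conv_block(2 * base_ch, base_ch),
            nn.Conv2d(base_ch, num_classes, 1),
        )

    def forward(self, x, modality):
        # Route each (unpaired) batch through the encoder of its own modality.
        return self.decoder(self.encoders[modality](x))

# Training would alternate batches from the two unpaired datasets, e.g.
# (hypothetical tensors):
#   model = DualStreamSegNet(num_classes=7)
#   loss = nn.CrossEntropyLoss()(model(ct_batch, "ct"), ct_labels)
#   loss = nn.CrossEntropyLoss()(model(mri_batch, "mri"), mri_labels)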