Imperial College London

DR BERNHARD KAINZ

Faculty of Engineering, Department of Computing

Lecturer

Contact

+44 (0)20 7594 8349 | b.kainz | Website | CV

Location

372 Huxley Building, South Kensington Campus



Publications

80 results found

Alansary A, Rajchl M, McDonagh SG, Murgasova M, Damodaram M, Lloyd DFA, Davidson A, Rutherford M, Hajnal JV, Rueckert D, Kainz B et al., 2017, PVR: Patch-to-Volume Reconstruction for Large Area Motion Correction of Fetal MRI, IEEE TRANSACTIONS ON MEDICAL IMAGING, Vol: 36, Pages: 2031-2044, ISSN: 0278-0062

JOURNAL ARTICLE

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2017, Human-level CMR image analysis with deep fully convolutional networks

Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing a wealth of information for sensitive and specific diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a dataset of unprecedented size, consisting of 4,875 subjects with 93,500 pixelwise annotated images, which is by far the largest annotated CMR dataset. By combining FCN with a large-scale annotated dataset, we show for the first time that an automated method achieves a performance on par with human experts in analysing CMR images and deriving clinical measures. We anticipate this to be a starting point for automated and comprehensive CMR analysis with human-level performance, facilitated by machine learning. It is an important advance on the pathway towards computer-assisted CVD assessment.

WORKING PAPER
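
The entry above describes a fully convolutional network (FCN) for pixel-wise CMR analysis. As a purely illustrative sketch (the layer sizes, class count and single-channel input are assumptions, not the published architecture), a minimal FCN for dense segmentation could look like this in PyTorch:

```python
# Hedged sketch: a tiny fully convolutional network for pixel-wise
# segmentation; channel counts and class count are illustrative only.
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    def __init__(self, in_ch=1, n_classes=4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        # A 1x1 convolution yields per-pixel class scores; upsampling
        # restores the input resolution (no fully connected layers).
        self.classifier = nn.Conv2d(32, n_classes, 1)
        self.upsample = nn.Upsample(scale_factor=2, mode="bilinear",
                                    align_corners=False)

    def forward(self, x):
        return self.upsample(self.classifier(self.encoder(x)))

x = torch.randn(1, 1, 128, 128)      # one single-channel CMR-like slice
logits = TinyFCN()(x)                # (1, n_classes, 128, 128)
print(logits.shape)
```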

Baumgartner CF, Kamnitsas K, Matthew J, Fletcher TP, Smith S, Koch LM, Kainz B, Rueckert D et al., 2017, SonoNet: Real-Time Detection and Localisation of Fetal Standard Scan Planes in Freehand Ultrasound, IEEE TRANSACTIONS ON MEDICAL IMAGING, Vol: 36, Pages: 2204-2215, ISSN: 0278-0062

JOURNAL ARTICLE

Cerrolaza JJ, Oktay O, Gomez A, Matthew J, Knight C, Kainz B, Rueckert D et al., 2017, Fetal skull segmentation in 3D ultrasound via structured geodesic random forest, Fetal, Infant and Ophthalmic Medical Image Analysis: International Workshop, FIFI 2017, and 4th International Workshop, OMIA 2017, Held in Conjunction with MICCAI 2017, Pages: 25-32, ISSN: 0302-9743

© Springer International Publishing AG 2017. Ultrasound is the primary imaging method for prenatal screening and diagnosis of fetal anomalies. Thanks to its non-invasive and non-ionizing properties, ultrasound allows quick, safe and detailed evaluation of the unborn baby, including the estimation of the gestational age, brain and cranium development. However, the accuracy of traditional 2D fetal biometrics is dependent on operator expertise and subjectivity in 2D plane finding and manual marking. 3D ultrasound has the potential to reduce the operator dependence. In this paper, we propose a new random forest-based segmentation framework for fetal 3D ultrasound volumes, able to efficiently integrate semantic and structural information in the classification process. We introduce a new semantic features space able to encode spatial context via generalized geodesic distance transform. Unlike alternative auto-context approaches, this new set of features is efficiently integrated into the same forest using contextual trees. Finally, we use a new structured labels space as alternative to the traditional atomic class labels, able to capture morphological variability of the target organ. Here, we show the potential of this new general framework segmenting the skull in 3D fetal ultrasound volumes, significantly outperforming alternative random forest-based approaches.

CONFERENCE PAPER
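
The abstract above encodes spatial context via a generalized geodesic distance transform. A hedged sketch of such a transform on a 2D image, computed with Dijkstra's algorithm over a 4-connected grid; the `lam` weighting and the random inputs are illustrative assumptions, not the authors' feature definition:

```python
# Hedged sketch: generalized geodesic distance from a set of seed pixels,
# mixing spatial steps with intensity differences (weighted by lam).
import heapq
import numpy as np

def geodesic_distance(image, seed_mask, lam=1.0):
    h, w = image.shape
    dist = np.full((h, w), np.inf)
    heap = []
    for y, x in zip(*np.nonzero(seed_mask)):
        dist[y, x] = 0.0
        heapq.heappush(heap, (0.0, y, x))
    while heap:
        d, y, x = heapq.heappop(heap)
        if d > dist[y, x]:
            continue
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                # Edge cost combines unit spatial step and intensity change.
                step = np.sqrt(1.0 + (lam * (image[ny, nx] - image[y, x])) ** 2)
                if d + step < dist[ny, nx]:
                    dist[ny, nx] = d + step
                    heapq.heappush(heap, (d + step, ny, nx))
    return dist

img = np.random.rand(64, 64)
seeds = np.zeros_like(img, dtype=bool)
seeds[32, 32] = True
print(geodesic_distance(img, seeds, lam=10.0).max())
```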

Hou B, Alansary A, McDonagh S, Davidson A, Rutherford M, Hajnal JV, Rueckert D, Glocker B, Kainz B et al., 2017, Predicting slice-to-volume transformation in presence of arbitrary subject motion, Pages: 296-304, ISSN: 0302-9743

© Springer International Publishing AG 2017. This paper aims to solve a fundamental problem in intensity-based 2D/3D registration, which concerns the limited capture range and need for very good initialization of state-of-the-art image registration methods. We propose a regression approach that learns to predict rotations and translations of arbitrary 2D image slices from 3D volumes, with respect to a learned canonical atlas co-ordinate system. To this end, we utilize Convolutional Neural Networks (CNNs) to learn the highly complex regression function that maps 2D image slices into their correct position and orientation in 3D space. Our approach is attractive in challenging imaging scenarios, where significant subject motion complicates reconstruction performance of 3D volumes from 2D slice data. We extensively evaluate the effectiveness of our approach quantitatively on simulated MRI brain data with extreme random motion. We further demonstrate qualitative results on fetal MRI where our method is integrated into a full reconstruction and motion compensation pipeline. With our CNN regression approach we obtain an average prediction error of 7 mm on simulated data, and convincing reconstruction quality of images of very young fetuses where previous methods fail. We further discuss applications to Computed Tomography (CT) and X-Ray projections. Our approach is a general solution to the 2D/3D initialization problem. It is computationally efficient, with prediction times per slice of a few milliseconds, making it suitable for real-time scenarios.

CONFERENCE PAPER
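
The abstract above frames 2D/3D initialization as regressing a rigid transform for each slice. A hedged sketch of that idea with an assumed 6-parameter pose output (3 rotations, 3 translations); the layer sizes are placeholders, not the published network:

```python
# Hedged sketch: a CNN regressing a rigid slice pose from a 2D image.
import torch
import torch.nn as nn

class PoseRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 6)   # (rx, ry, rz, tx, ty, tz)

    def forward(self, slices):
        return self.head(self.features(slices).flatten(1))

model = PoseRegressor()
pred = model(torch.randn(4, 1, 96, 96))                  # 4 motion-corrupted slices
loss = nn.functional.mse_loss(pred, torch.zeros(4, 6))   # dummy ground-truth poses
loss.backward()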

Kainz B, Bhatia K, Vaillant G, Zuluaga MA et al., 2017, Preface, ISBN: 9783319522791

BOOK

Kainz B, Bhatia K, Vercauteren T, 2017, Preface RAMBO 2017, ISBN: 9783319675633

BOOK

Kamnitsas K, Bai W, Ferrante E, McDonagh S, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D, Glocker B et al., 2017, Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation

Deep learning approaches such as convolutional neural nets have consistently outperformed previous methods on challenging tasks such as dense, semantic segmentation. However, the various proposed networks perform differently, with behaviour largely influenced by architectural choices and training settings. This paper explores Ensembles of Multiple Models and Architectures (EMMA) for robust performance through aggregation of predictions from a wide range of methods. The approach reduces the influence of the meta-parameters of individual models and the risk of overfitting the configuration to a particular database. EMMA can be seen as an unbiased, generic deep learning model which is shown to yield excellent performance, winning the first position in the BRATS 2017 competition among 50+ participating teams.

WORKING PAPER
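
The core mechanism described above, aggregating predictions from heterogeneous models, reduces to averaging class probabilities before taking the argmax. A toy sketch with random stand-in probability maps (not the BRATS models):

```python
# Hedged sketch: EMMA-style ensembling by averaging per-class probabilities.
import numpy as np

def ensemble_segmentation(prob_maps):
    """prob_maps: list of arrays shaped (n_classes, D, H, W)."""
    mean_probs = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return np.argmax(mean_probs, axis=0)          # consensus label map

rng = np.random.default_rng(0)
models = [rng.dirichlet(np.ones(4), size=(8, 16, 16)).transpose(3, 0, 1, 2)
          for _ in range(3)]                      # three toy "models"
print(ensemble_segmentation(models).shape)        # (8, 16, 16)
```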

McDonagh S, Hou B, Alansary A, Oktay O, Kamnitsas K, Rutherford M, Hajnal JV, Kainz B et al., 2017, Context-sensitive super-resolution for fast fetal magnetic resonance imaging, Pages: 116-126, ISSN: 0302-9743

© 2017, Springer International Publishing AG. 3D Magnetic Resonance Imaging (MRI) is often a trade-off between fast but low-resolution image acquisition and highly detailed but slow image acquisition. Fast imaging is required for targets that move to avoid motion artefacts. This is particularly difficult for fetal MRI. Spatially independent upsampling techniques, which are the state-of-the-art to address this problem, are error prone and disregard contextual information. In this paper we propose a context-sensitive upsampling method based on a residual convolutional neural network model that learns organ specific appearance and adapts semantically to input data, allowing for the generation of high resolution images with sharp edges and fine scale detail. By making contextual decisions about appearance and shape, present in different parts of an image, we gain a maximum of structural detail at a similar contrast as provided by high-resolution data. We experiment on 145 fetal scans and show that our approach yields an increased PSNR of 1.25 dB when applied to under-sampled fetal data cf. baseline upsampling. Furthermore, our method yields an increased PSNR of 1.73 dB when utilizing under-sampled fetal data to perform brain volume reconstruction on motion corrupted captured data.

CONFERENCE PAPER
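
The abstract above describes a residual, context-sensitive upsampling model. A hedged sketch of the general residual super-resolution pattern, where a CNN adds learned detail to a plain interpolation of the low-resolution input; sizes, depth and scale factor are illustrative assumptions, not the published model:

```python
# Hedged sketch: residual super-resolution (interpolation + learned residual).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualSR(nn.Module):
    def __init__(self, scale=2):
        super().__init__()
        self.scale = scale
        self.refine = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, lr):
        base = F.interpolate(lr, scale_factor=self.scale, mode="bilinear",
                             align_corners=False)
        return base + self.refine(base)   # plain upsampling plus detail residual

hr = ResidualSR()(torch.randn(1, 1, 64, 64))
print(hr.shape)                           # (1, 1, 128, 128)
```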

Miao H, Mistelbauer G, Karimov A, Alansary A, Davidson A, Lloyd DFA, Damodaram M, Story L, Hutter J, Hajnal JV, Rutherford M, Preim B, Kainz B, Groeller ME et al., 2017, Placenta Maps: In Utero Placental Health Assessment of the Human Fetus, IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, Vol: 23, Pages: 1612-1623, ISSN: 1077-2626

The human placenta is essential for the supply of the fetus. To monitor the fetal development, imaging data is acquired using ultrasound (US). Although it is currently the gold-standard in fetal imaging, it might not capture certain abnormalities of the placenta. Magnetic resonance imaging (MRI) is a safe alternative for the in utero examination while acquiring the fetus data in higher detail. Nevertheless, there is currently no established procedure for assessing the condition of the placenta and consequently the fetal health. Due to maternal respiration and inherent movements of the fetus during examination, a quantitative assessment of the placenta requires fetal motion compensation, precise placenta segmentation and a standardized visualization, which are challenging tasks. Utilizing advanced motion compensation and automatic segmentation methods to extract the highly versatile shape of the placenta, we introduce a novel visualization technique that presents the fetal and maternal side of the placenta in a standardized way. Our approach enables physicians to explore the placenta even in utero. This establishes the basis for a comparative assessment of multiple placentas to analyze possible pathologic arrangements and to support the research and understanding of this vital organ. Additionally, we propose a three-dimensional structure-aware surface slicing technique in order to explore relevant regions inside the placenta. Finally, to survey the applicability of our approach, we consulted clinical experts in prenatal diagnostics and i

JOURNAL ARTICLE

Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook S, de Marvao A, Dawes T, O'Regan D, Kainz B, Glocker B, Rueckert D et al., 2017, Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation.

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning based techniques. However, in most recent and promising techniques such as CNN based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learnt deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.

WORKING PAPER

Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook S, de Marvao A, Dawes T, O'Regan D, Kainz B, Glocker B, Rueckert D et al., 2017, Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation, IEEE Transactions on Medical Imaging, ISSN: 0278-0062

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning based techniques. However, in most recent and promising techniques such as CNN based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learnt deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.

JOURNAL ARTICLE
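
The ACNN idea above augments a pixel-wise loss with a shape-based regulariser. A hedged sketch of that pattern, using a randomly initialised stand-in encoder in place of the learnt non-linear shape representation described in the paper; all modules, weights and the loss weighting are placeholders:

```python
# Hedged sketch: pixel-wise loss plus a latent-space shape penalty.
import torch
import torch.nn as nn
import torch.nn.functional as F

shape_encoder = nn.Sequential(            # stands in for a pretrained shape model
    nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(8, 16, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)

def acnn_style_loss(pred_logits, target, weight=0.01):
    pred_prob = torch.sigmoid(pred_logits)
    pixel_loss = F.binary_cross_entropy(pred_prob, target)
    with torch.no_grad():
        z_target = shape_encoder(target)
    z_pred = shape_encoder(pred_prob)
    shape_loss = F.mse_loss(z_pred, z_target)   # anatomical regularisation term
    return pixel_loss + weight * shape_loss

pred = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.5).float()
acnn_style_loss(pred, target).backward()
```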

Pawlowski N, Ktena SI, Lee MCH, Kainz B, Rueckert D, Glocker B, Rajchl M et al., 2017, DLTK: State of the Art Reference Implementations for Deep Learning on Medical Images

We present DLTK, a toolkit providing baseline implementations for efficient experimentation with deep learning methods on biomedical images. It builds on top of TensorFlow and its high modularity and easy-to-use examples allow for a low-threshold access to state-of-the-art implementations for typical medical imaging problems. A comparison of DLTK's reference implementations of popular network architectures for image segmentation demonstrates new top performance on the publicly available challenge data "Multi-Atlas Labeling Beyond the Cranial Vault". The average test Dice similarity coefficient of 81.5 exceeds the previously best performing CNN (75.7) and the accuracy of the challenge winning method (79.0).

WORKING PAPER
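
The Dice similarity coefficients quoted above (81.5 vs. 75.7 and 79.0) are computed from overlapping binary masks; a minimal sketch on random toy volumes:

```python
# Hedged sketch: Dice similarity coefficient between two binary masks.
import numpy as np

def dice(a, b, eps=1e-8):
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

rng = np.random.default_rng(0)
pred = rng.random((64, 64, 64)) > 0.5     # toy predicted mask
ref = rng.random((64, 64, 64)) > 0.5      # toy reference mask
print(f"Dice: {100 * dice(pred, ref):.1f}")
```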

Rajchl M, Lee MCH, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, Damodaram M, Rutherford MA, Hajnal JV, Kainz B, Rueckert D et al., 2017, DeepCut: Object Segmentation From Bounding Box Annotations Using Convolutional Neural Networks, IEEE TRANSACTIONS ON MEDICAL IMAGING, Vol: 36, Pages: 674-683, ISSN: 0278-0062

JOURNAL ARTICLE

Toisoul A, Rueckert D, Kainz B, 2017, Accessible GLSL Shader programming, EuroGraphics 2017, ISSN: 1017-4656

Teaching fundamental principles of Computer Graphics requires a thoroughly prepared lecture alongside practical training. Modern graphics programming rarely provides a straightforward application programming interface (API) and the available APIs pose high entry barriers to students. Shader-based programming of standard graphics pipelines is often inaccessible through complex setup procedures and convoluted programming environments. In this paper we discuss an undergraduate entry-level lecture with its accompanying lab exercises. We present a programming framework that makes interactive graphics programming accessible while allowing individual tasks to be designed as instructive exercises that solidify the content of individual lecture units. The discussed teaching framework provides a well-defined programmable graphics pipeline with geometry shading stages and image-based post-processing functionality based on framebuffer objects. It is open source and available online.

CONFERENCE PAPER

Alansary A, Kamnitsas K, Davidson A, Khlebnikov R, Rajchl M, Malamateniou C, Rutherford M, Hajnal JV, Glocker B, Rueckert D, Kainz B et al., 2016, Fast fully automatic segmentation of the human placenta from motion corrupted MRI, Pages: 589-597, ISSN: 0302-9743

© Springer International Publishing AG 2016. Recently, magnetic resonance imaging has been shown to be important for the evaluation of the placenta's health during pregnancy. Quantitative assessment of the placenta requires a segmentation, which proves to be challenging because of the high variability of its position, orientation, shape and appearance. Moreover, image acquisition is corrupted by motion artifacts from both fetal and maternal movements. In this paper we propose a fully automatic segmentation framework of the placenta from structural T2-weighted scans of the whole uterus, as well as an extension in order to provide an intuitive pre-natal view into this vital organ. We adopt a 3D multi-scale convolutional neural network to automatically identify placental candidate pixels. The resulting classification is subsequently refined by a 3D dense conditional random field, so that a high resolution placental volume can be reconstructed from multiple overlapping stacks of slices. Our segmentation framework has been tested on 66 subjects at gestational ages 20–38 weeks, achieving a Dice score of 71.95 ± 19.79% for healthy fetuses with a fixed scan sequence and 66.89 ± 15.35% for a cohort mixed with cases of intrauterine fetal growth restriction using varying scan parameters.

CONFERENCE PAPER

Baumgartner CF, Kamnitsas K, Matthew J, Smith S, Kainz B, Rueckert D et al., 2016, Real-time standard scan plane detection and localisation in fetal ultrasound using fully convolutional neural networks, Pages: 203-211, ISBN: 9783319467221

© Springer International Publishing AG 2016. Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views.

BOOK CHAPTER

Baumgartner CF, Kamnitsas K, Matthew J, Smith S, Kainz B, Rueckert D et al., 2016, Real-time standard scan plane detection and localisation in fetal ultrasound using fully convolutional neural networks, International Conference on Medical Image Computing and Computer-Assisted Intervention MICCAI 2016, Publisher: Springer, Pages: 203-211

Fetal mid-pregnancy scans are typically carried out according to fixed protocols. Accurate detection of abnormalities and correct biometric measurements hinge on the correct acquisition of clearly defined standard scan planes. Locating these standard planes requires a high level of expertise. However, there is a worldwide shortage of expert sonographers. In this paper, we consider a fully automated system based on convolutional neural networks which can detect twelve standard scan planes as defined by the UK fetal abnormality screening programme. The network design allows real-time inference and can be naturally extended to provide an approximate localisation of the fetal anatomy in the image. Such a framework can be used to automate or assist with scan plane selection, or for the retrospective retrieval of scan planes from recorded videos. The method is evaluated on a large database of 1003 volunteer mid-pregnancy scans. We show that standard planes acquired in a clinical scenario are robustly detected with a precision and recall of 69% and 80%, which is superior to the current state-of-the-art. Furthermore, we show that it can retrospectively retrieve correct scan planes with an accuracy of 71% for cardiac views and 81% for non-cardiac views.

CONFERENCE PAPER
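
The precision and recall figures quoted above reduce to simple counts over detected versus annotated standard planes; a minimal sketch on toy labels:

```python
# Hedged sketch: precision and recall from binary detection/annotation labels.
import numpy as np

def precision_recall(pred, truth):
    tp = np.logical_and(pred, truth).sum()
    precision = tp / max(pred.sum(), 1)
    recall = tp / max(truth.sum(), 1)
    return precision, recall

pred = np.array([1, 1, 0, 1, 0, 1], dtype=bool)    # detected planes (toy)
truth = np.array([1, 0, 0, 1, 1, 1], dtype=bool)   # annotated planes (toy)
print(precision_recall(pred, truth))
```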

Kainz B, Alansary A, McDonagh ST, Keraudren K, Kuklisova-Murgasova M et al., 2016, Fast motion compensation and super-resolution from multiple stacks of 2D slices

This tool implements a novel method for the correction of motion artifacts as acquired in fetal Magnetic Resonance Imaging (MRI) scans of the whole uterus. Contrary to current slice-to-volume registration (SVR) methods, requiring an inflexible enclosure of a single investigated organ, the proposed patch-to-volume reconstruction (PVR) approach is able to reconstruct a large field of view of non-rigidly deforming structures. It relaxes rigid motion assumptions by introducing a defined amount of redundant information that is addressed with parallelized patch-wise optimization and automatic outlier rejection. We further describe and provide an efficient parallel implementation of PVR allowing its execution within reasonable time on commercially available graphics processing units (GPU), enabling its use in the clinical practice. We evaluate PVR's computational overhead compared to standard methods and observe improved reconstruction accuracy in presence of affine motion artifacts of approximately 30% compared to conventional SVR in synthetic experiments. Furthermore, we have verified our method qualitatively and quantitatively on real fetal MRI data subject to maternal breathing and sudden fetal movements. We evaluate peak-signal-to-noise ratio (PSNR), structural similarity index (SSIM), and cross correlation (CC) with respect to the originally acquired data and provide a method for visual inspection of reconstruction uncertainty. With these experiments we demonstrate successful application of PVR motion compensation to the whole uterus, the human fetus, and the human placenta.

SOFTWARE
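
Two of the reconstruction metrics named above, PSNR and normalised cross correlation, can be sketched as follows; the random volumes below are placeholders for a reconstructed volume and the originally acquired data:

```python
# Hedged sketch: PSNR and normalised cross correlation between two volumes.
import numpy as np

def psnr(ref, rec, data_range=1.0):
    mse = np.mean((ref - rec) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def cross_correlation(ref, rec):
    a = ref - ref.mean()
    b = rec - rec.mean()
    return np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2))

rng = np.random.default_rng(1)
ref = rng.random((32, 32, 32))                               # "acquired" data
rec = np.clip(ref + 0.05 * rng.standard_normal(ref.shape), 0, 1)  # "reconstruction"
print(f"PSNR {psnr(ref, rec):.2f} dB, CC {cross_correlation(ref, rec):.3f}")
```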

Kainz B, Toisoul A, 2016, ShaderLab Framework

ShaderLab is a teaching tool to solidify the fundamentals of Computer Graphics. The ShaderLab framework is based on Qt5, CMake, OpenGL 4.0, and GLSL and allows the student to modify GLSL shaders in an IDE-like environment. The framework is able to render shaded polyhedral geometry (.off/.obj), supports image-based post-processing, and allows students to implement simple ray-tracing algorithms. This tool will be intensively tested by 140 CO317 Computer Graphics students in Spring 2017.

SOFTWARE

Lloyd D, Kainz B, van Amerom JF, Lohezic M, Pushparajah K, Simpson JM, Malamateniou C, Hajnal JV, Rutherford M, Razavi R et al., 2016, Prenatal MRI visualisation of the aortic arch and fetal vasculature using motion-corrected slice-to-volume reconstruction, Journal of Cardiovascular Magnetic Resonance, Vol: 18, Pages: P180-P180, ISSN: 1532-429X

JOURNAL ARTICLE

Rajchl M, Lee M, Oktay O, Kamnitsas K, Passerat-Palmbach J, Bai W, Kainz B, Rueckert D et al., 2016, DeepCut: Object Segmentation from Bounding Box Annotations using Convolutional Neural Networks, Publisher: arXiv:1605.07866

In this paper, we propose DeepCut, a method to obtain pixelwise object segmentations given an image dataset labelled with bounding box annotations. It extends the approach of the well-known GrabCut method to include machine learning by training a neural network classifier from bounding box annotations. We formulate the problem as an energy minimisation problem over a densely-connected conditional random field and iteratively update the training targets to obtain pixelwise object segmentations. Additionally, we propose variants of the DeepCut method and compare those to a naive approach to CNN training under weak supervision. We test its applicability to solve brain and lung segmentation problems on a challenging fetal magnetic resonance dataset and obtain encouraging results in terms of accuracy.

OTHER
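
A hedged toy illustration of the iterative idea behind DeepCut, initialising training targets from a bounding box and repeatedly relabelling pixels inside it: here a simple Gaussian intensity model stands in for the paper's CNN classifier and densely connected CRF, and all data is synthetic.

```python
# Hedged toy sketch: iterative refinement of bounding-box training targets.
import numpy as np

rng = np.random.default_rng(0)
image = rng.normal(0.2, 0.05, (64, 64))
image[20:44, 20:44] = rng.normal(0.8, 0.05, (24, 24))    # bright "object"

bbox = np.zeros_like(image, dtype=bool)
bbox[16:48, 16:48] = True                                # loose box annotation
labels = bbox.copy()                                     # initial training targets

for _ in range(5):
    fg_mu, fg_sd = image[labels].mean(), image[labels].std() + 1e-6
    bg_mu, bg_sd = image[~labels].mean(), image[~labels].std() + 1e-6
    # Relabel only inside the bounding box by foreground/background likelihood.
    fg_ll = -((image - fg_mu) ** 2) / (2 * fg_sd ** 2) - np.log(fg_sd)
    bg_ll = -((image - bg_mu) ** 2) / (2 * bg_sd ** 2) - np.log(bg_sd)
    labels = bbox & (fg_ll > bg_ll)

print("estimated object pixels:", labels.sum())          # close to 24 * 24 = 576
```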

Rajchl M, Lee MCH, Schrans F, Davidson A, Passerat-Palmbach J, Tarroni G, Alansary A, Oktay O, Kainz B, Rueckert D et al., 2016, Learning under Distributed Weak Supervision

The availability of training data for supervision is a frequently encountered bottleneck of medical image analysis methods. While typically established by a clinical expert rater, the increase in acquired imaging data renders traditional pixel-wise segmentations less feasible. In this paper, we examine the use of a crowdsourcing platform for the distribution of super-pixel weak annotation tasks and collect such annotations from a crowd of non-expert raters. The crowd annotations are subsequently used for training a fully convolutional neural network to address the problem of fetal brain segmentation in T2-weighted MR images. Using this approach we report encouraging results compared to highly targeted, fully supervised methods and potentially address a frequent problem impeding image analysis research.

OTHER
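
The weak annotation tasks described above are defined over super-pixels. A hedged sketch of such an over-segmentation using SLIC from scikit-image; the astronaut test image is only a placeholder for a T2-weighted MR slice, and the segment count is an arbitrary choice:

```python
# Hedged sketch: super-pixel over-segmentation with SLIC (scikit-image).
import numpy as np
from skimage.data import astronaut
from skimage.segmentation import slic

image = astronaut()                                   # placeholder RGB image
segments = slic(image, n_segments=200, compactness=10)
print("number of superpixels:", len(np.unique(segments)))
```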

Rueckert D, Glocker B, Kainz B, 2016, Learning clinically useful information from images: Past, present and future, MEDICAL IMAGE ANALYSIS, Vol: 33, Pages: 13-18, ISSN: 1361-8415

JOURNAL ARTICLE

Steinberger M, Kenzel M, Kainz B, 2016, ScatterAlloc

ScatterAlloc is a dynamic memory allocator for the GPU. It is designed for the requirements of massively parallel execution. ScatterAlloc greatly reduces collisions and congestion by scattering memory requests based on hashing. It can deal with thousands of GPU threads concurrently allocating memory and its execution time is almost independent of the thread count. ScatterAlloc is open source and easy to use in CUDA projects.

SOFTWARE
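
ScatterAlloc itself is a CUDA/C++ allocator; the following toy Python sketch only illustrates its central idea of hashing allocation requests onto scattered pages so that concurrent threads rarely queue on the same memory region (page count and hash constants are arbitrary assumptions):

```python
# Hedged toy sketch: hash-based scattering of allocation requests over pages.
from collections import Counter

N_PAGES = 1024

def pick_page(thread_id, attempt=0):
    # Hash the thread id (and retry attempt) onto a page index, spreading
    # simultaneous requests over many pages instead of one shared queue.
    return (thread_id * 2654435761 + attempt * 40503) % N_PAGES

loads = Counter(pick_page(tid) for tid in range(4096))
print("busiest page serves", max(loads.values()), "of 4096 requests")
```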

Alansary A, Lee M, Keraudren K, Kainz B, Malamateniou C, Rutherford M, Hajnal JV, Glocker B, Rueckert D et al., 2015, Automatic Brain Localization in Fetal MRI Using Superpixel Graphs, 1st International Workshop on Medical Learning Meets Medical Imaging (MLMMI), Publisher: SPRINGER INT PUBLISHING AG, Pages: 13-22, ISSN: 0302-9743

CONFERENCE PAPER

Bowles C, Nowlan NC, Hayat TTA, Malamateniou C, Rutherford M, Hajnal JV, Rueckert D, Kainz B et al., 2015, Machine learning for the automatic localisation of foetal body parts in cine-MRI scans, Conference on Medical Imaging - Image Processing, Publisher: SPIE-INT SOC OPTICAL ENGINEERING, ISSN: 0277-786X

CONFERENCE PAPER

Egger J, Busse H, Brandmaier P, Seider D, Gawlitza M, Strocka S, Voglreiter P, Dokter M, Hofmann M, Kainz B, Chen X, Hann A, Boechat P, Yu W, Freisleben B, Alhonnoro T, Pollari M, Moche M, Schmalstieg D et al., 2015, RFA-Cut: Semi-automatic Segmentation of Radiofrequency Ablation Zones with and without Needles via Optimal s-t-Cuts, 37th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Publisher: IEEE, Pages: 2423-2429, ISSN: 1557-170X

CONFERENCE PAPER

Egger J, Busse H, Brandmaier P, Seider D, Gawlitza M, Strocka S, Voglreiter P, Dokter M, Hofmann M, Kainz B, Hann A, Chen X, Alhonnoro T, Pollari M, Schmalstieg D, Moche M et al., 2015, Interactive Volumetry Of Liver Ablation Zones, SCIENTIFIC REPORTS, Vol: 5, ISSN: 2045-2322

JOURNAL ARTICLE

Kainz B, Alansary A, Malamateniou C, Keraudren K, Rutherford M, Hajnal JV, Rueckert D et al., 2015, Flexible Reconstruction and Correction of Unpredictable Motion from Stacks of 2D Images, 18th International Conference on Medical Image Computing and Computer-Assisted Intervention (MICCAI), Publisher: SPRINGER INT PUBLISHING AG, Pages: 555-562, ISSN: 0302-9743

CONFERENCE PAPER

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
