Imperial College London

DR BERNHARD KAINZ

Faculty of Engineering, Department of Computing

Reader in Medical Image Computing
 
 
 

Contact

 

+44 (0)20 7594 8349 · b.kainz · Website · CV

 
 

Location

 

372 Huxley Building, South Kensington Campus



 

Publications


202 results found

Lloyd DFA, Pushparajah K, Simpson JM, van Amerom JFP, van Poppel MPM, Schulz A, Kainz B, Deprez M, Lohezic M, Allsop J, Mathur S, Bellsham-Revell H, Vigneswaran T, Charakida M, Miller O, Zidere V, Sharland G, Rutherford M, Hajnal JV, Razavi R et al., 2019, Three-dimensional visualisation of the fetal heart using prenatal MRI with motion-corrected slice-volume registration: a prospective, single-centre cohort study, The Lancet, Vol: 393, Pages: 1619-1627, ISSN: 0140-6736

Background: Two-dimensional (2D) ultrasound echocardiography is the primary technique used to diagnose congenital heart disease before birth. There is, however, a longstanding need for a reliable form of secondary imaging, particularly in cases when more detailed three-dimensional (3D) vascular imaging is required, or when ultrasound windows are of poor diagnostic quality. Fetal MRI, which is well established for other organ systems, is highly susceptible to fetal movement, particularly for 3D imaging. The objective of this study was to investigate the combination of prenatal MRI with novel, motion-corrected 3D image registration software, as an adjunct to fetal echocardiography in the diagnosis of congenital heart disease. Methods: Pregnant women carrying a fetus with known or suspected congenital heart disease were recruited via a tertiary fetal cardiology unit. After initial validation experiments to assess the general reliability of the approach, MRI data were acquired in 85 consecutive fetuses, as overlapping stacks of 2D images. These images were then processed with a bespoke open-source reconstruction algorithm to produce a super-resolution 3D volume of the fetal thorax. These datasets were assessed with measurement comparison with paired 2D ultrasound, structured anatomical assessment of the 2D and 3D data, and contemporaneous, archived clinical fetal MRI reports, which were compared with postnatal findings after delivery. Findings: Between Oct 8, 2015, and June 30, 2017, 101 patients were referred for MRI, of whom 85 were eligible and had fetal MRI. The mean gestational age at the time of MRI was 32 weeks (range 24–36). High-resolution (0·50–0·75 mm isotropic) 3D datasets of the fetal thorax were generated in all 85 cases. Vascular measurements showed good overall agreement with 2D echocardiography in 51 cases with paired data (intra-class correlation coefficient 0·78, 95% CI 0·68–0·84), with fetal vascular struc

Journal article

Alansary A, Oktay O, Li Y, Folgoc LL, Hou B, Vaillant G, Kamnitsas K, Vlontzos A, Glocker B, Kainz B, Rueckert D et al., 2019, Evaluating reinforcement learning agents for anatomical landmark detection, Medical Image Analysis, Vol: 53, Pages: 156-164, ISSN: 1361-8415

Automatic detection of anatomical landmarks is an important step for a wide range of applications in medical image analysis. Manual annotation of landmarks is a tedious task and prone to observer errors. In this paper, we evaluate novel deep reinforcement learning (RL) strategies to train agents that can precisely and robustly localize target landmarks in medical scans. An artificial RL agent learns to identify the optimal path to the landmark by interacting with an environment, in our case 3D images. Furthermore, we investigate the use of fixed- and multi-scale search strategies with novel hierarchical action steps in a coarse-to-fine manner. Several deep Q-network (DQN) architectures are evaluated for detecting multiple landmarks using three different medical imaging datasets: fetal head ultrasound (US), adult brain and cardiac magnetic resonance imaging (MRI). The performance of our agents surpasses state-of-the-art supervised and RL methods. Our experiments also show that multi-scale search strategies perform significantly better than fixed-scale agents in images with large field of view and noisy background such as in cardiac MRI. Moreover, the novel hierarchical steps can significantly speed up the searching process by a factor of 4-5 times.
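
As a rough illustration of the inference loop such an agent runs (a hypothetical sketch, not the authors' code; the patch size, step schedule and termination rule are all assumptions), a trained Q-network can be queried greedily while the search scale is refined whenever the agent starts to oscillate:

# Illustrative greedy, multi-scale inference for a trained 3D landmark agent.
# 'q_network' is any callable mapping a cubic image patch to six Q-values,
# one per +/- step along each axis; every name here is a placeholder.
import numpy as np

ACTIONS = np.array([[1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0],
                    [0, 0, 1], [0, 0, -1]])

def crop_patch(volume, centre, size=25):
    """Cubic patch around 'centre', zero-padded at the volume borders."""
    half = size // 2
    padded = np.pad(volume, half, mode="constant")
    c = np.asarray(centre) + half
    return padded[c[0]-half:c[0]+half+1,
                  c[1]-half:c[1]+half+1,
                  c[2]-half:c[2]+half+1]

def localise(volume, q_network, start, step=8, max_steps=200):
    """Follow the best action; halve the step whenever a position is revisited."""
    pos = np.asarray(start)
    visited = {tuple(pos)}
    for _ in range(max_steps):
        q_values = q_network(crop_patch(volume, pos))
        pos = np.clip(pos + step * ACTIONS[int(np.argmax(q_values))],
                      0, np.array(volume.shape) - 1)
        if tuple(pos) in visited:      # oscillation -> refine the search scale
            step //= 2
            if step == 0:
                break
        visited.add(tuple(pos))
    return pos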

Journal article

Schlemper J, Oktay O, Schaap M, Heinrich M, Kainz B, Glocker B, Rueckert D et al., 2019, Attention gated networks: Learning to leverage salient regions in medical images, Medical Image Analysis, Vol: 53, Pages: 197-207, ISSN: 1361-8415

Journal article

Robinson R, Valindria VV, Bai W, Oktay O, Kainz B, Suzuki H, Sanghvi MM, Aung N, Paiva JÉM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Piechnik SK, Neubauer S, Petersen SE, Page C, Matthews PM, Rueckert D, Glocker B et al., 2019, Automated quality control in image segmentation: application to the UK Biobank cardiac MR imaging study, Journal of Cardiovascular Magnetic Resonance, Vol: 21, ISSN: 1097-6647

Background: The trend towards large-scale studies including population imaging poses new challenges in terms of quality control (QC). This is a particular issue when automatic processing tools, e.g. image segmentation methods, are employed to derive quantitative measures or biomarkers for later analyses. Manual inspection and visual QC of each segmentation is not feasible at large scale. However, it is important to be able to automatically detect when a segmentation method fails so as to avoid inclusion of wrong measurements into subsequent analyses which could lead to incorrect conclusions. Methods: To overcome this challenge, we explore an approach for predicting segmentation quality based on Reverse Classification Accuracy (RCA), which enables us to discriminate between successful and failed segmentations on a per-case basis. We validate this approach on a new, large-scale manually-annotated set of 4,800 cardiac magnetic resonance scans. We then apply our method to a large cohort of 7,250 cardiac MRI scans on which we have performed manual QC. Results: We report results for predicting segmentation quality metrics, including Dice Similarity Coefficient (DSC) and surface-distance measures. As initial validation, we present data for 400 scans demonstrating 99% accuracy for classifying low- and high-quality segmentations using predicted DSC scores. As further validation we show high correlation between real and predicted scores and 95% classification accuracy on 4,800 scans for which manual segmentations were available. We mimic real-world application of the method on 7,250 cardiac MRI scans, where we show good agreement between predicted quality metrics and manual visual QC scores. Conclusions: We show that RCA has the potential for accurate and fully automatic segmentation QC on a per-case basis in the context of large-scale population imaging as in the UK Biobank Imaging Study.

Journal article

Grzech D, Folgoc LL, Heinrich MP, Khanal B, Moll J, Schnabel JA, Glocker B, Kainz B et al., 2019, FastReg: Fast Non-Rigid Registration via Accelerated Optimisation on the Manifold of Diffeomorphisms

We present a new approach to diffeomorphic non-rigid registration of medical images. The method is based on optical flow and warps images via gradient flow with the standard $L^2$ inner product. To compute the transformation, we rely on accelerated optimisation on the manifold of diffeomorphisms. We achieve regularity properties of Sobolev gradient flows, which are expensive to compute, owing to a novel method of averaging the gradients in time rather than space. We successfully register brain MRI and challenging abdominal CT scans at speeds orders of magnitude faster than previous approaches. We make our code available in a public repository: https://github.com/dgrzech/fastreg
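
Reading the abstract, the update can be pictured as an accelerated (momentum-style) descent on a plain sum-of-squared-differences energy; the notation below is mine and only sketches that idea, it is not taken from the paper:

E(\phi) = \tfrac{1}{2}\,\lVert I \circ \phi - J \rVert_{L^2}^2, \qquad
v_{k+1} = \beta\, v_k - \tau\, \nabla_{L^2} E(\phi_k), \qquad
\phi_{k+1} = \phi_k \circ \exp(v_{k+1}),

where the running average $v_k$ over past gradients provides the temporal smoothing of the $L^2$ gradient mentioned above, and composition with $\exp(v_{k+1})$ keeps the iterates diffeomorphic.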

Working paper

Sinclair M, Baumgartner CF, Matthew J, Bai W, Martinez JC, Li Y, Smith S, Knight CL, Kainz B, Hajnal J, King AP, Rueckert D et al., 2018, Human-level performance on automatic head biometrics in fetal ultrasound using fully convolutional neural networks, International Engineering in Medicine and Biology Conference, Pages: 714-717

Measurement of head biometrics from fetal ultrasonography images is of key importance in monitoring the healthy development of fetuses. However, the accurate measurement of relevant anatomical structures is subject to large inter-observer variability in the clinic. To address this issue, an automated method utilizing Fully Convolutional Networks (FCN) is proposed to determine measurements of fetal head circumference (HC) and biparietal diameter (BPD). An FCN was trained on approximately 2000 2D ultrasound images of the head with annotations provided by 45 different sonographers during routine screening examinations to perform semantic segmentation of the head. An ellipse is fitted to the resulting segmentation contours to mimic the annotation typically produced by a sonographer. The model's performance was compared with inter-observer variability, where two experts manually annotated 100 test images. Mean absolute model-expert error was slightly better than inter-observer error for HC (1.99mm vs 2.16mm), and comparable for BPD (0.61mm vs 0.59mm), as well as Dice coefficient (0.980 vs 0.980). Our results demonstrate that the model performs at a level similar to a human expert, and learns to produce accurate predictions from a large dataset annotated by many sonographers. Additionally, measurements are generated in near real-time at 15 fps on a GPU, which could speed up clinical workflow for both skilled and trainee sonographers.
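
The ellipse-fitting step described above can be sketched as a short post-processing routine (an illustration under assumed conventions, not the authors' implementation; the pixel spacing, the use of OpenCV and the approximation of BPD by the minor axis are assumptions):

# Fit an ellipse to a binary head segmentation and derive HC/BPD (OpenCV >= 4).
import cv2
import numpy as np

def head_biometrics(mask, pixel_spacing_mm=0.2):
    """Return (head circumference, biparietal diameter) in millimetres."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)          # largest connected contour
    (_, _), (d1, d2), _ = cv2.fitEllipse(contour)         # full axis lengths in pixels
    a, b = max(d1, d2) / 2.0, min(d1, d2) / 2.0           # semi-axes
    h = ((a - b) ** 2) / ((a + b) ** 2)
    hc = np.pi * (a + b) * (1 + 3 * h / (10 + np.sqrt(4 - 3 * h)))  # Ramanujan perimeter
    bpd = 2 * b                                           # simplification: minor axis
    return hc * pixel_spacing_mm, bpd * pixel_spacing_mm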

Conference paper

Lloyd DFA, Pushparajah K, Simpson JM, van Amerom JFP, van Poppel M, Schulz A, Kainz B, Kuklisova-Murgasova M, Lohezic M, Allsop J, Mathur S, Bellsham-Revell H, Vigneswaran T, Charakida M, Miller O, Zidere V, Sharland G, Rutherford M, Hajnal J, Razavi R et al., 2018, Three-dimensional visualisation of the fetal heart using prenatal MRI with motion corrected slice-volume registration, The Lancet, ISSN: 0140-6736

Journal article

Cerrolaza JJ, Li Y, Biffi C, Gomez A, Sinclair M, Matthew J, Knight C, Kainz B, Rueckert D et al., 2018, 3D fetal skull reconstruction from 2DUS via deep conditional generative networks, International Conference on Medical Image Computing and Computer-Assisted Intervention, Pages: 383-391, ISSN: 0302-9743

2D ultrasound (US) is the primary imaging modality in antenatal healthcare. Despite the limitations of traditional 2D biometrics to characterize the true 3D anatomy of the fetus, the adoption of 3DUS is still very limited. This is particularly significant in developing countries and remote areas, due to the lack of experienced sonographers and the limited access to 3D technology. In this paper, we present a new deep conditional generative network for the 3D reconstruction of the fetal skull from 2DUS standard planes of the head routinely acquired during the fetal screening process. Based on the generative properties of conditional variational autoencoders (CVAE), our reconstruction architecture (REC-CVAE) directly integrates the three US standard planes as conditional variables to generate a unified latent space of the skull. Additionally, we propose HiREC-CVAE, a hierarchical generative network based on the different clinical relevance of each predictive view. The hierarchical structure of HiREC-CVAE allows the network to learn a sequence of nested latent spaces, providing superior predictive capabilities even in the absence of some of the 2DUS scans. The performance of the proposed architectures was evaluated on a dataset of 72 cases, showing accurate reconstruction capabilities from standard non-registered 2DUS images.

Conference paper

Tanno R, Makropoulos A, Arslan S, Oktay O, Mischkewitz S, Al-Noor F, Oppenheimer J, Mandegaran R, Kainz B, Heinrich MP et al., 2018, AutoDVT: Joint real-time classification for vein compressibility analysis in deep vein thrombosis ultrasound diagnostics, International Conference on Medical Image Computing and Computer-Assisted Intervention, Pages: 905-912, ISSN: 0302-9743

We propose a dual-task convolutional neural network (CNN) to fully automate the real-time diagnosis of deep vein thrombosis (DVT). DVT can be reliably diagnosed through evaluation of vascular compressibility at anatomically defined landmarks in streams of ultrasound (US) images. The combined real-time evaluation of these tasks has never been achieved before. As proof-of-concept, we evaluate our approach on two selected landmarks of the femoral vein, which can be identified with high accuracy by our approach. Our CNN is able to identify if a vein fully compresses with an F1 score of more than 90% while applying manual pressure with the ultrasound probe. Fully compressible veins robustly rule out DVT and such patients do not need to be referred for further specialist examination. We have evaluated our method on 1150 5–10 s compression image sequences from 115 healthy volunteers, which results in a data set size of approximately 200k labelled images. Our method yields a theoretical inference frame rate of more than 500 fps and we thoroughly evaluate the performance of 15 possible configurations.
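
A minimal sketch of what a dual-task network of this kind can look like (the layer sizes, heads and input resolution are assumptions, not the AutoDVT architecture): a shared convolutional trunk feeds one head that classifies which anatomical landmark is in view and one head that classifies whether the vein is compressed.

import torch
import torch.nn as nn

class DualTaskNet(nn.Module):
    """Shared trunk with two classification heads (illustrative only)."""
    def __init__(self, n_landmarks=2, n_states=2):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.landmark_head = nn.Linear(32, n_landmarks)   # which landmark is visible
        self.compress_head = nn.Linear(32, n_states)      # vein open vs fully compressed

    def forward(self, frame):
        features = self.trunk(frame)
        return self.landmark_head(features), self.compress_head(features)

# One grey-scale ultrasound frame, e.g. 128 x 128 pixels
landmark_logits, compress_logits = DualTaskNet()(torch.randn(1, 1, 128, 128))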

Conference paper

Li Y, Alansary A, Cerrolaza J, Khanal B, Sinclair M, Matthew J, Gupta C, Knight C, Kainz B, Rueckert D et al., 2018, Fast multiple landmark localisation using a patch-based iterative network, 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Verlag, ISSN: 0302-9743

We propose a new Patch-based Iterative Network (PIN) for fast and accurate landmark localisation in 3D medical volumes. PIN utilises a Convolutional Neural Network (CNN) to learn the spatial relationship between an image patch and anatomical landmark positions. During inference, patches are repeatedly passed to the CNN until the estimated landmark position converges to the true landmark location. PIN is computationally efficient since the inference stage only selectively samples a small number of patches in an iterative fashion rather than a dense sampling at every location in the volume. Our approach adopts a multi-task learning framework that combines regression and classification to improve localisation accuracy. We extend PIN to localise multiple landmarks by using principal component analysis, which models the global anatomical relationships between landmarks. We have evaluated PIN using 72 3D ultrasound images from fetal screening examinations. PIN achieves quantitatively an average landmark localisation error of 5.59mm and a runtime of 0.44s to predict 10 landmarks per volume. Qualitatively, anatomical 2D standard scan planes derived from the predicted landmark locations are visually similar to the clinical ground truth.
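
The iterative inference described above reduces to a simple loop once a displacement regressor is available; the sketch below is illustrative (patch size, tolerance and the 'displacement_net' callable are placeholders, not the published PIN model):

import numpy as np

def iterative_localise(volume, displacement_net, start, patch=31,
                       max_iters=50, tol=0.5):
    """Move the landmark estimate by the regressed offset until it converges."""
    half = patch // 2
    padded = np.pad(volume, half, mode="constant")
    pos = np.asarray(start, dtype=float)
    for _ in range(max_iters):
        c = np.round(pos).astype(int) + half
        p = padded[c[0]-half:c[0]+half+1,
                   c[1]-half:c[1]+half+1,
                   c[2]-half:c[2]+half+1]
        offset = np.asarray(displacement_net(p))    # predicted step towards the landmark
        pos = np.clip(pos + offset, 0, np.array(volume.shape) - 1)
        if np.linalg.norm(offset) < tol:            # converged
            break
    return pos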

Conference paper

Li Y, Khanal B, Hou B, Alansary A, Cerrolaza J, Sinclair M, Matthew J, Gupta C, Knight C, Kainz B, Rueckert D et al., 2018, Standard plane detection in 3D fetal ultrasound using an iterative transformation network, 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Verlag, Pages: 392-400, ISSN: 0302-9743

Standard scan plane detection in fetal brain ultrasound (US) forms a crucial step in the assessment of fetal development. In clinical settings, this is done by manually manoeuvring a 2D probe to the desired scan plane. With the advent of 3D US, the entire fetal brain volume containing these standard planes can be easily acquired. However, manual standard plane identification in a 3D volume is labour-intensive and requires expert knowledge of fetal anatomy. We propose a new Iterative Transformation Network (ITN) for the automatic detection of standard planes in 3D volumes. ITN uses a convolutional neural network to learn the relationship between a 2D plane image and the transformation parameters required to move that plane towards the location/orientation of the standard plane in the 3D volume. During inference, the current plane image is passed iteratively to the network until it converges to the standard plane location. We explore the effect of using different transformation representations as regression outputs of ITN. Under a multi-task learning framework, we introduce additional classification probability outputs to the network to act as confidence measures for the regressed transformation parameters in order to further improve the localisation accuracy. When evaluated on 72 US volumes of fetal brain, our method achieves an error of 3.83mm/12.7 degrees and 3.80mm/12.6 degrees for the transventricular and transcerebellar planes respectively and takes 0.46s per plane.

Conference paper

Hou B, Miolane N, Khanal B, Lee M, Alansary A, McDonagh SG, Hajnal JV, Rueckert D, Glocker B, Kainz B et al., 2018, Computing CNN loss and gradients for pose estimation with Riemannian geometry, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 756-764, ISSN: 0302-9743

Pose estimation, i.e. predicting a 3D rigid transformation with respect to a fixed co-ordinate frame in SE(3), is an omnipresent problem in medical image analysis. Deep learning methods often parameterise poses with a representation that separates rotation and translation. As commonly available frameworks do not provide means to calculate loss on a manifold, regression is usually performed using the L2-norm independently on the rotation's and the translation's parameterisations. This is a metric for linear spaces that does not take into account the Lie group structure of SE(3). In this paper, we propose a general Riemannian formulation of the pose estimation problem, and train CNNs directly on SE(3) equipped with a left-invariant Riemannian metric. The loss between the ground truth and predicted pose (elements of the manifold) is calculated as the Riemannian geodesic distance, which couples together the translation and rotation components. Network weights are updated by back-propagating the gradient with respect to the predicted pose on the tangent space of the manifold SE(3). We thoroughly evaluate the effectiveness of our loss function by comparing its performance with popular and most commonly used existing methods, on tasks such as image-based localisation and intensity-based 2D/3D registration. We also show that hyper-parameters, used in our loss function to weight the contribution between rotations and translations, can be intrinsically calculated from the dataset to achieve greater performance margins.
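
To make the contrast with an independent L2 loss concrete, the snippet below computes a simple weighted product metric on SO(3) x R^3 (rotation geodesic plus Euclidean translation). This is only a stand-in to illustrate measuring pose error geometrically; the paper's loss is a genuinely coupled left-invariant metric on SE(3) with gradients taken on its tangent space.

import numpy as np

def pose_distance(R1, t1, R2, t2, w_rot=1.0, w_trans=1.0):
    """R1, R2: 3x3 rotation matrices; t1, t2: translation vectors."""
    R_rel = R1.T @ R2
    cos_theta = np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)
    theta = np.arccos(cos_theta)           # geodesic distance on SO(3), in radians
    d_trans = np.linalg.norm(np.asarray(t1) - np.asarray(t2))
    return np.sqrt(w_rot * theta ** 2 + w_trans * d_trans ** 2)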

Conference paper

Alansary A, Le Folgoc L, Vaillant G, Oktay O, Li Y, Bai W, Passerat-Palmbach J, Guerrero R, Kamnitsas K, Hou B, McDonagh S, Glocker B, Kainz B, Rueckert D et al., 2018, Automatic view planning with multi-scale deep reinforcement learning agents, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 277-285, ISSN: 0302-9743

We propose a fully automatic method to find standardized view planes in 3D image acquisitions. Standard view images are important in clinical practice as they provide a means to perform biometric measurements from similar anatomical regions. These views are often constrained to the native orientation of a 3D image acquisition. Navigating through target anatomy to find the required view plane is tedious and operator-dependent. For this task, we employ a multi-scale reinforcement learning (RL) agent framework and extensively evaluate several Deep Q-Network (DQN) based strategies. RL enables a natural learning paradigm by interaction with the environment, which can be used to mimic experienced operators. We evaluate our results using the distance between the anatomical landmarks and detected planes, and the angles between their normal vector and target. The proposed algorithm is assessed on the mid-sagittal and anterior-posterior commissure planes of brain MRI, and the 4-chamber long-axis plane commonly used in cardiac MRI, achieving accuracy of 1.53mm, 1.98mm and 4.84mm, respectively.

Conference paper

Meng Q, Baumgartner C, Sinclair M, Housden J, Rajchl M, Gomez A, Hou B, Toussaint N, Zimmer V, Tan J, Matthew J, Rueckert D, Schnabel J, Kainz B et al., 2018, Automatic shadow detection in 2D ultrasound images, International Workshop on Preterm, Perinatal and Paediatric Image Analysis, Pages: 66-75, ISSN: 0302-9743

Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise, shadow-focused confidence map from weakly labelled, anatomically-focused images. Our method (1) initializes potential shadow areas based on a classification task, (2) extends potential shadow areas using a GAN model, and (3) adds intensity information to generate the final confidence map using a distance matrix. The proposed method accurately highlights the shadow areas in 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.

Conference paper

Bai W, Sinclair M, Tarroni G, Oktay O, Rajchl M, Vaillant G, Lee AM, Aung N, Lukaschuk E, Sanghvi MM, Zemrak F, Fung K, Paiva JM, Carapella V, Kim YJ, Suzuki H, Kainz B, Matthews PM, Petersen SE, Piechnik SK, Neubauer S, Glocker B, Rueckert D et al., 2018, Automated cardiovascular magnetic resonance image analysis with fully convolutional networks, Journal of Cardiovascular Magnetic Resonance, Vol: 20, Pages: 1-12, ISSN: 1097-6647

Background: Cardiovascular magnetic resonance (CMR) imaging is a standard imaging modality for assessing cardiovascular diseases (CVDs), the leading cause of death globally. CMR enables accurate quantification of the cardiac chamber volume, ejection fraction and myocardial mass, providing information for diagnosis and monitoring of CVDs. However, for years, clinicians have been relying on manual approaches for CMR image analysis, which is time consuming and prone to subjective errors. It is a major clinical challenge to automatically derive quantitative and clinically relevant information from CMR images. Methods: Deep neural networks have shown a great potential in image pattern recognition and segmentation for a variety of tasks. Here we demonstrate an automated analysis method for CMR images, which is based on a fully convolutional network (FCN). The network is trained and evaluated on a large-scale dataset from the UK Biobank, consisting of 4,875 subjects with 93,500 pixelwise annotated images. The performance of the method has been evaluated using a number of technical metrics, including the Dice metric, mean contour distance and Hausdorff distance, as well as clinically relevant measures, including left ventricle (LV) end-diastolic volume (LVEDV) and end-systolic volume (LVESV), LV mass (LVM); right ventricle (RV) end-diastolic volume (RVEDV) and end-systolic volume (RVESV). Results: By combining FCN with a large-scale annotated dataset, the proposed automated method achieves a high performance in segmenting the LV and RV on short-axis CMR images and the left atrium (LA) and right atrium (RA) on long-axis CMR images. On a short-axis image test set of 600 subjects, it achieves an average Dice metric of 0.94 for the LV cavity, 0.88 for the LV myocardium and 0.90 for the RV cavity. The mean absolute difference between automated measurement and manual measurement was 6.1 mL for LVEDV, 5.3 mL for LVESV, 6.9 gram for LVM, 8.5 mL for RVEDV and 7.2 mL for RVESV. On long-ax
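
For reference, the Dice metric used throughout this evaluation is the standard overlap measure between two label maps; a generic definition (not code from the paper) is:

import numpy as np

def dice(prediction, reference, label):
    """Dice similarity coefficient for one structure in two integer label maps."""
    p = (prediction == label)
    r = (reference == label)
    denom = p.sum() + r.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(p, r).sum() / denom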

Journal article

Robinson R, Oktay O, Bai W, Valindria V, Sanghvi MM, Aung N, Paiva JM, Zemrak F, Fung K, Lukaschuk E, Lee AM, Carapella V, Kim YJ, Kainz B, Piechnik SK, Neubauer S, Petersen SE, Page C, Rueckert D, Glocker B et al., 2018, Real-time prediction of segmentation quality, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer Verlag, Pages: 578-585, ISSN: 0302-9743

Recent advances in deep learning based image segmentation methods have enabled real-time performance with human-level accuracy. However, occasionally even the best method fails due to low image quality, artifacts or unexpected behaviour of black box algorithms. Being able to predict segmentation quality in the absence of ground truth is of paramount importance in clinical practice, but also in large-scale studies to avoid the inclusion of invalid data in subsequent analysis. In this work, we propose two approaches of real-time automated quality control for cardiovascular MR segmentations using deep learning. First, we train a neural network on 12,880 samples to predict Dice Similarity Coefficients (DSC) on a per-case basis. We report a mean average error (MAE) of 0.03 on 1,610 test samples and 97% binary classification accuracy for separating low and high quality segmentations. Secondly, in the scenario where no manually annotated data is available, we train a network to predict DSC scores from estimated quality obtained via a reverse testing strategy. We report an MAE = 0.14 and 91% binary classification accuracy for this case. Predictions are obtained in real-time which, when combined with real-time segmentation methods, enables instant feedback on whether an acquired scan is analysable while the patient is still in the scanner. This further enables new applications of optimising image acquisition towards best possible analysis results.
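
In the spirit of the first approach, a per-case quality regressor can be sketched as a small CNN that takes the image and its segmentation and outputs a Dice estimate trained with an L1 (MAE) objective; the architecture and input size below are assumptions, not the published network.

import torch
import torch.nn as nn

class QualityRegressor(nn.Module):
    """Predict a DSC in [0, 1] for an (image, segmentation) pair."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.regressor = nn.Sequential(nn.Linear(32, 1), nn.Sigmoid())

    def forward(self, image, segmentation):
        x = torch.cat([image, segmentation], dim=1)   # stack image and mask as channels
        return self.regressor(self.features(x))

model = QualityRegressor()
predicted_dsc = model(torch.randn(4, 1, 128, 128), torch.rand(4, 1, 128, 128))
loss = nn.L1Loss()(predicted_dsc, torch.rand(4, 1))   # MAE against reference DSC values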

Conference paper

Verbruggen S, Kainz B, Shelmerdine SC, Arthurs OJ, Hajnal JV, Rutherford M, Phillips AT, Nowlan N et al., 2018, Altered biomechanical stimulation of the developing hip joint in presence of hip dysplasia risk factors, Journal of Biomechanics, Vol: 78, Pages: 1-9, ISSN: 0021-9290

Fetal kicking and movements generate biomechanical stimulation in the fetal skeleton, which is important for prenatal musculoskeletal development, particularly joint shape. Developmental dysplasia of the hip (DDH) is the most common joint shape abnormality at birth, with many risk factors for the condition being associated with restricted fetal movement. In this study, we investigate the biomechanics of fetal movements in such situations, namely fetal breech position, oligohydramnios and primiparity (firstborn pregnancy). We also investigate twin pregnancies, which are not at greater risk of DDH incidence, despite the more restricted intra-uterine environment. We track fetal movements for each of these situations using cine-MRI technology, quantify the kick and muscle forces, and characterise the resulting stress and strain in the hip joint, testing the hypothesis that altered biomechanical stimuli may explain the link between certain intra-uterine conditions and risk of DDH. Kick force, stress and strain were found to be significantly lower in cases of breech position and oligohydramnios. Similarly, firstborn fetuses were found to generate significantly lower kick forces than non-firstborns. Interestingly, no significant difference was observed in twins compared to singletons. This research represents the first evidence of a link between the biomechanics of fetal movements and the risk of DDH, potentially informing the development of future preventative measures and enhanced diagnosis. Our results emphasise the importance of ultrasound screening for breech position and oligohydramnios, particularly later in pregnancy, and suggest that earlier intervention to correct breech position through external cephalic version could reduce the risk of hip dysplasia.

Journal article

Hou B, Khanal B, Alansary A, McDonagh S, Davidson A, Rutherford M, Hajnal J, Rueckert D, Glocker B, Kainz B et al., 2018, 3D reconstruction in canonical co-ordinate space from arbitrarily oriented 2D images, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 1737-1750, ISSN: 0278-0062

Limited capture range, and the requirement to provide high quality initialization for optimization-based 2D/3D image registration methods, can significantly degrade the performance of 3D image reconstruction and motion compensation pipelines. Challenging clinical imaging scenarios, which contain significant subject motion such as fetal in-utero imaging, complicate the 3D image and volume reconstruction process. In this paper we present a learning based image registration method capable of predicting 3D rigid transformations of arbitrarily oriented 2D image slices, with respect to a learned canonical atlas co-ordinate system. Only image slice intensity information is used to perform registration and canonical alignment, no spatial transform initialization is required. To find image transformations we utilize a Convolutional Neural Network (CNN) architecture to learn the regression function capable of mapping 2D image slices to a 3D canonical atlas space. We extensively evaluate the effectiveness of our approach quantitatively on simulated Magnetic Resonance Imaging (MRI), fetal brain imagery with synthetic motion and further demonstrate qualitative results on real fetal MRI data where our method is integrated into a full reconstruction and motion compensation pipeline. Our learning based registration achieves an average spatial prediction error of 7 mm on simulated data and produces qualitatively improved reconstructions for heavily moving fetuses with gestational ages of approximately 20 weeks. Our model provides a general and computationally efficient solution to the 2D/3D registration initialization problem and is suitable for real-time scenarios.

Journal article

Oktay O, Schlemper J, Folgoc LL, Lee MCH, Heinrich MP, Misawa K, Mori K, McDonagh SG, Hammerla NY, Kainz B, Glocker B, Rueckert D et al., 2018, Attention U-Net: Learning Where to Look for the Pancreas, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Schlemper J, Oktay O, Chen L, Matthew J, Knight CL, Kainz B, Glocker B, Rueckert D et al., 2018, Attention-Gated Networks for Improving Ultrasound Scan Plane Detection, International Conference on Medical Imaging with Deep Learning (MIDL)

Conference paper

Hou B, Kainz B, 2018, DeepPose

A general Riemannian formulation of the pose estimation problem to train CNNs directly on SE(3) equipped with a left-invariant Riemannian metric.

Software

Oktay O, Schlemper J, Kainz B, 2018, Attention Gates in a Convolutional Neural Network / Medical Image Classification and Segmentation

PyTorch implementation of attention gates used in U-Net and VGG-16 models. The framework can be utilised in both medical image classification and segmentation tasks.
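
A minimal additive attention gate consistent with this description looks roughly as follows (a sketch assuming the gating signal and skip features share spatial resolution; the released repository is the reference implementation and also handles resampling):

import torch
import torch.nn as nn

class AttentionGate(nn.Module):
    """Re-weight skip-connection features using a gating signal (illustrative)."""
    def __init__(self, gate_channels, skip_channels, inter_channels):
        super().__init__()
        self.w_g = nn.Conv2d(gate_channels, inter_channels, kernel_size=1)
        self.w_x = nn.Conv2d(skip_channels, inter_channels, kernel_size=1)
        self.psi = nn.Conv2d(inter_channels, 1, kernel_size=1)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, gate, skip):
        attention = torch.sigmoid(self.psi(self.relu(self.w_g(gate) + self.w_x(skip))))
        return skip * attention         # suppress irrelevant regions in the skip features

gated = AttentionGate(64, 32, 16)(torch.randn(1, 64, 32, 32), torch.randn(1, 32, 32, 32))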

Software

Kamnitsas K, Bai W, Ferrante E, McDonagh SG, Sinclair M, Pawlowski N, Rajchl M, Lee M, Kainz B, Rueckert D, Glocker B et al., 2018, Ensembles of Multiple Models and Architectures for Robust Brain Tumour Segmentation, MICCAI BrainLes Workshop

Conference paper

Oktay O, Ferrante E, Kamnitsas K, Heinrich M, Bai W, Caballero J, Cook S, de Marvao A, Dawes T, O'Regan D, Kainz B, Glocker B, Rueckert D et al., 2018, Anatomically Constrained Neural Networks (ACNN): application to cardiac image enhancement and segmentation, IEEE Transactions on Medical Imaging, Vol: 37, Pages: 384-395, ISSN: 0278-0062

Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning based techniques. However, in most recent and promising techniques such as CNN based segmentation it is not obvious how to incorporate such prior knowledge. State-of-the-art methods operate as pixel-wise classifiers where the training objectives do not incorporate the structure and inter-dependencies of the output. To overcome this limitation, we propose a generic training strategy that incorporates anatomical prior knowledge into CNNs through a new regularisation model, which is trained end-to-end. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models. The applicability of our approach is shown on multi-modal cardiac datasets and public benchmarks. Additionally, we demonstrate how the learnt deep models of 3D shapes can be interpreted and used as biomarkers for classification of cardiac pathologies.
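
The regularisation idea can be sketched as a shape-code penalty added to the usual segmentation loss; everything below (the frozen 'shape_encoder', the 'segmentation_net' and the weighting) is a placeholder illustration of that idea, not the published ACNN training code.

import torch
import torch.nn.functional as F

def shape_regularised_loss(segmentation_net, shape_encoder, image, target, weight=0.1):
    """Cross-entropy plus a penalty in the latent space of a pretrained shape encoder."""
    logits = segmentation_net(image)                    # per-pixel class scores
    ce = F.cross_entropy(logits, target)                # standard segmentation loss
    probs = F.softmax(logits, dim=1)
    with torch.no_grad():                               # encode the ground-truth shape
        target_code = shape_encoder(
            F.one_hot(target, probs.shape[1]).permute(0, 3, 1, 2).float())
    pred_code = shape_encoder(probs)                    # encoder is pretrained; its weights
                                                        # are excluded from the optimiser
    return ce + weight * F.mse_loss(pred_code, target_code)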

Journal article

Lloyd DFA, van Poppel M, Schultz A, Pushparajah K, Simpson J, van Amerom JFP, Kainz B, Kuklisova-Murgasova M, Vigneswaran T, Charakida M, Miller O, Zidere V, Sharland G, Rutherford M, Hajnal J, Razavi R et al., 2018, Motion corrected fetal cardiac MRI increases diagnostic confidence in clinically challenging cases, Annual Meeting of the British Congenital Cardiac Association, Publisher: BMJ Publishing Group, Pages: A11-A11, ISSN: 1355-6037

Conference paper

Lloyd DFA, Poppel MV, Schultz A, Pushparajah K, Simpson J, Amerom JFPV, Kainz B, Kuklisova-Murgasova M, Vigneswaran T, Charakida M, Miller O, Zidere V, Sharland G, Rutherford M, Hajnal J, Razavi R et al., 2018, 31 Motion corrected fetal cardiac MRI increases diagnostic confidence in clinically challenging cases, British Congenital Cardiac Association, Annual meeting abstracts 9–10 November 2017, Great Ormond Street Institute of Child Health, London, UK, Publisher: BMJ Publishing Group Ltd and British Cardiovascular Society

Conference paper

Verbruggen S, Kainz B, Shelmerdine S, Hajnal J, Rutherford M, Arthurs O, Phillips A, Nowlan NC et al., 2018, Stresses and strains on the human fetal skeleton during development, Journal of the Royal Society Interface, Vol: 15, Pages: 1-11, ISSN: 1742-5662

Mechanical forces generated by fetal kicks and movements result in stimulation of the fetal skeleton in the form of stress and strain. This stimulation is known to be critical for prenatal musculoskeletal development; indeed, abnormal or absent movements have been implicated in multiple congenital disorders. However, the mechanical stress and strain experienced by the developing human skeleton in utero have never before been characterized. Here, we quantify the biomechanics of fetal movements during the second half of gestation by modelling fetal movements captured using novel cine-magnetic resonance imaging technology. By tracking these movements, quantifying fetal kick and muscle forces, and applying them to three-dimensional geometries of the fetal skeleton, we test the hypothesis that stress and strain change over ontogeny. We find that fetal kick force increases significantly from 20 to 30 weeks' gestation, before decreasing towards term. However, stress and strain in the fetal skeleton rises significantly over the latter half of gestation. This increasing trend with gestational age is important because changes in fetal movement patterns in late pregnancy have been linked to poor fetal outcomes and musculoskeletal malformations. This research represents the first quantification of kick force and mechanical stress and strain due to fetal movements in the human skeleton in utero, thus advancing our understanding of the biomechanical environment of the uterus. Further, by revealing a potential link between fetal biomechanics and skeletal malformations, our work will stimulate future research in tissue engineering and mechanobiology.

Journal article

Mischkewitz S, Kainz B, 2018, AutoDVT Real-time Classification for Vein Compressibility Analysis in Deep Vein Thrombosis Ultrasound Diagnostics, Pages: 99-104, ISSN: 2191-1665

Conference paper

Kainz B, Bhatia K, Vercauteren T, Oktay O et al., 2018, RAMBO 2018 Preface, ISBN: 9783030009458

Book

Khanal B, Gomez A, Toussaint N, McDonagh S, Zimmer V, Skelton E, Matthew J, Grzech D, Wright R, Gupta C, Hou B, Rueckert D, Schnabel JA, Kainz B et al., 2018, EchoFusion: Tracking and Reconstruction of Objects in 4D Freehand Ultrasound Imaging Without External Trackers, Publisher: Springer International Publishing AG

Working paper

