Imperial College London

DR BERNHARD KAINZ

Faculty of Engineering, Department of Computing

Reader in Medical Image Computing
 
 
 

Contact

 

+44 (0)20 7594 8349 | b.kainz | Website | CV

 
 

Location

 

372 Huxley Building, South Kensington Campus


Publications


202 results found

Keraudren K, Kainz B, Oktay O, Kyriakopoulou V, Rutherford M, Hajnal J, Rueckert D et al., 2015, Automated localization of fetal organs in MRI using random forests with steerable features, Medical Image Computing and Computer-Assisted Intervention — MICCAI 2015, Publisher: Springer, Pages: 620-627, ISSN: 0302-9743

Fetal MRI is an invaluable diagnostic tool complementary to ultrasound thanks to its high contrast and resolution. Motion artifacts and the arbitrary orientation of the fetus are two main challenges of fetal MRI. In this paper, we propose a method based on Random Forests with steerable features to automatically localize the heart, lungs and liver in fetal MRI. During training, all MR images are mapped into a standard coordinate system that is defined by landmarks on the fetal anatomy and normalized for fetal age. Image features are then extracted in this coordinate system. During testing, features are computed for different orientations with a search space constrained by previously detected landmarks. The method was tested on healthy fetuses as well as fetuses with intrauterine growth restriction (IUGR) from 20 to 38 weeks of gestation. The detection rate was above 90% for all organs of healthy fetuses in the absence of motion artifacts. In the presence of motion, the detection rate was 83% for the heart, 78% for the lungs and 67% for the liver. Growth restriction did not decrease the performance of the heart detection but had an impact on the detection of the lungs and liver. The proposed method can be used to initialize subsequent processing steps such as segmentation or motion correction, as well as automatically orient the 3D volume based on the fetal anatomy to facilitate clinical examination.

Conference paper
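
As a rough illustration of the kind of pipeline described in the entry above, the sketch below trains a random-forest voxel classifier on simple patch statistics from a synthetic volume. The steerable features, the landmark-based, age-normalized coordinate system and the constrained orientation search are omitted; all names, sizes and parameters are illustrative and not the authors' implementation.

```python
# Minimal sketch: random-forest classification of voxels into organ / background
# using toy patch features on synthetic data (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def patch_features(volume, centre, radius=2):
    """Mean and standard deviation of a small box around a voxel
    (a stand-in for the paper's steerable features)."""
    i, j, k = centre
    patch = volume[i-radius:i+radius+1, j-radius:j+radius+1, k-radius:k+radius+1]
    return [patch.mean(), patch.std()]

# Synthetic volume with one bright cuboid standing in for an organ.
volume = rng.normal(size=(32, 32, 32))
volume[10:16, 10:16, 10:16] += 3.0

samples, labels = [], []
for _ in range(500):
    ijk = rng.integers(4, 28, size=3)
    samples.append(patch_features(volume, ijk))
    labels.append(int(all(10 <= c < 16 for c in ijk)))
for ijk in [(12, 12, 12), (11, 14, 13), (14, 11, 12)]:   # ensure both classes occur
    samples.append(patch_features(volume, ijk))
    labels.append(1)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(samples, labels)
print("organ probability at (12, 12, 12):",
      clf.predict_proba([patch_features(volume, (12, 12, 12))])[0, 1])
```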

Kainz B, Steinberger M, Wein W, Murgasova M, Malamateniou C, Keraudren K, Aljabar P, Rutherford M, Hajnal J, Rueckert D et al., 2015, Fast volume reconstruction from motion corrupted stacks of 2D slices, IEEE Transactions on Medical Imaging, Vol: 34, Pages: 1901-1913, ISSN: 0278-0062

Capturing an enclosing volume of moving subjects and organs using fast individual image slice acquisition has shown promise in dealing with motion artefacts. Motion between slice acquisitions results in spatial inconsistencies that can be resolved by slice-to-volume reconstruction (SVR) methods to provide high quality 3D image data. Existing algorithms are, however, typically very slow, specialised to specific applications and rely on approximations, which impedes their potential clinical use. In this paper, we present a fast multi-GPU accelerated framework for slice-to-volume reconstruction. It is based on optimised 2D/3D registration, super-resolution with automatic outlier rejection and an additional (optional) intensity bias correction. We introduce a novel and fully automatic procedure for selecting the image stack with least motion to serve as an initial registration target. We evaluate the proposed method using artificial motion corrupted phantom data as well as clinical data, including tracked freehand ultrasound of the liver and fetal Magnetic Resonance Imaging. We achieve speed-up factors greater than 30 compared to a single CPU system and greater than 10 compared to currently available state-of-the-art multi-core CPU methods. We ensure high reconstruction accuracy by exact computation of the point-spread function for every input data point, which has not previously been possible due to computational limitations. Our framework and its implementation are scalable for available computational infrastructures and tests show a speed-up factor of 1.70 for each additional GPU. This paves the way for the online application of image-based reconstruction methods during clinical examinations. The source code for the proposed approach is publicly available.

Journal article
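
The core numerical step of such a reconstruction, super-resolving a volume from PSF-weighted slice samples while down-weighting motion-corrupted slices, can be sketched in one dimension as below. This is a toy formulation of ours (Gaussian PSF, IRLS-style slice weights, Tikhonov-regularized least squares), not the paper's GPU implementation, and it omits the 2D/3D registration entirely.

```python
# Toy 1D slice-to-volume super-resolution with robust slice weights.
import numpy as np

rng = np.random.default_rng(1)

def psf_matrix(sample_pos, voxel_pos, sigma=0.8):
    """Row-normalised Gaussian PSF matrix mapping voxels to slice samples."""
    W = np.exp(-0.5 * ((sample_pos[:, None] - voxel_pos[None, :]) / sigma) ** 2)
    return W / W.sum(axis=1, keepdims=True)

# Toy "volume" to recover and noisy, partly corrupted "slices" sampling it.
voxel_pos = np.linspace(0.0, 10.0, 21)
true_vol = np.sin(voxel_pos)
slices = []
for k in range(4):
    pos = rng.uniform(0, 10, 15)
    val = np.sin(pos) + rng.normal(0, 0.05, pos.size)
    if k == 3:
        val += 2.0                                  # motion-corrupted slice
    slices.append((pos, val))

vol = np.zeros_like(true_vol)
for _ in range(3):                                  # IRLS rounds
    rows, rhs = [], []
    for pos, val in slices:
        W = psf_matrix(pos, voxel_pos)
        resid = np.median(np.abs(val - W @ vol))
        w = 1.0 / (1.0 + resid)                     # down-weight outlier slices
        rows.append(w * W)
        rhs.append(w * val)
    A = np.vstack(rows)
    b = np.concatenate(rhs)
    # Small Tikhonov term keeps the least-squares problem well posed.
    A_reg = np.vstack([A, 0.1 * np.eye(len(voxel_pos))])
    b_reg = np.concatenate([b, np.zeros(len(voxel_pos))])
    vol = np.linalg.lstsq(A_reg, b_reg, rcond=None)[0]

print("RMS error vs. ground truth:", np.sqrt(np.mean((vol - true_vol) ** 2)))
```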

Kainz B, Malamateniou C, Ferrazzi G, Murgasova M, Egger J, Keraudren K, Rutherford M, Hajnal JV, Rueckert D et al., 2015, Adaptive scan strategies for fetal MRI imaging using slice to volume techniques, 2015 IEEE 12th International Symposium on Biomedical Imaging (ISBI), Publisher: IEEE, Pages: 849-852, ISSN: 1945-7928

In this paper several novel methods to account for fetal movements during fetal Magnetic Resonance Imaging (fetal MRI) are explored. We show how slice-to-volume reconstruction methods can be used to account for motion adaptively during the scan. Three candidate methods are tested for their feasibility and integrated into a computer simulation of fetal MRI. The first alters the main orientation of the stacks used for reconstruction, the second stops if too much motion occurs during slice acquisition and the third steers the orientation of each slice individually. Reconstruction informed adaptive scanning can provide a peak signal-to-noise ratio (PSNR) improvement of up to 2 dB after only two stacks of scanned slices and is more efficient with respect to the uncertainty of the final reconstruction.

Conference paper
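
The evaluation metric quoted above, peak signal-to-noise ratio, follows the standard definition; a minimal helper with a synthetic reference image and an arbitrary noise level, purely for demonstration, could look like this:

```python
# PSNR in dB between a reference image and a reconstruction (toy example).
import numpy as np

def psnr(reference, reconstruction):
    mse = np.mean((np.asarray(reference, float) - np.asarray(reconstruction, float)) ** 2)
    peak = float(np.max(reference))
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(2)
reference = rng.random((64, 64))
noisy = reference + rng.normal(0.0, 0.05, reference.shape)
print(f"PSNR of the noisy image: {psnr(reference, noisy):.1f} dB")
```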

Mühl J, Köstenbauer S, Seise M, Kainz B, Stiegler P, Mayrhauser U, Portugaller H et al., 2015, Fusion von CT Volumen und histologischen Schnitten orientiert an natürlichen Merkmalspunkten [Fusion of CT volumes and histological sections aligned to natural feature points]

Other

Bowles C, Nowlan NC, Hayat TTA, Malamateniou C, Rutherford M, Hajnal JV, Rueckert D, Kainz B et al., 2015, Machine learning for the automatic localisation of foetal body parts in cine-MRI scans, Medical Imaging 2015: Image Processing, Publisher: Society of Photo-optical Instrumentation Engineers (SPIE), ISSN: 0277-786X

Conference paper

Egger J, Busse H, Brandmaier P, Seider D, Gawlitza M, Strocka S, Voglreiter P, Dokter M, Hofmann M, Kainz B, Chen X, Hann A, Boechat P, Yu W, Freisleben B, Alhonnoro T, Pollari M, Moche M, Schmalstieg D et al., 2015, RFA-Cut: Semi-automatic Segmentation of Radiofrequency Ablation Zones with and without Needles via Optimal s-t-Cuts, 37th Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Publisher: IEEE, Pages: 2423-2429, ISSN: 1557-170X

Conference paper

Keraudren K, Kuklisova-Murgasova M, Kyriakopoulou V, Malamateniou C, Rutherford MA, Kainz B, Hajnal JV, Rueckert D et al., 2014, Automated fetal brain segmentation from 2D MRI slices for motion correction, Neuroimage, Vol: 101, Pages: 633-643, ISSN: 1095-9572

Motion correction is a key element for imaging the fetal brain in-utero using Magnetic Resonance Imaging (MRI). Maternal breathing can introduce motion, but a larger effect is frequently due to fetal movement within the womb. Consequently, imaging is frequently performed slice-by-slice using single shot techniques, which are then combined into volumetric images using slice-to-volume reconstruction methods (SVR). For successful SVR, a key preprocessing step is to isolate fetal brain tissues from maternal anatomy before correcting for the motion of the fetal head. This has hitherto been a manual or semi-automatic procedure. We propose an automatic method to localize and segment the brain of the fetus when the image data is acquired as stacks of 2D slices with anatomy misaligned due to fetal motion. We combine this segmentation process with a robust motion correction method, enabling the segmentation to be refined as the reconstruction proceeds. The fetal brain localization process uses Maximally Stable Extremal Regions (MSER), which are classified using a Bag-of-Words model with Scale-Invariant Feature Transform (SIFT) features. The segmentation process is a patch-based propagation of the MSER regions selected during detection, combined with a Conditional Random Field (CRF). The gestational age (GA) is used to incorporate prior knowledge about the size and volume of the fetal brain into the detection and segmentation process. The method was tested in a ten-fold cross-validation experiment on 66 datasets of healthy fetuses whose GA ranged from 22 to 39 weeks. In 85% of the tested cases, our proposed method produced a motion corrected volume of a relevant quality for clinical diagnosis, thus removing the need for manually delineating the contours of the brain before motion correction. Our method automatically generated as a side-product a segmentation of the reconstructed fetal brain with a mean Dice score of 93%, which can be used for further processing.

Journal article
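
The localisation stage above builds on standard MSER detection and SIFT description, both available in recent versions of opencv-python; the sketch below runs them on a synthetic 2D slice. It omits the Bag-of-Words classifier, the CRF-based patch propagation and the gestational-age prior, so it only illustrates the off-the-shelf ingredients, not the authors' pipeline.

```python
# Illustrative only: MSER regions and SIFT descriptors on a synthetic 2D slice.
import cv2
import numpy as np

slice_2d = (np.random.default_rng(4).random((256, 256)) * 255).astype(np.uint8)
slice_2d = cv2.GaussianBlur(slice_2d, (15, 15), 0)       # stand-in for an MR slice

mser = cv2.MSER_create()
regions, bboxes = mser.detectRegions(slice_2d)

sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(slice_2d, None)

print(f"{len(regions)} MSER regions, "
      f"{0 if descriptors is None else len(descriptors)} SIFT descriptors")
```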

Khlebnikov R, Voglreiter P, Steinberger M, Kainz B, Schmalstieg D et al., 2014, Parallel Irradiance Caching for Interactive Monte-Carlo Direct Volume Rendering

Journal article

Steinberger M, Kenzel M, Kainz B, Müller J, Wonka P, Schmalstieg D et al., 2014, Parallel Generation of Architecture on the GPU, Computer Graphics Forum, Vol: 33, Pages: 73-82, ISSN: 1467-8659

In this paper, we present a novel approach for the parallel evaluation of procedural shape grammars on the graphics processing unit (GPU). Unlike previous approaches that are either limited in the kind of shapes they allow, the amount of parallelism they can take advantage of, or both, our method supports state of the art procedural modeling including stochasticity and context-sensitivity. To increase parallelism, we explicitly express independence in the grammar, reduce inter-rule dependencies required for context-sensitive evaluation, and introduce intra-rule parallelism. Our rule scheduling scheme avoids unnecessary back and forth between CPU and GPU and reduces round trips to slow global memory by dynamically grouping rules in on-chip shared memory. Our GPU shape grammar implementation is multiple orders of magnitude faster than the standard in CPU-based rule evaluation, while offering equal expressive power. In comparison to the state of the art in GPU shape grammar derivation, our approach is nearly 50 times faster, while adding support for geometric context-sensitivity.

Journal article
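
A CPU-side toy of the rule-batching idea behind the paper above: shapes are grouped by the rule that will process them and one batch is derived at a time. The grammar, the shapes and the queueing here are illustrative; the paper's contribution is doing this with intra-rule parallelism and shared-memory grouping on the GPU, which this sketch does not attempt.

```python
# Toy split grammar derived with per-rule work queues (batch processing on the CPU).
from collections import defaultdict

def split_facade(shape):            # a facade splits into floors
    x, y, w, h = shape
    return [("floor", (x, y + i * 3, w, 3)) for i in range(h // 3)]

def split_floor(shape):             # a floor splits into window tiles
    x, y, w, h = shape
    return [("window", (x + i * 2, y, 2, h)) for i in range(w // 2)]

rules = {"facade": split_facade, "floor": split_floor}

queues = defaultdict(list)
queues["facade"].append((0, 0, 10, 9))
terminals = []

while any(queues.values()):
    symbol = next(s for s, q in queues.items() if q)
    batch, queues[symbol] = queues[symbol], []        # take one rule's whole batch
    for shape in batch:
        if symbol in rules:
            for child_symbol, child_shape in rules[symbol](shape):
                queues[child_symbol].append(child_shape)
        else:
            terminals.append((symbol, shape))

print(f"derived {len(terminals)} terminal shapes")    # 3 floors x 5 windows = 15
```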

Steinberger M, Kenzel M, Kainz B, Wonka P, Schmalstieg D et al., 2014, On-the-fly Generation and Rendering of Infinite Cities on the GPU, Computer Graphics Forum, Vol: 33, Pages: 105-114, ISSN: 1467-8659

In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed.

Journal article

Kuster C, Bazin J-C, Oztireli C, Deng T, Martin T, Popa T, Gross M et al., 2014, Spatio-temporal geometry fusion for multiple hybrid cameras using moving least squares surfaces, Computer Graphics Forum, Vol: 33, Pages: 1-10, ISSN: 0167-7055

Journal article

Kainz B, Voglreiter P, Sereinigg M, Wiederstein-Grasser I, Mayrhauser U, Köstenbauer S, Pollari M, Khlebnikov R, Seise M, Alhonnoro T, Hame Y, Seider D, Flanagan R, Bost C, Mühl J, O'Neill D, Peng T, Payne S, Rueckert D, Schmalstieg D, Moche M, Kolesnik M, Stiegler P, Portugaller RH et al., 2014, High-resolution contrast enhanced multi-phase hepatic Computed Tomography data from a porcine Radio-Frequency Ablation study, 11th International Symposium on Biomedical Imaging (ISBI), Publisher: IEEE, Pages: 81-84

Data below 1 mm voxel size is becoming increasingly common in clinical practice, but it is still hard to obtain a consistent collection of such datasets for medical image processing research. With this paper we provide a large collection of Contrast Enhanced (CE) Computed Tomography (CT) data from porcine animal experiments and describe their acquisition procedure and peculiarities. We have acquired three CE-CT phases of 57 porcine livers at the highest available scanner resolution during induced respiratory arrest. These phases capture contrast enhanced hepatic arteries, portal venous veins and hepatic veins. Therefore, we provide scan data that allows for a highly accurate reconstruction of hepatic vessel trees. Several datasets have been acquired during Radio-Frequency Ablation (RFA) experiments. Hence, many datasets also show artificially induced hepatic lesions, which can be used for the evaluation of structure detection methods.

Conference paper

Kainz B, Keraudren K, Kyriakopoulou V, Rutherford M, Hajnal JV, Rueckert D et al., 2014, Fast fully automatic brain detection in fetal MRI using dense rotation invariant image descriptors, 11th International Symposium on Biomedical Imaging (ISBI), Publisher: IEEE, Pages: 1230-1233

Automatic detection of the fetal brain in Magnetic Resonance (MR) Images is especially difficult due to arbitrary orientation of the fetus and possible movements during the scan. In this paper, we propose a method to facilitate fully automatic brain voxel classification by means of rotation invariant volume descriptors. We calculate features for a set of 50 prenatal fast spin echo T2 volumes of the uterus and learn the appearance of the fetal brain in the feature space. We evaluate our novel classification method and show that we can localize the fetal brain with an accuracy of 100% and classify fetal brain voxels with an accuracy above 97%. Furthermore, we show how the classification process can be used for a direct segmentation of the brain by simple refinement methods within the raw MR scan data leading to a final segmentation with a Dice score above 0.90.

Conference paper
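
A common route to rotation-invariant volume descriptors is to expand the intensities sampled on a spherical shell in spherical harmonics and keep only the energy per band; via the addition theorem this needs only Legendre polynomials, as in the numpy-only sketch below. The sampling scheme, band limit and test pattern are arbitrary choices of ours, not the descriptors used in the paper.

```python
# Rotation-invariant per-band energies of a function sampled on a spherical shell.
import numpy as np
from numpy.polynomial import legendre

rng = np.random.default_rng(5)

def band_energies(values, dirs, l_max=4):
    """E_l = (2l+1)/(4*pi*N^2) * sum_ij v_i v_j P_l(cos gamma_ij),
    which equals the spherical-harmonic band energy by the addition theorem."""
    cosg = np.clip(dirs @ dirs.T, -1.0, 1.0)          # pairwise angle cosines
    outer = np.outer(values, values)
    feats = []
    for l in range(l_max + 1):
        Pl = legendre.Legendre.basis(l)(cosg)
        feats.append((2 * l + 1) / (4 * np.pi * len(values) ** 2) * np.sum(outer * Pl))
    return np.array(feats)

# Random unit directions on a shell around a voxel and a toy intensity pattern.
dirs = rng.normal(size=(300, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
values = 1.0 + 0.5 * dirs[:, 2]                       # axially symmetric pattern

print(band_energies(values, dirs))
```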

Kainz B, Malamateniou C, Murgasova M, Keraudren K, Rutherford M, Hajnal JV, Rueckert D et al., 2014, Motion corrected 3D reconstruction of the fetal thorax from prenatal MRI, Med Image Comput Comput Assist Interv, Vol: 17, Pages: 284-291

In this paper we present a semi-automatic method for analysis of the fetal thorax in genuine three-dimensional volumes. After one initial click we localize the spine and accurately determine the volume of the fetal lung from high resolution volumetric images reconstructed from motion corrupted prenatal Magnetic Resonance Imaging (MRI). We compare the current state-of-the-art method of segmenting the lung in a slice-by-slice manner with the most recent multi-scan reconstruction methods. We use fast rotation invariant spherical harmonics image descriptors with Classification Forest ensemble learning methods to extract the spinal cord and show an efficient way to generate a segmentation prior for the fetal lung from this information for two different MRI field strengths. The spinal cord can be segmented with a DICE coefficient of 0.89 and the automatic lung segmentation has been evaluated with a DICE coefficient of 0.87. We evaluate our method on 29 fetuses with a gestational age (GA) between 20 and 38 weeks and show that our computed segmentations and the manual ground truth correlate well with values reported in the literature.

Journal article

Khlebnikov R, Kainz B, Steinberger M, Schmalstieg D et al., 2013, Noise-based volume rendering for the visualization of multivariate volumetric data, IEEE Transactions on Visualization and Computer Graphics, Vol: 19, Pages: 2926-2935, ISSN: 1077-2626

Analysis of multivariate data is of great importance in many scientific disciplines. However, visualization of 3D spatially-fixed multivariate volumetric data is a very challenging task. In this paper we present a method that allows simultaneous real-time visualization of multivariate data. We redistribute the opacity within a voxel to improve the readability of the color defined by a regular transfer function, and to maintain the see-through capabilities of volume rendering. We use predictable procedural noise - random-phase Gabor noise - to generate a high-frequency redistribution pattern and construct an opacity mapping function, which allows us to partition the available space among the displayed data attributes. This mapping function is appropriately filtered to avoid aliasing, while maintaining transparent regions. We show the usefulness of our approach on various data sets and with different example applications. Furthermore, we evaluate our method by comparing it to other visualization techniques in a controlled user study. Overall, the results of our study indicate that users are much more accurate in determining exact data values with our novel 3D volume visualization method. Significantly lower error rates for reading data values and high subjective ranking of our method imply that it has a high chance of being adopted for the purpose of visualization of multivariate 3D data.

Journal article
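
Random-phase Gabor noise, which the method above uses to drive the opacity redistribution, can be generated directly by summing randomly placed, randomly phased Gabor kernels. The 2D sketch below, with arbitrary kernel parameters and a simple median threshold standing in for the opacity mapping function, is purely illustrative.

```python
# 2D random-phase Gabor noise and a toy opacity-partition mask.
import numpy as np

rng = np.random.default_rng(6)

def gabor_noise(size=256, n_kernels=400, freq=0.08, bandwidth=0.05):
    ys, xs = np.mgrid[0:size, 0:size]
    noise = np.zeros((size, size))
    for _ in range(n_kernels):
        cx, cy = rng.uniform(0, size, 2)
        angle = rng.uniform(0, 2 * np.pi)          # isotropic: random orientation
        phase = rng.uniform(0, 2 * np.pi)          # random phase
        dx, dy = xs - cx, ys - cy
        envelope = np.exp(-np.pi * bandwidth ** 2 * (dx ** 2 + dy ** 2))
        carrier = np.cos(2 * np.pi * freq * (dx * np.cos(angle) + dy * np.sin(angle)) + phase)
        noise += envelope * carrier
    return noise

n = gabor_noise()
# Threshold the noise into a binary pattern that could partition a voxel's
# footprint between two data attributes (the opacity redistribution idea).
mask = n > np.median(n)
print("fraction of pixels assigned to attribute A:", mask.mean())
```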

Voglreiter P, Steinberger M, Kainz B, Khlebnikov R, Schmalstieg D et al., 2013, Dynamic GPU Scheduling for Volume Rendering, IEEE Scientific Visualization 2013

Conference paper

Voglreiter P, Steinberger M, Khlebnikov R, Kainz B, Schmalstieg D et al., 2013, Volume Rendering with Advanced GPU Scheduling Strategies, IEEE VIS 2013, Publisher: IEEE

Modern GPUs are powerful enough to enable interactive display of high-quality volume data, even though many volume rendering methods are not a natural fit for current GPU hardware. However, a vast amount of computational power still remains unused due to inefficient use of the available hardware. In this work, we demonstrate how advanced scheduling methods can be employed to implement volume rendering algorithms in a way that better utilizes the GPU, using three different state-of-the-art volume rendering techniques as examples.

Conference paper

Kerbl B, Voglreiter P, Khlebnikov R, Schmalstieg D, Seider D, Moche M, Stiegler P, Portugaller RH, Kainz B et al., 2013, Intervention Planning of Hepatocellular Carcinoma Radio-Frequency Ablations, Clinical Image-Based Procedures. From Planning to Intervention, Publisher: Springer Berlin Heidelberg, Pages: 9-16

Book chapter

Kainz B, Hauswiesner S, Reitmayr G, Steinberger M, Grasset R, Gruber L, Veas E, Kalkofen D, Seichter H, Schmalstieg D et al., 2012, OmniKinect: real-time dense volumetric data acquisition and applications, 18th ACM symposium on Virtual reality software and technology, Publisher: ACM, Pages: 25-32

Real-time three-dimensional acquisition of real-world scenes has many important applications in computer graphics, computer vision and human-computer interaction. Inexpensive depth sensors such as the Microsoft Kinect facilitate the development of such applications. However, this technology is still relatively recent, and no detailed studies on its scalability to dense and view-independent acquisition have been reported. This paper addresses the question of what can be done with a larger number of Kinects used simultaneously. We describe an interference-reducing physical setup, a calibration procedure and an extension to the KinectFusion algorithm, which allows us to produce high quality volumetric reconstructions from multiple Kinects whilst overcoming systematic errors in the depth measurements. We also report on enhancing image-based visual hull rendering by depth measurements, and compare the results to KinectFusion. Our system provides practical insight into achievable spatial and radial range and into bandwidth requirements for depth data acquisition. Finally, we present a number of practical applications of our system.

Conference paper

Voglreiter P, Steinberger M, Schmalstieg D, Kainz B et al., 2012, Volumetric real-time particle-based representation of large unstructured tetrahedral polygon meshes, MICCAI 2012 International Workshop, MeshMed 2012, Publisher: Springer Berlin Heidelberg, Pages: 159-168, ISSN: 0302-9743

In this paper we propose a particle-based volume rendering approach for unstructured, three-dimensional, tetrahedral polygon meshes. We stochastically generate millions of particles per second and project them on the screen in real-time. In contrast to previous rendering techniques for tetrahedral volume meshes, our method does not require prior depth sorting of the geometry. Instead, the rendered image is generated by choosing particles closest to the camera. Furthermore, we use spatial superimposition: each pixel is constructed from multiple subpixels. This approach not only increases projection accuracy, but also allows a combination of subpixels into one superpixel that creates the well-known translucency effect of volume rendering. We show that our method is fast enough for the visualization of unstructured three-dimensional grids with hard real-time constraints and that it scales well for a high number of particles.

Conference paper
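
A toy CPU version of the central idea: generate particles uniformly inside tetrahedra via barycentric (Dirichlet) sampling and keep only the particle closest to the camera per pixel, so no depth sorting of the mesh is needed. The mesh, the orthographic projection and the resolution are arbitrary, and the subpixel superimposition is omitted.

```python
# Particle generation in tetrahedra plus a per-pixel closest-particle buffer.
import numpy as np

rng = np.random.default_rng(7)
W = H = 64

# Two toy tetrahedra: (4, 3) vertex arrays plus a scalar value each.
tets = [(np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float), 0.2),
        (np.array([[0.3, 0.3, 0.5], [1, 1, 1], [0.2, 1, 1], [1, 0.2, 1]], float), 0.9)]

def sample_in_tet(verts, n):
    """Uniform barycentric sampling of n points inside a tetrahedron."""
    u = rng.dirichlet(np.ones(4), size=n)            # barycentric coordinates
    return u @ verts

depth = np.full((H, W), np.inf)
value = np.zeros((H, W))
for verts, scalar in tets:
    pts = sample_in_tet(verts, 20000)
    # Orthographic projection: x,y -> pixel, z -> depth.
    px = np.clip((pts[:, 0] * (W - 1)).astype(int), 0, W - 1)
    py = np.clip((pts[:, 1] * (H - 1)).astype(int), 0, H - 1)
    for x, y, z in zip(px, py, pts[:, 2]):
        if z < depth[y, x]:                          # keep the closest particle
            depth[y, x] = z
            value[y, x] = scalar

print("covered pixels:", int(np.isfinite(depth).sum()), "of", W * H)
```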

Khlebnikov R, Kainz B, Steinberger M, Streit M, Schmalstieg D et al., 2012, Procedural texture synthesis for zoom-independent visualization of multivariate data, Computer Graphics Forum, Vol: 31, Pages: 1355-1364, ISSN: 1467-8659

Simultaneous visualization of multiple continuous data attributes in a single visualization is a task that is important for many application areas. Unsurprisingly, many methods have been proposed to solve this task. However, the behavior of such methods during the exploration stage, when the user tries to understand the data with panning and zooming, has not been given much attention. In this paper, we propose a method that uses procedural texture synthesis to create zoom-independent visualizations of three scalar data attributes. The method is based on random-phase Gabor noise, whose frequency is adapted for the visualization of the first data attribute. We ensure that the resulting texture frequency lies in the range that is perceived well by the human visual system at any zoom level. To enhance the perception of this attribute, we also apply a specially constructed transfer function that is based on statistical properties of the noise. Additionally, the transfer function is constructed in a way that it does not introduce any aliasing to the texture. We map the second attribute to the texture orientation. The third attribute is color coded and combined with the texture by modifying the value component of the HSV color model. The necessary contrast needed for texture and color perception was determined in a user study. In addition, we conducted a second user study that shows significant advantages of our method over current methods with similar goals. We believe that our method is an important step towards creating methods that not only succeed in visualizing multiple data attributes, but also adapt to the behavior of the user during the data exploration stage.

Journal article
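
The colour-combination step, colour coding one attribute and letting the texture modulate only the value channel of HSV, can be sketched as follows. matplotlib is used only for the HSV-to-RGB conversion, and a plain sine grating stands in for the Gabor-noise texture and the orientation-coded second attribute; this is an illustration, not the paper's transfer-function construction.

```python
# Combine a colour-coded attribute with a texture via the HSV value channel.
import numpy as np
from matplotlib.colors import hsv_to_rgb

ys, xs = np.mgrid[0:128, 0:128]
attribute = xs / 127.0                              # scalar field mapped to hue
texture = 0.5 + 0.5 * np.sin(2 * np.pi * ys / 8.0)  # stand-in for the noise texture

hsv = np.stack([attribute,                          # H: colour-coded attribute
                np.full_like(attribute, 0.8),       # S: fixed saturation
                0.6 + 0.4 * texture], axis=-1)      # V: modulated by the texture
rgb = hsv_to_rgb(hsv)
print("RGB image shape:", rgb.shape)
```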

Steinberger M, Kenzel M, Kainz B, Schmalstieg D et al., 2012, ScatterAlloc: Massively parallel dynamic memory allocation for the GPU, Innovative Parallel Computing (InPar), Publisher: IEEE, Pages: 1-10

In this paper, we analyze the special requirements of a dynamic memory allocator that is designed for massively parallel architectures such as Graphics Processing Units (GPUs). We show that traditional strategies, which work well on CPUs, are not well suited for the use on GPUs and present the thorough design of ScatterAlloc, which can efficiently deal with hundreds of requests in parallel. Our allocator greatly reduces collisions and congestion by scattering memory requests based on hashing. We analyze ScatterAlloc in terms of allocation speed, data access time and fragmentation, and compare it to current state-of-the-art allocators, including the one provided with the NVIDIA CUDA toolkit. Our results show that ScatterAlloc clearly outperforms these other approaches, yielding speed-ups between 10 and 100.

Conference paper
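
The effect of scattering allocation requests by hashing can be illustrated with a serial toy model that counts how many occupied pages each request has to probe. The page table, the hash constant and the serial emulation are ours; this is not the CUDA allocator described in the paper.

```python
# Toy model: hashed scattering of start pages drastically reduces probe counts.
import numpy as np

N_PAGES, N_REQUESTS = 1024, 512
rng = np.random.default_rng(8)

def allocate(start_indices, n_pages=N_PAGES):
    """Serially emulate parallel probing; probe count is a proxy for contention."""
    free = np.ones(n_pages, bool)
    probes = 0
    for start in start_indices:
        idx = int(start)
        while not free[idx]:                        # linear probing from start page
            idx = (idx + 1) % n_pages
            probes += 1
        free[idx] = False
    return probes

linear = allocate(np.zeros(N_REQUESTS, int))                        # all start at page 0
hashed = allocate((np.arange(N_REQUESTS) * 2654435761) % N_PAGES)   # hash-scattered starts

print(f"probes without scattering: {linear}, with hashed scattering: {hashed}")
```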

Steinberger M, Kainz B, Hauswiesner S, Khlebnikov R, Kalkofen D, Schmalstieg D et al., 2012, Ray prioritization using stylization and visual saliency, Computers & Graphics: an international journal of systems & applications in computer graphics, Vol: 36, Pages: 673-684, ISSN: 0097-8493

This paper presents a new method to control scene sampling in complex ray-based rendering environments. It proposes to constrain image sampling density with a combination of object features, which are known to be well perceived by the human visual system, and image space saliency, which captures effects that are not based on the object's geometry. The presented method uses Non-Photorealistic Rendering techniques for the object space feature evaluation and combines the image space saliency calculations with image warping to infer quality hints from previously generated frames. In order to map different feature types to sampling densities, we also present an evaluation of the object space and image space features' impact on the resulting image quality. In addition, we present an efficient, adaptively aligned fractal pattern that is used to reconstruct the image from sparse sampling data. Furthermore, this paper presents an algorithm which uses our method in order to guarantee a desired minimal frame rate. Our scheduling algorithm maximizes the utilization of each given time slice by rendering features in the order of visual importance values until a time constraint is reached. We demonstrate how our method can be used to boost or stabilize the rendering time in complex ray-based image generation consisting of geometric as well as volumetric data.

Journal article
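
The frame-rate guarantee described above amounts to processing work items in order of visual importance until the time slice is exhausted. The sketch below does this for image tiles, with local contrast as a stand-in for the saliency term; the tile size, the budget and the fake per-tile cost are arbitrary and only illustrate the scheduling pattern.

```python
# Importance-ordered refinement under a per-frame time budget.
import time
import numpy as np

rng = np.random.default_rng(9)
image = rng.random((256, 256))
TILE, BUDGET_S = 32, 0.005

def saliency(tile):
    return float(tile.std())                       # toy stand-in for visual saliency

def refine(tile):
    time.sleep(0.0005)                             # pretend this is expensive ray casting
    return tile

tiles = [(y, x) for y in range(0, 256, TILE) for x in range(0, 256, TILE)]
tiles.sort(key=lambda t: saliency(image[t[0]:t[0]+TILE, t[1]:t[1]+TILE]), reverse=True)

start, refined = time.perf_counter(), 0
for y, x in tiles:
    if time.perf_counter() - start > BUDGET_S:
        break                                      # frame-rate guarantee: stop here
    image[y:y+TILE, x:x+TILE] = refine(image[y:y+TILE, x:x+TILE])
    refined += 1

print(f"refined {refined} of {len(tiles)} tiles within the {BUDGET_S*1000:.0f} ms budget")
```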

Steinberger M, Kainz B, Kerbl B, Hauswiesner S, Kenzel M, Schmalstieg D et al., 2012, Softshell: dynamic scheduling on GPUs, ACM Transactions on Graphics (TOG), Vol: 31, Pages: 161-161

Journal article

Mühl J, Köstenbauer S, Seise M, Kainz B, Stiegler P, Mayrhauser U, Portugaller H et al., 2011, Fusion von CT Volumen und histologischen Schnitten orientiert an natürlichen Merkmalspunkten [Fusion of CT volumes and histological sections aligned to natural feature points], DE Patent 102,010,042,073

Patent

Koestenbauer S, Stiegler P, Stadlbauer V, Mayrhauser U, Leber B, Blattl D, Kainz B, Reich O, Portugaller RH, Wiederstein-Grasser I et al., 2011, Visualization of large-scale sections, Journal of Surgical Radiology, Vol: April, Pages: 170-173, ISSN: 2156-4566

In this article we present a reliable protocol for the preparation of large-scale sections and an easy strategy for high quality documentation. Our investigation was driven by our personal goal to fuse histology data and computerized tomography scans after radiofrequency ablation treatment. We achieved the first step in this direction by optimizing a protocol for histology sections and documentation suitable for fusion into MicroCT. This technique could also be used for other organ systems. After radiofrequency ablation in pigs, the liver was fixed in situ by perfusion with formalin to keep the organ in shape prior to excision. Liver was trimmed to the area of interest (50x50x30 mm), fixed and embedded in paraffin. Steps of the fixation, dehydration and paraffin embedding protocols were carefully optimized. Then whole paraffin blocks were scanned using a MicroCT. Next, large-scale serial sections were performed and stained. Sections were scanned in high quality using a commercially available scanner. Further details are available on our project homepage (www.imppact.eu, "Image analysis").

Journal article

Rostislav Khlebnikov JM, Schmalstieg D, 2011, GPU based on-the-fly light emission-absorption approximation for direct multi-volume rendering, Pages: 11-12

Conference paper

Kainz B, 2011, Ray-Based Image Generation For Advanced Medical Applications

Thesis dissertation

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
