Imperial College London

Dr Stamatia Giannarou

Faculty of Medicine, Department of Surgery & Cancer

Senior Lecturer

Contact

 

+44 (0)20 7594 3492 · stamatia.giannarou · Website


Location

 

413, Bessemer Building, South Kensington Campus



Publications


80 results found

Xu H, Runciman M, Cartucho J, Xu C, Giannarou S, et al., 2023, Graph-based pose estimation of texture-less surgical tools for autonomous robot control, 2023 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 2731-2737

In Robot-assisted Minimally Invasive Surgery (RMIS), the estimation of the pose of surgical tools is crucial for applications such as surgical navigation, visual servoing, autonomous robotic task execution and augmented reality. A plethora of hardware-based and vision-based methods have been proposed in the literature. However, direct application of these methods to RMIS has significant limitations due to partial tool visibility, occlusions and changes in the surgical scene. In this work, a novel keypoint-graph-based network is proposed to estimate the pose of texture-less cylindrical surgical tools of small diameter. To deal with the challenges in RMIS, a keypoint object representation is used and, for the first time, temporal information is combined with spatial information in a keypoint graph representation for keypoint refinement. Finally, a stable and accurate tool pose is computed using a PnP solver. Our performance evaluation study has shown that the proposed method is able to accurately predict the pose of a texture-less robotic shaft with an ADD-S score of over 98%. The method outperforms state-of-the-art pose estimation models under challenging conditions such as object occlusion and changes in the lighting of the scene.

Conference paper
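
As an illustration of the final step mentioned in the abstract above, recovering the tool pose from predicted 2D keypoints with a PnP solver, the following sketch uses OpenCV's generic solver. It is not the authors' implementation; the 3D keypoint model, camera intrinsics and the synthetic detections are placeholder values.

```python
import numpy as np
import cv2

# Hypothetical 3D keypoints defined along the tool shaft, in the tool frame (metres).
object_points = np.array([[0.000,  0.000, 0.00],
                          [0.003,  0.000, 0.01],
                          [-0.003, 0.000, 0.02],
                          [0.000,  0.003, 0.03],
                          [0.000, -0.003, 0.04],
                          [0.003,  0.003, 0.05]])

# Placeholder pinhole intrinsics of the laparoscopic camera (no distortion).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0,   0.0,   1.0]])
dist = np.zeros(5)

# Synthetic 2D detections: project the model with a known pose, standing in
# for the keypoints a detection network would predict on the current frame.
rvec_gt, tvec_gt = np.array([0.1, -0.2, 0.05]), np.array([0.01, -0.02, 0.12])
image_points, _ = cv2.projectPoints(object_points, rvec_gt, tvec_gt, K, dist)

# Perspective-n-Point: recover the tool pose in the camera frame.
ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)  # 3x3 rotation of the tool in the camera frame
print("recovered t:", tvec.ravel(), "ground truth t:", tvec_gt)
```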

Weld A, Agrawal A, Giannarou S, 2023, Ultrasound segmentation using a 2D UNet with Bayesian volumetric support, MICCAI 2022 25th International Conference on Medical Image Computing and Computer Assisted Intervention, Publisher: Springer Nature Switzerland, Pages: 63-68, ISSN: 0302-9743

We present a novel 2D segmentation neural network design for the segmentation of tumour tissue in intraoperative ultrasound (iUS). Due to issues with brain shift and tissue deformation, pre-operative imaging for tumour resection has limited reliability within the operating room (OR). iUS serves as a tool for improving tumour localisation and boundary delineation. Our proposed method takes inspiration from Bayesian networks. Rather than using a conventional 3D UNet, we develop a technique which samples from the volume around the query slice and performs multiple segmentations, which provide volumetric support to improve the accuracy of the segmentation of the query slice. Our results show that our proposed architecture achieves a 0.04 increase in the validation Dice score compared to the benchmark network.

Conference paper
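
A minimal sketch of the volumetric-support idea described above: run a 2D segmentation model on slices sampled around the query slice and fuse the per-slice predictions. The uniform neighbourhood sampling, the averaging fusion and the Dice metric below are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def segment_with_volumetric_support(volume, query_idx, seg_model_2d, n_support=4):
    """Fuse 2D segmentations of slices around a query slice.

    volume:       (D, H, W) ultrasound volume
    query_idx:    index of the slice to segment
    seg_model_2d: callable mapping an (H, W) slice to an (H, W)
                  foreground-probability map
    n_support:    number of neighbouring slices sampled on each side
    """
    depth = volume.shape[0]
    idxs = range(max(0, query_idx - n_support),
                 min(depth, query_idx + n_support + 1))
    # Average the probability maps of the query slice and its neighbours.
    probs = np.mean([seg_model_2d(volume[i]) for i in idxs], axis=0)
    return probs > 0.5

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between binary masks, as used for validation."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```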

Weld A, Cartucho J, Xu C, Davids J, Giannarou S, et al., 2022, Regularising disparity estimation via multi task learning with structured light reconstruction, Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, ISSN: 2168-1163

Journal article

Huang B, Zheng J-Q, Nguyen A, Xu C, Gkouzionis I, Vyas K, Tuch D, Giannarou S, Elson DS, et al., 2022, Self-supervised depth estimation in laparoscopic image using 3D geometric consistency, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 13-22, ISSN: 0302-9743

Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, it is rarely possible to apply supervised depth estimation to surgical applications. As an alternative, self-supervised methods have been introduced to train depth estimators using only synchronized stereo image pairs. However, most recent work has focused on the left-right consistency in 2D and ignored valuable inherent 3D information on the object in real-world coordinates, meaning that the left-right 3D geometric structural consistency is not fully utilized. To overcome this limitation, we present M3Depth, a self-supervised depth estimator which leverages the 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes the influence of border regions unseen in at least one of the stereo images via masking, to enhance the correspondences between left and right images in overlapping areas. Extensive experiments show that our method outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset by a large margin, indicating good generalization across different samples and laparoscopes.

Conference paper
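
A simplified way to picture the left-right 3D geometric consistency and the border masking described above: back-project each predicted depth map to a point cloud, bring the right cloud into the left camera frame using the stereo baseline, mask left pixels whose disparity pushes them outside the right view, and penalise a one-sided Chamfer distance between the clouds. This numpy/scipy sketch is an approximation for illustration, not the M3Depth loss.

```python
import numpy as np
from scipy.spatial import cKDTree

def backproject(depth, K):
    """Lift a (H, W) depth map to an (H, W, 3) point cloud in camera coords."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    x = (u - K[0, 2]) * depth / K[0, 0]
    y = (v - K[1, 2]) * depth / K[1, 1]
    return np.stack([x, y, depth], axis=-1)

def geometric_consistency_loss(depth_l, depth_r, K, baseline):
    """One-sided Chamfer distance between left/right clouds in the left frame.

    Assumes a rectified pair with the right camera offset by `baseline` along x.
    """
    pts_l = backproject(depth_l, K).reshape(-1, 3)
    # Express the right-view cloud in the left camera frame.
    pts_r = (backproject(depth_r, K) + np.array([baseline, 0.0, 0.0])).reshape(-1, 3)

    # Mask left-image border pixels with no counterpart in the right image:
    # a left pixel at disparity d = f*b/Z has no right match if u - d < 0.
    u = np.arange(depth_l.shape[1])[None, :]
    disparity = K[0, 0] * baseline / np.maximum(depth_l, 1e-6)
    valid = ((u - disparity) >= 0).reshape(-1)

    tree = cKDTree(pts_r)
    dist, _ = tree.query(pts_l[valid])
    return dist.mean()
```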

DeLorey C, Davids JD, Cartucho J, Xu C, Roddan A, Nimer A, Ashrafian H, Darzi A, Thompson AJ, Akhond S, Runciman M, Mylonas G, Giannarou S, Avery J, et al., 2022, A cable-driven soft robotic end-effector actuator for probe-based confocal laser endomicroscopy: Development and preclinical validation, Translational Biophotonics, ISSN: 2627-1850

Journal article

Huang B, Zheng J-Q, Giannarou S, Elson DS, et al., 2022, H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4459-4466

Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty in acquiring accurate and scalable ground truth data, the training of fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular to mitigate this challenge. In this paper, we introduce the H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation, which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, a mutual epipolar attention mechanism has been designed which gives more emphasis to correspondences of features that lie on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information into the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules are able to improve the performance of unsupervised stereo depth estimation methods while closing the gap with fully supervised approaches.

Conference paper
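
For a rectified stereo pair, points on the same epipolar line share the same image row, so an attention that emphasises same-line correspondences can be sketched by computing cross-attention row by row between left and right feature maps. This is a toy PyTorch illustration of that constraint only; the optimal-transport filtering and the full H-Net architecture are not reproduced here.

```python
import torch

def epipolar_cross_attention(feat_l, feat_r):
    """Cross-attention restricted to matching rows of a rectified stereo pair.

    feat_l, feat_r: (B, C, H, W) feature maps from a Siamese encoder.
    Returns right-view features aggregated into the left view, with attention
    computed only along each epipolar line (image row).
    """
    B, C, H, W = feat_l.shape
    # Treat every row as an independent sequence of W tokens of dimension C.
    q = feat_l.permute(0, 2, 3, 1).reshape(B * H, W, C)  # queries from left
    k = feat_r.permute(0, 2, 3, 1).reshape(B * H, W, C)  # keys from right
    v = k                                                # values from right

    attn = torch.softmax(q @ k.transpose(1, 2) / C ** 0.5, dim=-1)  # (B*H, W, W)
    out = attn @ v                                                  # (B*H, W, C)
    return out.reshape(B, H, W, C).permute(0, 3, 1, 2)

# Example: fuse features of a rectified pair.
left, right = torch.randn(2, 64, 32, 64), torch.randn(2, 64, 32, 64)
fused = epipolar_cross_attention(left, right)  # (2, 64, 32, 64)
```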

Huang B, Nguyen A, Wang S, Wang Z, Mayer E, Tuch D, Vyas K, Giannarou S, Elson DS, et al., 2022, Simultaneous depth estimation and surgical tool segmentation in laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 335-338, ISSN: 2576-3202

Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.

Journal article
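
The multi-task objective described above, an unsupervised term for the depth branch plus a supervised term for the tool-segmentation branch, could be combined along the following lines. The plain L1 photometric term, the cross-entropy term and the weights are simplifications assumed for illustration; the paper's exact loss is not reproduced.

```python
import torch
import torch.nn.functional as F

def multitask_loss(pred_depth_recon, target_image,
                   pred_seg_logits, seg_labels,
                   w_depth=1.0, w_seg=0.5):
    """Combine an unsupervised reconstruction loss for the depth branch with a
    supervised cross-entropy loss for the segmentation branch.

    pred_depth_recon: (B, 3, H, W) image re-synthesised from the predicted
                      depth (e.g. by warping the other view), used as the
                      self-supervision signal
    target_image:     (B, 3, H, W) the view being reconstructed
    pred_seg_logits:  (B, n_classes, H, W) segmentation branch output
    seg_labels:       (B, H, W) integer tool/background labels
    """
    photometric = F.l1_loss(pred_depth_recon, target_image)
    segmentation = F.cross_entropy(pred_seg_logits, seg_labels)
    return w_depth * photometric + w_seg * segmentation
```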

Xu C, Roddan A, Davids J, Weld A, Xu H, Giannarou S, et al., 2022, Deep Regression with Spatial-Frequency Feature Coupling and Image Synthesis for Robot-Assisted Endomicroscopy, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 157-166, ISSN: 0302-9743

Conference paper

Huang B, Tuch D, Vyas K, Giannarou S, Elson D, et al., 2022, Self-supervised monocular laparoscopic images depth estimation leveraging interactive closest point in 3D to enable image-guided radioguided surgery, European Molecular Imaging Meeting

Conference paper

Tukra S, Giannarou S, 2022, Stereo Depth Estimation via Self-supervised Contrastive Representation Learning, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 604-614, ISSN: 0302-9743

Conference paper

Wang C, Cartucho J, Elson D, Darzi A, Giannarou S, et al., 2022, Towards Autonomous Control of Surgical Instruments using Adaptive-Fusion Tracking and Robot Self-Calibration, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2395-2401, ISSN: 2153-0858

Conference paper

Maier-Hein L, Eisenmann M, Sarikaya D, Maerz K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Noetzel D, Kenngott HG, Kikinis R, Muendermann L, Navab N, Onogur S, Ross T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ueckert F, Mueller-Stich BP, Jannin P, Speidel S, et al., 2021, Surgical data science - from concepts toward clinical translation, Medical Image Analysis, Vol: 76, ISSN: 1361-8415

Journal article

Tukra S, Giannarou S, 2021, Randomly connected neural networks for self-supervised monocular depth estimation, Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, Vol: 10, Pages: 390-399, ISSN: 2168-1163

Journal article

Cartucho J, Wang C, Huang B, Elson DS, Darzi A, Giannarou S, et al., 2021, An enhanced marker pattern that achieves improved accuracy in surgical tool tracking, Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, Vol: 10, Pages: 1-9, ISSN: 2168-1163

In computer assisted interventions (CAI), surgical tool tracking is crucial for applications such as surgical navigation, surgical skill assessment, visual servoing, and augmented reality. Tracking of cylindrical surgical tools can be achieved by printing and attaching a marker to their shaft. However, the tracking error of existing cylindrical markers is still in the millimetre range, which is too large for applications such as neurosurgery requiring sub-millimetre accuracy. To achieve tool tracking with sub-millimetre accuracy, we designed an enhanced marker pattern, which is captured on images from a monocular laparoscopic camera. The images are used as input for a tracking method which is described in this paper. Our tracking method was compared to the state-of-the-art, on simulation and ex vivo experiments. This comparison shows that our method outperforms the current state-of-the-art. Our marker achieves a mean absolute error of 0.28 [mm] and 0.45 [°] on ex vivo data, and 0.47 [mm] and 1.46 [°] on simulation. Our tracking method is real-time and runs at 55 frames per second for 720×576 image resolution.

Journal article

Huang B, Zheng J-Q, Nguyen A, Tuch D, Vyas K, Giannarou S, Elson DS, et al., 2021, Self-supervised generative adversarial network for depth estimation in laparoscopic images, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer, Pages: 227-237

Dense depth estimation and 3D reconstruction of a surgical scene are crucial steps in computer assisted surgery. Recent work has shown that depth estimation from a stereo image pair could be solved with convolutional neural networks. However, most recent depth estimation models were trained on datasets with per-pixel ground truth. Such data is especially rare for laparoscopic imaging, making it hard to apply supervised depth estimation to real surgical applications. To overcome this limitation, we propose SADepth, a new self-supervised depth estimation method based on Generative Adversarial Networks. It consists of an encoder-decoder generator and a discriminator to incorporate geometry constraints during training. Multi-scale outputs from the generator help to solve the local minima caused by the photometric reprojection loss, while the adversarial learning improves the framework generation quality. Extensive experiments on two public datasets show that SADepth outperforms recent state-of-the-art unsupervised methods by a large margin, and reduces the gap between supervised and unsupervised depth estimation in laparoscopic images.

Conference paper

Berthet-Rayne P, Sadati S, Petrou G, Patel N, Giannarou S, Leff DR, Bergeles C, et al., 2021, MAMMOBOT: A Miniature Steerable Soft Growing Robot for Early Breast Cancer Detection, IEEE Robotics and Automation Letters, Pages: 1-1

Journal article

Davids J, Makariou S-G, Ashrafian H, Darzi A, Marcus HJ, Giannarou S, et al., 2021, Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation, World Neurosurgery, Vol: 149, Pages: E669-E686, ISSN: 1878-8750

Journal article

Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D, et al., 2021, Ethical implications of AI in robotic surgical training: A Delphi consensus statement, European Urology Focus, ISSN: 2405-4569

Context: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. Objectives: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address current recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee. Evidence acquisition: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. 30 experts in AI implementation and/or training, including clinicians, academics and industry, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. Evidence synthesis: There was a 100% response rate across all 3 rounds. The resulting formulated guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: (1) data protection and privacy; (2) reproducibility and transparency; (3) predictive analytics; (4) inherent biases; (5) areas of training most likely to benefit from AI. Conclusions: Using the Delphi methodology, we achieved international consensus among experts to develop and reach

Journal article
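
The internal-consistency figure reported above (Cronbach alpha > 0.8) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of the totals). A small numpy sketch with placeholder Likert-style responses (not the study's data):

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a (n_respondents, n_items) matrix of scores."""
    responses = np.asarray(responses, dtype=float)
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)       # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)   # variance of respondent totals
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Placeholder responses from a hypothetical Delphi round.
scores = np.array([[4, 5, 4, 5],
                   [5, 5, 4, 4],
                   [3, 4, 4, 5],
                   [4, 4, 5, 5]])
print(cronbach_alpha(scores))
```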

Tukra S, Marcus HJ, Giannarou S, 2021, See-Through Vision with Unsupervised Scene Occlusion Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: PP

Among the greatest challenges of Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obfuscate anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised end-to-end deep learning framework, based on fully convolutional neural networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal and adversarial loss terms, for generating high fidelity image reconstructions. Advancing the state-of-the-art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in-vivo MIS video data, as well as natural scenes, on a range of occlusion-to-image ratios (OIR).

Journal article

Bautista-Salinas D, Kundrat D, Kogkas A, Abdelaziz MEMK, Giannarou S, Baena FRY, et al., 2021, Integrated Augmented Reality Feedback for Cochlear Implant Surgery Instruments, IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 261-264

Journal article

Vieira Cartucho J, Wang C, Huang B, Elson D, Darzi A, Giannarou S, et al., 2021, An Enhanced Marker Pattern that Achieves Improved Accuracy in Surgical Tool Tracking, Joint MICCAI 2021 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0), Publisher: Taylor and Francis, ISSN: 2168-1163

Conference paper

Cartucho J, Tukra S, Li Y, Elson DS, Giannarou S, et al., 2021, VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 9, Pages: 331-338, ISSN: 2168-1163

Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real-time. The main problem, however, is that the training or testing of such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, and here we present one of those applications where the generated data has been used to train and evaluate state-of-the-art 3D reconstruction algorithms. Being able to generate realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward in robotic surgery.

Journal article

Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ, et al., 2020, Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance, Neurosurgical Review, Vol: 44, Pages: 1853-1867, ISSN: 0344-5607

At a time of significant global unrest and uncertainty surrounding how the delivery of clinical training will unfold over the coming years, we offer a systematic review, meta-analysis, and bibliometric analysis of global studies showing the crucial role simulation will play in training. Our aim was to determine the types of simulators in use, their effectiveness in improving clinical skills, and whether we have reached a point of global acceptance. A PRISMA-guided global systematic review of the neurosurgical simulators available, a meta-analysis of their effectiveness, and an extended analysis of their progressive scholarly acceptance on studies meeting our inclusion criteria of simulation in neurosurgical education were performed. Improvement in procedural knowledge and technical skills was evaluated. Of the identified 7405 studies, 56 studies met the inclusion criteria, collectively reporting 50 simulator types ranging from cadaveric, low-fidelity, and part-task to virtual reality (VR) simulators. In all, 32 studies were included in the meta-analysis, including 7 randomised controlled trials. A random effects, ratio of means effects measure quantified statistically significant improvement in procedural knowledge by 50.2% (ES 0.502; CI 0.355; 0.649, p < 0.001), technical skill including accuracy by 32.5% (ES 0.325; CI - 0.482; - 0.167, p < 0.001), and speed by 25% (ES - 0.25, CI - 0.399; - 0.107, p < 0.001). The initial number of VR studies (n = 91) was approximately double the number of refining studies (n = 45) indicating it is yet to reach progressive scholarly acceptance. There is strong evidence for a beneficial impact of adopting simulation in the improvement of procedural knowledge and technical skill. We show a growing trend towards the adoption of neurosurgical simulators, although we have not fully gained progressive scholarly acceptance for VR-based simulation technologies in neurosurgical education.

Journal article

Zhan J, Cartucho J, Giannarou S, 2020, Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation, 2020 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 11147-11154

In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of motion is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or translating with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue motion. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. The desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form motion. We deployed this framework on the da Vinci ® surgical robot using the da Vinci Research Kit (dVRK) for Ultrasound tissue scanning. Our framework can be easily extended to other probe-based imaging modalities.

Conference paper
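
The trajectory update described above, re-mapping a scan path defined on a reference frame onto the current frame as the tissue moves, can be sketched with matched feature points and a homography. The planar (homography) approximation of the tissue surface and the helper below are illustrative assumptions, not the paper's exact projective-geometry formulation.

```python
import numpy as np
import cv2

def update_scan_trajectory(pts_ref, pts_cur, trajectory_ref):
    """Warp a scan trajectory from the reference frame into the current frame.

    pts_ref, pts_cur: (N, 2) matched tissue feature points in the reference
                      and current frames (e.g. from descriptor matching)
    trajectory_ref:   (M, 2) scanning waypoints drawn on the reference frame
    """
    # Robustly estimate the reference-to-current mapping from the tracked features.
    H, inliers = cv2.findHomography(pts_ref.astype(np.float32),
                                    pts_cur.astype(np.float32),
                                    cv2.RANSAC, 3.0)
    # Re-project every waypoint into the current frame.
    traj = trajectory_ref.reshape(-1, 1, 2).astype(np.float32)
    return cv2.perspectiveTransform(traj, H).reshape(-1, 2)
```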

Huang B, Tsai Y-Y, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS, et al., 2020, Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 1389-1397, ISSN: 1861-6410

Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°. Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hyb

Journal article
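
The probe-axis/tissue intersection mentioned above, locating where the tracked probe's sensing axis meets the reconstructed tissue point cloud, can be approximated by the cloud point closest to the probe-axis ray. A small numpy sketch under that nearest-point assumption (not the authors' implementation):

```python
import numpy as np

def axis_surface_intersection(origin, direction, cloud):
    """Approximate the intersection of the probe axis with the tissue surface.

    origin:    (3,) probe tip position from the tracked marker pose
    direction: (3,) vector along the probe axis
    cloud:     (N, 3) tissue point cloud (e.g. from structure from motion)
    Returns the cloud point closest to the ray, in front of the probe.
    """
    d = direction / np.linalg.norm(direction)
    rel = cloud - origin
    t = rel @ d                    # signed distance of each point along the axis
    ahead = t > 0                  # keep only points in front of the probe tip
    # Perpendicular distance of each candidate point from the axis.
    perp = np.linalg.norm(rel[ahead] - np.outer(t[ahead], d), axis=1)
    return cloud[ahead][np.argmin(perp)]
```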

Giannarou S, Hacihaliloglu I, 2020, IJCARS - IPCAI 2020 special issue: 11th conference on information processing for computer-assisted interventions - part 1, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 737-738, ISSN: 1861-6410

Journal article

Cartucho J, Shapira D, Ashrafian H, Giannarou S, et al., 2020, Multimodal mixed reality visualisation for intraoperative surgical guidance, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 819-826, ISSN: 1861-6410

Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose a MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance. Methodology: In this work, a MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities have been designed, including scrolling through volumetric data and adjustment of the virtual objects’ transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions. Results: The analysis of the surgeons’ scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery. Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons’ suggestions and conducting extensive evaluation on a large group of surgeons.

Journal article

Zhao L, Giannarou S, Lee SL, Yang GZ, et al., 2020, Real-Time Robust Simultaneous Catheter and Environment Modeling for Endovascular Navigation, Intravascular Ultrasound: From Acquisition to Advanced Quantitative Analysis, Pages: 185-197, ISBN: 9780128188330

Due to the complexity in catheter control and navigation, endovascular procedures are characterized by significant challenges. Real-time recovery of the 3D structure of the vasculature intraoperatively is necessary to visualize the interaction between the catheter and its surrounding environment and to facilitate catheter manipulations. Nonionizing imaging techniques such as intravascular ultrasound (IVUS) are increasingly used in vessel reconstruction approaches. To enable accurate recovery of vessel structures, this chapter presents a robust and real-time simultaneous catheter and environment modeling method for endovascular navigation based on IVUS imaging, electromagnetic (EM) sensing, as well as the vessel structure information obtained from preoperative CT/MR imaging. By considering the uncertainty in both the IVUS contour and the EM pose in the proposed nonlinear optimization problem, the proposed algorithm can provide accurate vessel reconstruction while at the same time dealing with sensing errors and abrupt catheter motions. Experimental results using two different phantoms, with different catheter motions, demonstrated the accuracy of the vessel reconstruction and the potential clinical value of the proposed method.

Book chapter

Huang B, Tsai Y-Y, Cartucho J, Tuch D, Giannarou S, Elson D, et al., 2020, Tracking and Visualization of the Sensing Area for a Tethered Laparoscopic Gamma Probe, Information Processing in Computer Assisted Intervention (IPCAI)

Conference paper

Cartucho J, Tukra S, Li Y, Elson D, Giannarou S, et al., 2020, VisionBlender: A Tool for Generating Computer Vision Datasets in Robotic Surgery (best paper award), Joint MICCAI 2020 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0)

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
