Imperial College London

Dr Stamatia Giannarou

Faculty of Medicine, Department of Surgery & Cancer

Lecturer in Surgical Cancer Technology and Imaging
 
 
 

Contact

 

+44 (0)20 7594 3492
stamatia.giannarou
Website

 
 

Location

 

413, Bessemer Building, South Kensington Campus



 

Publications


66 results found

Cartucho J, Wang C, Huang B, Elson DS, Darzi A, Giannarou S et al., 2021, An enhanced marker pattern that achieves improved accuracy in surgical tool tracking, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, ISSN: 2168-1163

Journal article

Tukra S, Giannarou S, 2021, Randomly connected neural networks for self-supervised monocular depth estimation, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, ISSN: 2168-1163

Journal article

Maier-Hein L, Eisenmann M, Sarikaya D, März K, Collins T, Malpani A, Fallert J, Feussner H, Giannarou S, Mascagni P, Nakawala H, Park A, Pugh C, Stoyanov D, Vedula SS, Cleary K, Fichtinger G, Forestier G, Gibaud B, Grantcharov T, Hashizume M, Heckmann-Nötzel D, Kenngott HG, Kikinis R, Mündermann L, Navab N, Onogur S, Roß T, Sznitman R, Taylor RH, Tizabi MD, Wagner M, Hager GD, Neumuth T, Padoy N, Collins J, Gockel I, Goedeke J, Hashimoto DA, Joyeux L, Lam K, Leff DR, Madani A, Marcus HJ, Meireles O, Seitel A, Teber D, Ückert F, Müller-Stich BP, Jannin P, Speidel S et al., 2021, Surgical Data Science - from Concepts toward Clinical Translation, Medical Image Analysis, Pages: 102306, ISSN: 1361-8415

Journal article

Davids J, Makariou S-G, Ashrafian H, Darzi A, Marcus HJ, Giannarou S et al., 2021, Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation, World Neurosurgery, Vol: 149, Pages: E669-E686, ISSN: 1878-8750

Journal article

Berthet-Rayne P, Sadati S, Petrou G, Patel N, Giannarou S, Leff DR, Bergeles C et al., 2021, MAMMOBOT: A Miniature Steerable Soft Growing Robot for Early Breast Cancer Detection, IEEE Robotics and Automation Letters, Pages: 1-1

Journal article

Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, März K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D et al., 2021, Ethical implications of AI in robotic surgical training: A Delphi consensus statement, European Urology Focus, ISSN: 2405-4569

Context: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them.

Objectives: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI driven applications in surgical training that address current recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee.

Evidence acquisition: The project was carried out in 3 phases: (1) A steering group was formed to review the literature and summarise current evidence. (2) A larger expert panel convened and discussed the ethical implications of AI application based on the current evidence. A survey was created, with input from panel members. (3) Panel-based consensus findings were determined using an online Delphi process to formulate guidance. 30 experts in AI implementation and/or training, including clinicians, academics and industry, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement.

Evidence synthesis: There was 100% response from all 3 rounds. The resulting formulated guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: 1. Data protection and privacy; 2. Reproducibility and transparency; 3. Predictive analytics; 4. Inherent biases; 5. Areas of training most likely to benefit from AI.

Conclusions: Using the Delphi methodology, we achieved international consensus among experts to develop and reach

Journal article

Tukra S, Marcus HJ, Giannarou S, 2021, See-Through Vision with Unsupervised Scene Occlusion Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: PP

Among the greatest challenges of Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obfuscate anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised end-to-end deep learning framework, based on Fully Convolutional Neural Networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal and adversarial loss terms, for generating high-fidelity image reconstructions. Advancing the state-of-the-art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in-vivo MIS video data, as well as natural scenes, on a range of occlusion-to-image ratios (OIR).

Journal article
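
The "3D partial convolution" above is described only at a high level. As a rough, hedged illustration, the following PyTorch snippet implements a masked 3D convolution in the spirit of the 2D partial convolutions literature; the layer name, normalisation and mask-update rule are our assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv3d(nn.Module):
    """Masked 3D convolution: only unoccluded voxels contribute to each output."""
    def __init__(self, in_ch, out_ch, k=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, k, stride, padding, bias=False)
        # Fixed all-ones kernel used to count valid voxels under each window.
        self.register_buffer("ones", torch.ones(1, 1, k, k, k))
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x: (B, C, T, H, W) video volume; mask: (B, 1, T, H, W), 1 = valid pixel.
        valid = F.conv3d(mask, self.ones, stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)                       # convolve valid data only
        scale = self.ones.numel() / valid.clamp(min=1)  # re-normalise by coverage
        out = out * scale * (valid > 0)                 # zero fully occluded windows
        return out, (valid > 0).float()                 # output and the grown mask
```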

Bautista-Salinas D, Kundrat D, Kogkas A, Abdelaziz MEMK, Giannarou S, Rodriguez y Baena F et al., 2021, Integrated Augmented Reality Feedback for Cochlear Implant Surgery Instruments, IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 261-264

In this article, we present a visualization system to provide assistance in cochlear implant surgery which can be seamlessly integrated within the devices that are currently used in surgery. The system is intended to improve tool alignment in positioning and during insertion, with the aim of reducing the problems encountered during perimodiolar electrode array insertion. Our system is composed of a semi-autonomous hand-held surgical tool, coupled with an optical tracker to monitor the tool position, and an operating microscope. The microscope live view is overlaid with guidance information in the form of augmented reality to assist the surgeon in positioning the surgical tool and in maintaining that position during insertion. Our approach shows promising results in tool alignment, which are comparable to the state of the art.

Journal article

Huang B, Zheng J-Q, Nguyen A, Tuch D, Vyas K, Giannarou S, Elson DS et al., 2021, Self-supervised Generative Adversarial Network for Depth Estimation in Laparoscopic Images, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 227-237, ISSN: 0302-9743

Conference paper

Cartucho J, Tukra S, Li Y, Elson DS, Giannarou S et al., 2021, VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 9, Pages: 331-338, ISSN: 2168-1163

Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real-time. The main problem, however, is that the training or testing of such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, and here we present one of those applications where the generated data has been used to train and evaluate state-of-the-art 3D reconstruction algorithms. Being able to generate realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward in robotic surgery.

Journal article
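
To make the idea of consuming such synthetic ground truth concrete, here is a minimal, hypothetical PyTorch dataset sketch; the per-frame `.npz` layout and the `depth_map`/`segmentation_masks` keys are illustrative assumptions, not VisionBlender's documented output format.

```python
from pathlib import Path
import numpy as np
import torch
from torch.utils.data import Dataset

class SyntheticEndoscopyDataset(Dataset):
    """Loads per-frame ground-truth maps from a (hypothetical) archive layout."""
    def __init__(self, root):
        self.files = sorted(Path(root).glob("frame_*.npz"))

    def __len__(self):
        return len(self.files)

    def __getitem__(self, idx):
        gt = np.load(self.files[idx])
        depth = torch.from_numpy(gt["depth_map"]).float()         # (H, W) metric depth
        seg = torch.from_numpy(gt["segmentation_masks"]).long()   # (H, W) label map
        return depth, seg
```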

Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ et al., 2020, Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance, Neurosurgical Review, Vol: 44, Pages: 1853-1867, ISSN: 0344-5607

At a time of significant global unrest and uncertainty surrounding how the delivery of clinical training will unfold over the coming years, we offer a systematic review, meta-analysis, and bibliometric analysis of global studies showing the crucial role simulation will play in training. Our aim was to determine the types of simulators in use, their effectiveness in improving clinical skills, and whether we have reached a point of global acceptance. A PRISMA-guided global systematic review of the neurosurgical simulators available, a meta-analysis of their effectiveness, and an extended analysis of their progressive scholarly acceptance were performed on studies meeting our inclusion criteria of simulation in neurosurgical education. Improvement in procedural knowledge and technical skills was evaluated. Of the identified 7405 studies, 56 studies met the inclusion criteria, collectively reporting 50 simulator types ranging from cadaveric, low-fidelity, and part-task to virtual reality (VR) simulators. In all, 32 studies were included in the meta-analysis, including 7 randomised controlled trials. A random-effects, ratio-of-means effect measure quantified statistically significant improvement in procedural knowledge by 50.2% (ES 0.502; CI 0.355 to 0.649, p < 0.001), technical skill including accuracy by 32.5% (ES 0.325; CI -0.482 to -0.167, p < 0.001), and speed by 25% (ES -0.25; CI -0.399 to -0.107, p < 0.001). The initial number of VR studies (n = 91) was approximately double the number of refining studies (n = 45), indicating it is yet to reach progressive scholarly acceptance. There is strong evidence for a beneficial impact of adopting simulation in the improvement of procedural knowledge and technical skill. We show a growing trend towards the adoption of neurosurgical simulators, although we have not fully gained progressive scholarly acceptance for VR-based simulation technologies in neurosurgical education.

Journal article
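
For readers unfamiliar with the ratio-of-means effect measure used in this meta-analysis, a toy computation (log mean ratio with a delta-method standard error) might look like the following; the numbers are invented for illustration.

```python
import numpy as np

def log_ratio_of_means(m_t, sd_t, n_t, m_c, sd_c, n_c):
    es = np.log(m_t / m_c)                        # effect size: ln(mean ratio)
    se = np.sqrt(sd_t**2 / (n_t * m_t**2) + sd_c**2 / (n_c * m_c**2))
    ci = (es - 1.96 * se, es + 1.96 * se)         # 95% confidence interval
    return es, se, ci

# Example: simulator-trained group scores 75 +/- 10 (n=20) vs 50 +/- 12 (n=20).
print(log_ratio_of_means(75, 10, 20, 50, 12, 20))
```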

Zhan J, Cartucho J, Giannarou S, 2020, Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation, 2020 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 11147-11154

In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of motion is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or translating with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue motion. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. The desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form motion. We deployed this framework on the da Vinci® surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Our framework can be easily extended to other probe-based imaging modalities.

Conference paper
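
The trajectory-update step described above ("continuously updated using projective geometry") could, under a local planarity assumption, be sketched with OpenCV as below; this is our simplification, since the paper recovers the full 3D structure of the scene.

```python
import cv2
import numpy as np

def update_trajectory(ref_pts, cur_pts, trajectory_ref):
    # ref_pts, cur_pts: (N, 2) matched tissue features in reference/current frames.
    H, _ = cv2.findHomography(ref_pts.astype(np.float32),
                              cur_pts.astype(np.float32), cv2.RANSAC, 3.0)
    traj = trajectory_ref.reshape(-1, 1, 2).astype(np.float32)
    # Re-map the user-drawn scanning path into the current frame.
    return cv2.perspectiveTransform(traj, H).reshape(-1, 2)
```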

Huang B, Tsai Y-Y, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2020, Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 1389-1397, ISSN: 1861-6410

Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed.

Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector, and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons.

Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°.

Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hybrid marker.

Journal article
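
The pose-estimation step for a planar marker of this kind typically reduces to a PnP problem once the pattern points are detected; a hedged OpenCV sketch (our choice of solver, not necessarily the authors' exact pipeline) follows.

```python
import cv2
import numpy as np

def estimate_probe_pose(obj_pts, img_pts, K, dist):
    # obj_pts: (N, 3) marker-frame coordinates of chessboard vertices / dot centres
    # img_pts: (N, 2) detected image locations; K, dist: camera intrinsics
    ok, rvec, tvec = cv2.solvePnP(
        obj_pts.astype(np.float32), img_pts.astype(np.float32),
        K, dist, flags=cv2.SOLVEPNP_IPPE)  # IPPE suits coplanar marker points
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)             # rotation matrix, camera <- marker
    return R, tvec
```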

Giannarou S, Hacihaliloglu I, 2020, IJCARS - IPCAI 2020 special issue: 11th conference on information processing for computer-assisted interventions - part 1, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 737-738, ISSN: 1861-6410

Journal article

Cartucho J, Shapira D, Ashrafian H, Giannarou S et al., 2020, Multimodal mixed reality visualisation for intraoperative surgical guidance, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 819-826, ISSN: 1861-6410

Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose a MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance.

Methodology: In this work, a MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities have been designed, including scrolling through volumetric data and adjustment of the virtual objects’ transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions.

Results: The analysis of the surgeons’ scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery.

Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons’ suggestions and conducting extensive evaluation on a large group of surgeons.

Journal article

Cartucho J, Tukra S, Li Y, Elson D, Giannarou S et al., 2020, VisionBlender: A Tool for Generating Computer Vision Datasets in Robotic Surgery (best paper award), Joint MICCAI 2020 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0)

Conference paper

Huang B, Tsai Y-Y, Cartucho J, Tuch D, Giannarou S, Elson D et al., 2020, Tracking and Visualization of the Sensing Area for a Tethered Laparoscopic Gamma Probe, Information Processing in Computer Assisted Intervention (IPCAI)

Conference paper

Li Y, Charalampaki P, Liu Y, Yang G-Z, Giannarou S et al., 2018, Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data, International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 1187-1199, ISSN: 1861-6410

Purpose: Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures.

Methods: The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures and data representation and video segmentation methods.

Results: We demonstrate the application of the proposed deep learning framework to classify Glioblastoma and Meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving accuracy equal to 99.49%.

Conclusions: This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify Glioblastoma and Meningioma tumours.

Journal article
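
As a rough sketch of the "convolutional layers with long-range temporal recursion" idea, a per-frame CNN feeding an LSTM could be wired up as below in PyTorch; the backbone, feature size and choice of final time step are our assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from torchvision import models

class VideoTumourClassifier(nn.Module):
    """Per-frame CNN features, fused over time by an LSTM, one label per clip."""
    def __init__(self, num_classes=2, hidden=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()                 # 512-d per-frame features
        self.cnn = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, clips):                       # clips: (B, T, 3, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1)).view(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])                # logits from the last time step
```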

Triantafyllou P, Wisanuvej P, Giannarou S, Liu J, Yang G-Z et al., 2018, A Framework for Sensorless Tissue Motion Tracking in Robotic Endomicroscopy Scanning, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE Computer Society, Pages: 2694-2699, ISSN: 1050-4729

Conference paper

Shen M, Giannarou S, Shah PL, Yang GZ et al., 2017, BRANCH: bifurcation recognition for airway navigation based on structural characteristics, MICCAI 2017, Publisher: Springer, Pages: 182-189, ISSN: 0302-9743

Bronchoscopic navigation is challenging, especially at the level of peripheral airways due to the complicated bronchial structures and the large respiratory motion. The aim of this paper is to propose a localisation approach tailored for navigation in the distal airway branches. Salient regions are detected on the depth maps of video images and CT virtual projections to extract anatomically meaningful areas that represent airway bifurcations. An airway descriptor based on shape context is introduced which encodes both the structural characteristics of the bifurcations and their spatial distribution. The bronchoscopic camera is localised in the airways by minimising the cost of matching the region features in video images to the pre-computed CT depth maps considering both the shape and temporal information. The method has been validated on phantom and in vivo data and the results verify its robustness to tissue deformation and good performance in distal airways.

Conference paper
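
Shape context, on which the airway descriptor builds, assigns each point a log-polar histogram of the relative positions of all other points. A minimal NumPy version (bin counts and normalisation chosen by us, not taken from the paper) is sketched below.

```python
import numpy as np

def shape_context(points, index, n_r=5, n_theta=12):
    # Log-polar histogram of the other points' offsets from points[index].
    rel = np.delete(points, index, axis=0) - points[index]
    r = np.linalg.norm(rel, axis=1)
    theta = np.arctan2(rel[:, 1], rel[:, 0])                  # angle in (-pi, pi]
    r_edges = np.logspace(np.log10(r.min() + 1e-6),
                          np.log10(r.max() + 1e-6), n_r + 1)  # log-spaced radii
    r_bin = np.clip(np.digitize(r, r_edges) - 1, 0, n_r - 1)
    t_bin = ((theta + np.pi) / (2 * np.pi) * n_theta).astype(int) % n_theta
    hist = np.zeros((n_r, n_theta))
    np.add.at(hist, (r_bin, t_bin), 1)                        # accumulate counts
    return hist / hist.sum()                                  # normalised descriptor
```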

Zhang L, Ye M, Giannarou S, Pratt P, Yang GZ et al., 2017, Motion-compensated autonomous scanning for tumour localisation using intraoperative ultrasound, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 10434, Pages: 619-627, ISSN: 0302-9743

Intraoperative ultrasound facilitates localisation of tumour boundaries during minimally invasive procedures. Autonomous ultrasound scanning systems have been recently proposed to improve scanning accuracy and reduce surgeons’ cognitive load. However, current methods mainly consider static scanning environments, typically with the probe pressing against the tissue surface. In this work, a motion-compensated autonomous ultrasound scanning system using the da Vinci® Research Kit (dVRK) is proposed. An optimal scanning trajectory is generated considering both the tissue surface shape and the ultrasound transducer dimensions. An effective vision-based approach is proposed to learn the underlying tissue motion characteristics. The learned motion model is then incorporated into the visual servoing framework. The proposed system has been validated with both phantom and ex vivo experiments.

Journal article
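
A toy stand-in for "learning the underlying tissue motion characteristics" is to fit a periodic model to one tracked surface coordinate and predict ahead; the paper's learned model is more general, so treat this SciPy sketch as illustrative only.

```python
import numpy as np
from scipy.optimize import curve_fit

def sinusoid(t, a, f, phi, c):
    return a * np.sin(2 * np.pi * f * t + phi) + c

t = np.linspace(0, 10, 300)                          # 10 s of tracking at 30 Hz
z = 2.0 * np.sin(2 * np.pi * 0.25 * t + 0.3) + 5.0   # synthetic breathing motion (mm)
z_noisy = z + np.random.normal(0, 0.1, t.size)       # tracked point with noise

params, _ = curve_fit(sinusoid, t, z_noisy, p0=[1.0, 0.3, 0.0, np.mean(z_noisy)])
z_pred = sinusoid(t[-1] + 0.1, *params)              # predict 100 ms ahead
print(f"fitted amplitude {params[0]:.2f} mm, frequency {params[1]:.2f} Hz")
```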

Maier-Hein L, Vedula SS, Speidel S, Navab N, Kikinis R, Park A, Eisenmann M, Feussner H, Forestier G, Giannarou S, Hashizume M, Katic D, Kenngott H, Kranzfelder M, Malpani A, Maerz K, Neumuth T, Padoy N, Pugh C, Schoch N, Stoyanov D, Taylor R, Wagner M, Hager GD, Jannin P et al., 2017, Surgical data science for next-generation interventions, Nature Biomedical Engineering, Vol: 1, Pages: 691-696, ISSN: 2157-846X

Interventional healthcare will evolve from an artisanal craft based on the individual experiences, preferences and traditions of physicians into a discipline that relies on objective decision-making on the basis of large-scale data from heterogeneous sources.

Journal article

Zhao L, Giannarou S, Lee S, Yang GZ et al., 2016, Registration-free simultaneous catheter and environment modelling, Medical Image Computing and Computer Assisted Intervention (MICCAI) 2016, Publisher: Springer

Endovascular procedures are challenging to perform due to the complexity and difficulty in catheter manipulation. The simultaneous recovery of the 3D structure of the vasculature and the catheter position and orientation intra-operatively is necessary in catheter control and navigation. State-of-the-art Simultaneous Catheter and Environment Modelling provides robust and real-time 3D vessel reconstruction based on real-time intravascular ultrasound (IVUS) imaging and electromagnetic (EM) sensing, but still relies on accurate registration between EM and pre-operative data. In this paper, a registration-free vessel reconstruction method is proposed for endovascular navigation. In the optimisation framework, the EM-CT registration is estimated and updated intra-operatively together with the 3D vessel reconstruction from IVUS, EM and pre-operative data, and thus does not require explicit registration. The proposed algorithm can also deal with global (patient) motion and periodic deformation caused by cardiac motion. Phantom and in-vivo experiments validate the accuracy of the algorithm and the results demonstrate the potential clinical value of the technique.

Conference paper

Ye M, Zhang L, Giannarou S, Yang G-Z et al., 2016, Real-Time 3D Tracking of Articulated Tools for Robotic Surgery, International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Publisher: Springer, Pages: 386-394, ISSN: 0302-9743

In robotic surgery, tool tracking is important for providing safe tool-tissue interaction and facilitating surgical skills assessment. Despite recent advances in tool tracking, existing approaches are faced with major difficulties in real-time tracking of articulated tools. Most algorithms are tailored for offline processing with pre-recorded videos. In this paper, we propose a real-time 3D tracking method for articulated tools in robotic surgery. The proposed method is based on the CAD model of the tools as well as robot kinematics to generate online part-based templates for efficient 2D matching and 3D pose estimation. A robust verification approach is incorporated to reject outliers in 2D detections, which is then followed by fusing inliers with robot kinematic readings for 3D pose estimation of the tool. The proposed method has been validated with phantom data, as well as ex vivo and in vivo experiments. The results derived clearly demonstrate the performance advantage of the proposed method when compared to the state-of-the-art.

Conference paper
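
The 2D matching stage can be pictured as template matching against online-rendered part templates; a bare-bones OpenCV stand-in (with a pre-supplied template rather than one generated from the CAD model and kinematics) is shown below.

```python
import cv2

def match_tool_part(frame_gray, template_gray):
    # Slide the part template over the frame and take the best correlation.
    res = cv2.matchTemplate(frame_gray, template_gray, cv2.TM_CCOEFF_NORMED)
    _, score, _, top_left = cv2.minMaxLoc(res)      # best response and its location
    h, w = template_gray.shape
    centre = (top_left[0] + w // 2, top_left[1] + h // 2)
    return centre, score                            # low scores -> reject as outliers
```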

Zhao L, Giannarou S, Lee S, Yang GZ et al., 2016, SCEM+: real-time robust simultaneous catheter and environment modeling for endovascular navigation, IEEE Robotics and Automation Letters, Vol: 1, Pages: 961-968, ISSN: 2377-3766

Endovascular procedures are characterised by significant challenges mainly due to the complexity in catheter control and navigation. Real-time recovery of the 3-D structure of the vasculature is necessary to visualise the interaction between the catheter and its surrounding environment to facilitate catheter manipulations. State-of-the-art intraoperative vessel reconstruction approaches are increasingly relying on nonionising imaging techniques such as optical coherence tomography (OCT) and intravascular ultrasound (IVUS). To enable accurate recovery of vessel structures and to deal with sensing errors and abrupt catheter motions, this letter presents a robust and real-time vessel reconstruction scheme for endovascular navigation based on IVUS and electromagnetic (EM) tracking. It is formulated as a nonlinear optimisation problem, which considers the uncertainty in both the IVUS contour and the EM pose, as well as vessel morphology provided by preoperative data. Detailed phantom validation is performed and the results demonstrate the potential clinical value of the technique.

Journal article
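
The fusion in SCEM+ is described as a nonlinear optimisation over the IVUS contour, the EM pose and pre-operative morphology. A schematic least-squares toy (residual terms, weights and numbers all invented for illustration) could look like this:

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(x, em_pos, ivus_radius, prior_pos, prior_radius, w):
    centre, radius = x[:3], x[3]
    return np.concatenate([
        w[0] * (centre - em_pos),               # stay close to the EM reading
        [w[1] * (radius - ivus_radius)],        # agree with the IVUS contour
        w[2] * (centre - prior_pos),            # respect pre-operative morphology
        [w[3] * (radius - prior_radius)],
    ])

x0 = np.array([0.0, 0.0, 0.0, 2.0])             # initial centre (mm) and radius (mm)
sol = least_squares(residuals, x0, args=(
    np.array([0.5, -0.2, 0.1]), 2.3, np.array([0.4, 0.0, 0.0]), 2.0,
    [1.0, 2.0, 0.5, 0.5]))
print(sol.x)                                    # fused vessel centre and radius
```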

Zhao L, Giannarou S, Lee S, Merrifield R, Yang GZ et al., 2016, Intra-operative simultaneous catheter and environment modelling for endovascular navigation based on intravascular ultrasound, electromagnetic tracking and pre-operative data, The Hamlyn Symposium on Medical Robotics, Publisher: The Hamlyn Symposium on Medical Robotics, Pages: 76-77

Conference paper

Giannarou S, Ye M, Gras G, Leibrandt K, Marcus HJ, Yang GZ et al., 2016, Vision-based deformation recovery for intraoperative force estimation of tool–tissue interaction for neurosurgery, International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 929-936, ISSN: 1861-6410

Purpose: In microsurgery, accurate recovery of the deformation of the surgical environment is important for mitigating the risk of inadvertent tissue damage and avoiding instrument maneuvers that may cause injury. The analysis of intraoperative microscopic data can allow the estimation of tissue deformation and provide the surgeon with useful feedback on the instrument forces exerted on the tissue. In practice, vision-based recovery of tissue deformation during tool–tissue interaction can be challenging due to tissue elasticity and unpredictable motion.

Methods: The aim of this work is to propose an approach for deformation recovery based on quasi-dense 3D stereo reconstruction. The proposed framework incorporates a new stereo correspondence method for estimating the underlying 3D structure. Probabilistic tracking and surface mapping are used to estimate 3D point correspondences across time and recover localized tissue deformations in the surgical site.

Results: We demonstrate the application of this method to estimating forces exerted on tissue surfaces. A clinically relevant experimental setup was used to validate the proposed framework on phantom data. The quantitative and qualitative performance evaluation results show that the proposed 3D stereo reconstruction and deformation recovery methods achieve submillimeter accuracy. The force–displacement model also provides accurate estimates of the exerted forces.

Conclusions: A novel approach for tissue deformation recovery has been proposed based on reliable quasi-dense stereo correspondences. The proposed framework does not rely on additional equipment, allowing seamless integration with the existing surgical workflow. The performance evaluation analysis shows the potential clinical value of the technique.

Journal article
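
The two endpoints of this pipeline, 3D structure from stereo and a force-displacement mapping, can be caricatured in a few lines; the linear stiffness below is a made-up constant, whereas the paper calibrates its force-displacement model experimentally.

```python
import cv2
import numpy as np

def depth_and_force(disparity, Q, displacement_mm, k_n_per_mm=0.12):
    # Rectified disparity map + reprojection matrix Q -> per-pixel 3D points.
    points_3d = cv2.reprojectImageTo3D(disparity.astype(np.float32), Q)  # (H, W, 3)
    # Linear elastic assumption: F = k * d (k is illustrative, not calibrated).
    force_n = k_n_per_mm * displacement_mm
    return points_3d, force_n
```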

Ye M, Giannarou S, Meining A, Yang G-Z et al., 2016, Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations, Medical Image Analysis, Vol: 30, Pages: 144-157, ISSN: 1361-8415

With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography, can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques to provide online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras only have a limited field-of-view and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed where a random binary descriptor using Haar-like features is included as a random forest classifier. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.

Journal article
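
A random binary descriptor over Haar-like tests, as mentioned in the abstract, can be sketched as bit-vectors of random box-mean comparisons feeding a random forest; the patch size, window size and test count below are our choices, not the paper's.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# 64 binary tests on a 32x32 patch: two random 8x8 sub-window corners per test.
PAIRS = rng.integers(0, 24, size=(64, 2, 2))

def binary_descriptor(patch):                    # patch: (32, 32) grayscale
    bits = []
    for (y1, x1), (y2, x2) in PAIRS:
        a = patch[y1:y1 + 8, x1:x1 + 8].mean()   # Haar-like box responses
        b = patch[y2:y2 + 8, x2:x2 + 8].mean()
        bits.append(a > b)
    return np.array(bits, dtype=np.uint8)

# Example: train on random synthetic patches with dummy labels.
X = np.stack([binary_descriptor(rng.random((32, 32))) for _ in range(100)])
y = rng.integers(0, 2, size=100)
forest = RandomForestClassifier(n_estimators=50).fit(X, y)
```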

Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, Poorten EV et al., 2016, Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions (vol 11, pg 553, 2016), International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 847-847, ISSN: 1861-6410

Journal article

Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, Vander Poorten E et al., 2015, Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions, International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 553-568, ISSN: 1861-6410

Purpose: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery, with a focus on surgical robotics (SR). Also, we provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room.

Methods: The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive.

Results: Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is an increasing interest in using ML for developing tools to understand and model surgical skill and competence or to extract surgical workflow. Many researchers begin to integrate this understanding into the control of recent surgical robots and devices.

Conclusion: ML is an expanding field. It is popular as it allows efficient processing of vast amounts of data for interpreting and real-time decision making. Already widely used in imaging and diagnosis, it is believed that ML will also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots endowed with cognitive skills would assist the surgical team also on a cognitive level, such as possibly lowering the mental load of the team. For example, ML could help extract surgical skill, learned through demonstration by human experts, and could transfer this to robotic skills. Such intelligent surgical assistance would significantly surpass the state of the art.

Journal article

