Imperial College London

Dr Stamatia Giannarou

Faculty of Medicine, Department of Surgery & Cancer

Senior Lecturer
 
 
 

Contact

 

+44 (0)20 7594 3492 | stamatia.giannarou | Website

 
 

Location

 

413, Bessemer Building, South Kensington Campus



 

Publications


87 results found

Zhan J, Cartucho J, Giannarou S, 2020, Autonomous tissue scanning under free-form motion for intraoperative tissue characterisation, 2020 IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 11147-11154

In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning of large tissue surfaces in the presence of motion is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes to facilitate the capturing of good quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or translating with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning, able to deal with free-form tissue motion. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real-time. The desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require the learning of the tissue motion prior to scanning and can deal with free-form motion. We deployed this framework on the da Vinci® surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Our framework can be easily extended to other probe-based imaging modalities.
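The projective-geometry trajectory update described in this abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): scanning points defined on a reference frame are transferred to the current frame through a homography estimated from the tracked tissue features.

```python
import numpy as np

def transfer_trajectory(points_ref, H):
    """Map 2D trajectory points from the reference frame to the
    current frame using a 3x3 homography H (planar approximation)."""
    pts = np.asarray(points_ref, dtype=float)          # (N, 2)
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])   # homogeneous coords
    mapped = (H @ pts_h.T).T
    return mapped[:, :2] / mapped[:, 2:3]              # dehomogenise

# Assumed example: a pure translation by (5, -3) pixels, as would be
# estimated between frames of a slowly translating tissue surface.
H = np.array([[1.0, 0.0, 5.0],
              [0.0, 1.0, -3.0],
              [0.0, 0.0, 1.0]])
print(transfer_trajectory([[0, 0], [10, 20]], H))  # points shift by (5, -3)
```

In practice the homography would be re-estimated every frame from the tracked tissue features, so the scanning trajectory continuously follows the free-form motion.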

Conference paper

Huang B, Tsai Y-Y, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2020, Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 1389-1397, ISSN: 1861-6410

Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed.

Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and the pixel intensity-based vertices detector and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons.

Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°.

Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hyb
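The probe-axis/tissue-surface intersection step can be sketched as a nearest-point query against the reconstructed point cloud. This is illustrative code under assumed names and inputs, not the paper's implementation:

```python
import numpy as np

def sensing_point(cloud, origin, direction):
    """Approximate the probe's sensing area on the tissue as the
    point-cloud point closest to the probe axis (a 3D ray defined by
    the tracked probe pose)."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)                          # unit axis direction
    v = np.asarray(cloud, dtype=float) - np.asarray(origin, dtype=float)
    t = v @ d                                          # projection onto axis
    perp = v - np.outer(t, d)                          # perpendicular offsets
    dist = np.linalg.norm(perp, axis=1)                # distance to the axis
    return np.asarray(cloud, dtype=float)[np.argmin(dist)]

# Probe at the origin pointing along +z: the point lying on the axis
# is selected as the sensing location.
cloud = [[0.0, 0.0, 5.0], [1.0, 1.0, 5.0], [3.0, 0.0, 5.0]]
print(sensing_point(cloud, [0, 0, 0], [0, 0, 1]))
```

A full system would intersect the axis with a surface mesh fitted to the cloud rather than pick a raw point, but the geometric idea is the same.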

Journal article

Giannarou S, Hacihaliloglu I, 2020, IJCARS - IPCAI 2020 special issue: 11th conference on information processing for computer-assisted interventions - part 1, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 737-738, ISSN: 1861-6410

Journal article

Cartucho J, Shapira D, Ashrafian H, Giannarou S et al., 2020, Multimodal mixed reality visualisation for intraoperative surgical guidance, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 819-826, ISSN: 1861-6410

Purpose: In the last decade, there has been a great effort to bring mixed reality (MR) into the operating room to assist surgeons intraoperatively. However, progress towards this goal is still at an early stage. The aim of this paper is to propose a MR visualisation platform which projects multiple imaging modalities to assist intraoperative surgical guidance.

Methodology: In this work, a MR visualisation platform has been developed for the Microsoft HoloLens. The platform contains three visualisation components, namely a 3D organ model, volumetric data, and tissue morphology captured with intraoperative imaging modalities. Furthermore, a set of novel interactive functionalities have been designed including scrolling through volumetric data and adjustment of the virtual objects’ transparency. A pilot user study has been conducted to evaluate the usability of the proposed platform in the operating room. The participants were allowed to interact with the visualisation components and test the different functionalities. Each surgeon answered a questionnaire on the usability of the platform and provided their feedback and suggestions.

Results: The analysis of the surgeons’ scores showed that the 3D model is the most popular MR visualisation component and neurosurgery is the most relevant speciality for this platform. The majority of the surgeons found the proposed visualisation platform intuitive and would use it in their operating rooms for intraoperative surgical guidance. Our platform has several promising potential clinical applications, including vascular neurosurgery.

Conclusion: The presented pilot study verified the potential of the proposed visualisation platform and its usability in the operating room. Our future work will focus on enhancing the platform by incorporating the surgeons’ suggestions and conducting extensive evaluation on a large group of surgeons.

Journal article

Cartucho J, Tukra S, Li Y, Elson D, Giannarou S et al., 2020, VisionBlender: A Tool for Generating Computer Vision Datasets in Robotic Surgery (best paper award), Joint MICCAI 2020 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0)

Conference paper

Zhao L, Giannarou S, Lee SL, Yang GZ et al., 2020, Real-Time Robust Simultaneous Catheter and Environment Modeling for Endovascular Navigation, Intravascular Ultrasound: From Acquisition to Advanced Quantitative Analysis, Pages: 185-197, ISBN: 9780128188330

Due to the complexity in catheter control and navigation, endovascular procedures are characterized by significant challenges. Real-time recovery of the 3D structure of the vasculature intraoperatively is necessary to visualize the interaction between the catheter and its surrounding environment to facilitate catheter manipulations. Nonionizing imaging techniques such as intravascular ultrasound (IVUS) are increasingly used in vessel reconstruction approaches. To enable accurate recovery of vessel structures, this chapter presents a robust and real-time simultaneous catheter and environment modeling method for endovascular navigation based on IVUS imaging, electromagnetic (EM) sensing as well as the vessel structure information obtained from the preoperative CT/MR imaging. By considering the uncertainty in both the IVUS contour and the EM pose in the proposed nonlinear optimization problem, the proposed algorithm can provide accurate vessel reconstruction while dealing with sensing errors and abrupt catheter motions. Experimental results using two different phantoms with different catheter motions demonstrate the accuracy of the vessel reconstruction and the potential clinical value of the proposed method.

Book chapter

Huang B, Tsai Y-Y, Cartucho J, Tuch D, Giannarou S, Elson D et al., 2020, Tracking and Visualization of the Sensing Area for a Tethered Laparoscopic Gamma Probe, Information Processing in Computer Assisted Intervention (IPCAI)

Conference paper

Li Y, Charalampaki P, Liu Y, Yang G-Z, Giannarou S et al., 2018, Context aware decision support in neurosurgical oncology based on an efficient classification of endomicroscopic data, International Journal of Computer Assisted Radiology and Surgery, Vol: 13, Pages: 1187-1199, ISSN: 1861-6410

Purpose: Probe-based confocal laser endomicroscopy (pCLE) enables in vivo, in situ tissue characterisation without changes in the surgical setting and simplifies the oncological surgical workflow. The potential of this technique in identifying residual cancer tissue and improving resection rates of brain tumours has been recently verified in pilot studies. The interpretation of endomicroscopic information is challenging, particularly for surgeons who do not themselves routinely review histopathology. Also, the diagnosis can be examiner-dependent, leading to considerable inter-observer variability. Therefore, automatic tissue characterisation with pCLE would support the surgeon in establishing diagnosis as well as guide robot-assisted intervention procedures.

Methods: The aim of this work is to propose a deep learning-based framework for brain tissue characterisation for context aware diagnosis support in neurosurgical oncology. An efficient representation of the context information of pCLE data is presented by exploring state-of-the-art CNN models with different tuning configurations. A novel video classification framework based on the combination of convolutional layers with long-range temporal recursion has been proposed to estimate the probability of each tumour class. The video classification accuracy is compared for different network architectures and data representation and video segmentation methods.

Results: We demonstrate the application of the proposed deep learning framework to classify Glioblastoma and Meningioma brain tumours based on endomicroscopic data. Results show significant improvement of our proposed image classification framework over state-of-the-art feature-based methods. The use of video data further improves the classification performance, achieving accuracy equal to 99.49%.

Conclusions: This work demonstrates that deep learning can provide an efficient representation of pCLE data and accurately classify Glioblastoma and Meningioma tumours. Th

Journal article

Triantafyllou P, Wisanuvej P, Giannarou S, Liu J, Yang G-Z et al., 2018, A Framework for Sensorless Tissue Motion Tracking in Robotic Endomicroscopy Scanning, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 2694-2699, ISSN: 1050-4729

Conference paper

Shen M, Giannarou S, Shah PL, Yang GZ et al., 2017, Branch: Bifurcation recognition for airway navigation based on structural characteristics, MICCAI 2017, Publisher: Springer, Pages: 182-189, ISSN: 0302-9743

Bronchoscopic navigation is challenging, especially at the level of peripheral airways due to the complicated bronchial structures and the large respiratory motion. The aim of this paper is to propose a localisation approach tailored for navigation in the distal airway branches. Salient regions are detected on the depth maps of video images and CT virtual projections to extract anatomically meaningful areas that represent airway bifurcations. An airway descriptor based on shape context is introduced which encodes both the structural characteristics of the bifurcations and their spatial distribution. The bronchoscopic camera is localised in the airways by minimising the cost of matching the region features in video images to the pre-computed CT depth maps considering both the shape and temporal information. The method has been validated on phantom and in vivo data and the results verify its robustness to tissue deformation and good performance in distal airways.

Conference paper

Zhang L, Ye M, Giannarou S, Pratt P, Yang GZ et al., 2017, Motion-compensated autonomous scanning for tumour localisation using intraoperative ultrasound, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 10434, Pages: 619-627, ISSN: 0302-9743

Intraoperative ultrasound facilitates localisation of tumour boundaries during minimally invasive procedures. Autonomous ultrasound scanning systems have been recently proposed to improve scanning accuracy and reduce surgeons’ cognitive load. However, current methods mainly consider static scanning environments typically with the probe pressing against the tissue surface. In this work, a motion-compensated autonomous ultrasound scanning system using the da Vinci® Research Kit (dVRK) is proposed. An optimal scanning trajectory is generated considering both the tissue surface shape and the ultrasound transducer dimensions. An effective vision-based approach is proposed to learn the underlying tissue motion characteristics. The learned motion model is then incorporated into the visual servoing framework. The proposed system has been validated with both phantom and ex vivo experiments.

Journal article

Maier-Hein L, Vedula SS, Speidel S, Navab N, Kikinis R, Park A, Eisenmann M, Feussner H, Forestier G, Giannarou S, Hashizume M, Katic D, Kenngott H, Kranzfelder M, Malpani A, Maerz K, Neumuth T, Padoy N, Pugh C, Schoch N, Stoyanov D, Taylor R, Wagner M, Hager GD, Jannin P et al., 2017, Surgical data science for next-generation interventions, Nature Biomedical Engineering, Vol: 1, Pages: 691-696, ISSN: 2157-846X

Interventional healthcare will evolve from an artisanal craft based on the individual experiences, preferences and traditions of physicians into a discipline that relies on objective decision-making on the basis of large-scale data from heterogeneous sources.

Journal article

Zhao L, Giannarou S, Lee S, Yang GZ et al., 2016, Registration-free simultaneous catheter and environment modelling, Medical Image Computing and Computer Assisted Intervention (MICCAI) 2016, Publisher: Springer

Endovascular procedures are challenging to perform due to the complexity and difficulty in catheter manipulation. The simultaneous recovery of the 3D structure of the vasculature and the catheter position and orientation intra-operatively is necessary in catheter control and navigation. State-of-the-art Simultaneous Catheter and Environment Modelling provides robust and real-time 3D vessel reconstruction based on real-time intravascular ultrasound (IVUS) imaging and electromagnetic (EM) sensing, but still relies on accurate registration between EM and pre-operative data. In this paper, a registration-free vessel reconstruction method is proposed for endovascular navigation. In the optimisation framework, the EM-CT registration is estimated and updated intra-operatively together with the 3D vessel reconstruction from IVUS, EM and pre-operative data, and thus does not require explicit registration. The proposed algorithm can also deal with global (patient) motion and periodic deformation caused by cardiac motion. Phantom and in-vivo experiments validate the accuracy of the algorithm and the results demonstrate the potential clinical value of the technique.

Conference paper

Ye M, Zhang L, Giannarou S, Yang G-Z et al., 2016, Real-Time 3D Tracking of Articulated Tools for Robotic Surgery, International Conference on Medical Image Computing and Computer-Assisted Intervention – MICCAI 2016, Publisher: Springer, Pages: 386-394, ISSN: 0302-9743

In robotic surgery, tool tracking is important for providing safe tool-tissue interaction and facilitating surgical skills assessment. Despite recent advances in tool tracking, existing approaches are faced with major difficulties in real-time tracking of articulated tools. Most algorithms are tailored for offline processing with pre-recorded videos. In this paper, we propose a real-time 3D tracking method for articulated tools in robotic surgery. The proposed method is based on the CAD model of the tools as well as robot kinematics to generate online part-based templates for efficient 2D matching and 3D pose estimation. A robust verification approach is incorporated to reject outliers in 2D detections, which is then followed by fusing inliers with robot kinematic readings for 3D pose estimation of the tool. The proposed method has been validated with phantom data, as well as ex vivo and in vivo experiments. The results derived clearly demonstrate the performance advantage of the proposed method when compared to the state-of-the-art.

Conference paper

Vander Poorten E, Tran P, Devreker A, Gruijthuijsen C, Portoles-Diez S, Smoljkic G, Strbac V, Famaey N, Reynaerts D, Vander Sloten J, Tibebu A, Yu B, Rauch C, Bernard F, Kassahun Y, Metzen JH, Giannarou S, Zhao L, Lee S, Yang G, Mazomenos E, Chang P, Stoyanov D, Kvasnytsia M, Van Deun J, Verhoelst E, Sette M, Di Iasio A, Leo G, Hertner F, Scherly D, Chelini L, Häni N, Seatovic D, Rosa B, De Praetere H, Herijgers P et al., 2016, Cognitive Autonomous Catheters Operating in Dynamic Environments, Journal of Medical Robotics Research, Vol: 01, ISSN: 2424-905X

Advances in miniaturized surgical instrumentation are key to less demanding and safer medical interventions. In cardiovascular procedures, interventionalists turn towards catheter-based interventions, treating patients considered unfit for more invasive approaches. A positive outcome is not guaranteed. The risk for calcium dislodgement, tissue damage or even vessel rupture cannot be eliminated when instruments are maneuvered through fragile and diseased vessels. This paper reports on the progress made in terms of catheter design, vessel reconstruction, catheter shape modeling, surgical skill analysis, decision making and control. These efforts are geared towards the development of the necessary technology to autonomously steer catheters through the vasculature, a target of the EU-funded project Cognitive AutonomouS CAtheters operating in Dynamic Environments (CASCADE). Whereas autonomous placement of an aortic valve implant forms the ultimate and concrete goal, the technology of the individual building blocks needed to reach such an ambitious goal is expected to have an impact much sooner, assisting interventionalists in their daily clinical practice.

Journal article

Zhao L, Giannarou S, Lee S, Yang GZ et al., 2016, SCEM+: real-time robust simultaneous catheter and environment modeling for endovascular navigation, IEEE Robotics and Automation Letters, Vol: 1, Pages: 961-968, ISSN: 2377-3766

Endovascular procedures are characterised by significant challenges mainly due to the complexity in catheter control and navigation. Real-time recovery of the 3-D structure of the vasculature is necessary to visualise the interaction between the catheter and its surrounding environment to facilitate catheter manipulations. State-of-the-art intraoperative vessel reconstruction approaches are increasingly relying on nonionising imaging techniques such as optical coherence tomography (OCT) and intravascular ultrasound (IVUS). To enable accurate recovery of vessel structures and to deal with sensing errors and abrupt catheter motions, this letter presents a robust and real-time vessel reconstruction scheme for endovascular navigation based on IVUS and electromagnetic (EM) tracking. It is formulated as a nonlinear optimisation problem, which considers the uncertainty in both the IVUS contour and the EM pose, as well as vessel morphology provided by preoperative data. Detailed phantom validation is performed and the results demonstrate the potential clinical value of the technique.

Journal article

Zhao L, Giannarou S, Lee S, Merrifield R, Yang GZ et al., 2016, Intra-operative simultaneous catheter and environment modelling for endovascular navigation based on intravascular ultrasound, electromagnetic tracking and pre-operative data, The Hamlyn Symposium on Medical Robotics, Publisher: The Hamlyn Symposium on Medical Robotics, Pages: 76-77

Conference paper

Giannarou S, Ye M, Gras G, Leibrandt K, Marcus HJ, Yang GZ et al., 2016, Vision-based deformation recovery for intraoperative force estimation of tool–tissue interaction for neurosurgery, International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 929-936, ISSN: 1861-6410

Purpose: In microsurgery, accurate recovery of the deformation of the surgical environment is important for mitigating the risk of inadvertent tissue damage and avoiding instrument maneuvers that may cause injury. The analysis of intraoperative microscopic data can allow the estimation of tissue deformation and provide to the surgeon useful feedback on the instrument forces exerted on the tissue. In practice, vision-based recovery of tissue deformation during tool–tissue interaction can be challenging due to tissue elasticity and unpredictable motion.

Methods: The aim of this work is to propose an approach for deformation recovery based on quasi-dense 3D stereo reconstruction. The proposed framework incorporates a new stereo correspondence method for estimating the underlying 3D structure. Probabilistic tracking and surface mapping are used to estimate 3D point correspondences across time and recover localized tissue deformations in the surgical site.

Results: We demonstrate the application of this method to estimating forces exerted on tissue surfaces. A clinically relevant experimental setup was used to validate the proposed framework on phantom data. The quantitative and qualitative performance evaluation results show that the proposed 3D stereo reconstruction and deformation recovery methods achieve submillimeter accuracy. The force–displacement model also provides accurate estimates of the exerted forces.

Conclusions: A novel approach for tissue deformation recovery has been proposed based on reliable quasi-dense stereo correspondences. The proposed framework does not rely on additional equipment, allowing seamless integration with the existing surgical workflow. The performance evaluation analysis shows the potential clinical value of the technique.
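The force–displacement mapping mentioned in the results can be sketched in its simplest linear-elastic form. The stiffness value below is an arbitrary assumption for illustration, not the calibrated model from the paper:

```python
def estimated_force(displacement_mm, stiffness_n_per_mm=0.8):
    """Toy linear force-displacement model: once deformation recovery
    yields a local surface displacement (mm), the tool-tissue force (N)
    is estimated as F = k * x. The stiffness k here is an assumed
    illustrative value, not a calibrated tissue parameter."""
    return stiffness_n_per_mm * displacement_mm

print(estimated_force(2.5))  # 2.0 N under the assumed stiffness
```

A practical system would calibrate the stiffness per tissue type (or fit a nonlinear model), but the principle of inferring force from vision-recovered displacement is the same.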

Journal article

Ye M, Giannarou S, Meining A, Yang G-Z et al., 2016, Online tracking and retargeting with applications to optical biopsy in gastrointestinal endoscopic examinations, Medical Image Analysis, Vol: 30, Pages: 144-157, ISSN: 1361-8415

With recent advances in biophotonics, techniques such as narrow band imaging, confocal laser endomicroscopy, fluorescence spectroscopy, and optical coherence tomography, can be combined with normal white-light endoscopes to provide in vivo microscopic tissue characterisation, potentially avoiding the need for offline histological analysis. Despite the advantages of these techniques to provide online optical biopsy in situ, it is challenging for gastroenterologists to retarget the optical biopsy sites during endoscopic examinations. This is because optical biopsy does not leave any mark on the tissue. Furthermore, typical endoscopic cameras only have a limited field-of-view and the biopsy sites often enter or exit the camera view as the endoscope moves. In this paper, a framework for online tracking and retargeting is proposed based on the concept of tracking-by-detection. An online detection cascade is proposed where a random binary descriptor using Haar-like features is included as a random forest classifier. For robust retargeting, we have also proposed a RANSAC-based location verification component that incorporates shape context. The proposed detection cascade can be readily integrated with other temporal trackers. Detailed performance evaluation on in vivo gastrointestinal video sequences demonstrates the performance advantage of the proposed method over the current state-of-the-art.

Journal article

Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, Poorten EV et al., 2016, Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions (vol 11, pg 553, 2016), International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 847-847, ISSN: 1861-6410

Journal article

Kassahun Y, Yu B, Tibebu AT, Stoyanov D, Giannarou S, Metzen JH, Vander Poorten E et al., 2015, Surgical robotics beyond enhanced dexterity instrumentation: a survey of machine learning techniques and their role in intelligent and autonomous surgical actions, International Journal of Computer Assisted Radiology and Surgery, Vol: 11, Pages: 553-568, ISSN: 1861-6410

Purpose: Advances in technology and computing play an increasingly important role in the evolution of modern surgical techniques and paradigms. This article reviews the current role of machine learning (ML) techniques in the context of surgery with a focus on surgical robotics (SR). Also, we provide a perspective on the future possibilities for enhancing the effectiveness of procedures by integrating ML in the operating room.

Methods: The review is focused on ML techniques directly applied to surgery, surgical robotics, surgical training and assessment. The widespread use of ML methods in diagnosis and medical image computing is beyond the scope of the review. Searches were performed on PubMed and IEEE Xplore using combinations of keywords: ML, surgery, robotics, surgical and medical robotics, skill learning, skill analysis and learning to perceive.

Results: Studies making use of ML methods in the context of surgery are increasingly being reported. In particular, there is an increasing interest in using ML for developing tools to understand and model surgical skill and competence or to extract surgical workflow. Many researchers are beginning to integrate this understanding into the control of recent surgical robots and devices.

Conclusion: ML is an expanding field. It is popular as it allows efficient processing of vast amounts of data for interpretation and real-time decision making. Already widely used in imaging and diagnosis, it is believed that ML will also play an important role in surgery and interventional treatments. In particular, ML could become a game changer in the conception of cognitive surgical robots. Such robots endowed with cognitive skills would assist the surgical team also on a cognitive level, such as possibly lowering the mental load of the team. For example, ML could help extract surgical skill, learned through demonstration by human experts, and could transfer this to robotic skills. Such intelligent surgical assistance would significantly surpass the st

Journal article

Keir GJ, Nair A, Giannarou S, Yang G-Z, Oldershaw P, Wort SJ, MacDonald P, Hansell DM, Wells AU et al., 2015, Pulmonary vasospasm in systemic sclerosis: noninvasive techniques for detection, Pulmonary Circulation, Vol: 5, Pages: 498-505, ISSN: 2045-8940

In a subgroup of patients with systemic sclerosis (SSc), vasospasm affecting the pulmonary circulation may contribute to worsening respiratory symptoms, including dyspnea. Noninvasive assessment of pulmonary blood flow (PBF), utilizing inert-gas rebreathing (IGR) and dual-energy computed-tomography pulmonary angiography (DE-CTPA), may be useful for identifying pulmonary vasospasm. Thirty-one participants (22 SSc patients and 9 healthy volunteers) underwent PBF assessment with IGR and DE-CTPA at baseline and after provocation with a cold-air inhalation challenge (CACh). Before the study investigations, participants were assigned to subgroups: group A included SSc patients who reported increased breathlessness after exposure to cold air (n = 11), group B included SSc patients without cold-air sensitivity (n = 11), and group C patients included the healthy volunteers. Median change in PBF from baseline was compared between groups A, B, and C after CACh. Compared with groups B and C, in group A there was a significant decline in median PBF from baseline at 10 minutes (−10%; range: −52.2% to 4.0%; P < 0.01), 20 minutes (−17.4%; −27.9% to 0.0%; P < 0.01), and 30 minutes (−8.5%; −34.4% to 2.0%; P < 0.01) after CACh. There was no significant difference in median PBF change between groups B or C at any time point and no change in pulmonary perfusion on DE-CTPA. Reduction in pulmonary blood flow following CACh suggests that pulmonary vasospasm may be present in a subgroup of patients with SSc and may contribute to worsening dyspnea on exposure to cold.

Journal article

Shen M, Giannarou S, Yang G-Z, 2015, Robust camera localisation with depth reconstruction for bronchoscopic navigation, International Journal of Computer Assisted Radiology and Surgery, Vol: 10, Pages: 801-813, ISSN: 1861-6410

Purpose: Bronchoscopy is a standard technique for airway examination, providing a minimally invasive approach for both diagnosis and treatment of pulmonary diseases. To target lesions identified pre-operatively, it is necessary to register the location of the bronchoscope to the CT bronchial model during the examination. Existing vision-based techniques rely on the registration between virtually rendered endobronchial images and videos based on image intensity or surface geometry. However, intensity-based approaches are sensitive to illumination artefacts, while gradient-based approaches are vulnerable to surface texture.

Methods: In this paper, depth information is employed in a novel way to achieve continuous and robust camera localisation. Surface shading has been used to recover depth from endobronchial images. The pose of the bronchoscopic camera is estimated by maximising the similarity between the depth recovered from a video image and that captured from a virtual camera projection of the CT model. The normalised cross-correlation and mutual information have both been used and compared for the similarity measure.

Results: The proposed depth-based tracking approach has been validated on both phantom and in vivo data. It outperforms the existing vision-based registration methods resulting in smaller pose estimation error of the bronchoscopic camera. It is shown that the proposed approach is more robust to illumination artefacts and surface texture and less sensitive to camera pose initialisation.

Conclusions: A reliable camera localisation technique has been proposed based on depth information for bronchoscopic navigation. Qualitative and quantitative performance evaluations show the clinical value of the proposed framework.
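The normalised cross-correlation similarity that drives the pose search can be sketched as follows. This is an illustrative stand-alone helper under assumed inputs (two depth maps as arrays), not the authors' implementation:

```python
import numpy as np

def ncc(depth_video, depth_virtual):
    """Normalised cross-correlation between two depth maps.
    Returns a value in [-1, 1]; values near 1 indicate that the
    video-derived depth matches the virtual CT rendering."""
    a = np.asarray(depth_video, dtype=float).ravel()
    b = np.asarray(depth_virtual, dtype=float).ravel()
    a = a - a.mean()                     # remove mean depth (offset invariance)
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:
        return 0.0                       # degenerate: a constant depth map
    return float(np.dot(a, b) / denom)

# A depth map correlates perfectly with a gain-scaled copy of itself,
# which is why NCC tolerates global scale ambiguity in shape-from-shading.
d = np.random.rand(64, 64)
print(ncc(d, 2.5 * d))  # ~1.0
```

During tracking, the camera pose would be optimised to maximise this score between the recovered depth and the CT-rendered depth at the candidate pose.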

Journal article

Ye M, 2015, Method and Apparatus, WO/2015/033147

Patent

Ye M, Johns E, Giannarou S, Yang G-Z et al., 2014, Online Scene Association for Endoscopic Navigation, 17th International Conference MICCAI 2014, Publisher: Springer International Publishing, Pages: 316-323, ISSN: 0302-9743

Endoscopic surveillance is a widely used method for monitoring abnormal changes in the gastrointestinal tract such as Barrett's esophagus. Direct visual assessment, however, is both time consuming and error prone, as it involves manual labelling of abnormalities on a large set of images. To assist surveillance, this paper proposes an online scene association scheme to summarise an endoscopic video into scenes, on-the-fly. This provides scene clustering based on visual contents, and also facilitates topological localisation during navigation. The proposed method is based on tracking and detection of visual landmarks on the tissue surface. A generative model is proposed for online learning of pairwise geometrical relationships between landmarks. This enables robust detection of landmarks and scene association under tissue deformation. Detailed experimental comparison and validation have been conducted on in vivo endoscopic videos to demonstrate the practical value of our approach.

Conference paper

Giannarou S, Gruijthuijsen C, Yang G-Z, 2014, Modeling and Recognition of Ongoing Surgical Gestures in TAVI Procedures

Conference paper

Shi C, Giannarou S, Lee S-L, Yang G-Z et al., 2014, Simultaneous Catheter and Environment Modeling for Trans-catheter Aortic Valve Implantation

Conference paper

Ye M, Giannarou S, Patel N, Teare J, Yang G-Z et al., 2013, Pathological Site Retargeting under Tissue Deformation Using Geometrical Association and Tracking, 16th International Conference MICCAI 2013, Publisher: Springer Berlin Heidelberg, Pages: 67-74, ISSN: 0302-9743

Recent advances in microscopic detection techniques include fluorescence spectroscopy, fibred confocal microscopy and optical coherence tomography. These methods can be integrated with miniaturised probes to assist endoscopy, thus enabling diseases to be detected at an early and pre-invasive stage, forgoing the need for histopathological samples and off-line analysis. Since optical-based biopsy does not leave visible marks after sampling, it is important to track the biopsy sites to enable accurate retargeting and subsequent serial examination. In this paper, a novel approach is proposed for pathological site retargeting in gastroscopic examinations. The proposed method is based on affine deformation modelling with geometrical association combined with cascaded online learning and tracking. It provides online in vivo retargeting, and is able to track pathological sites in the presence of tissue deformation. It is also robust to partial occlusions and can be applied to a range of imaging probes including confocal laser endomicroscopy.
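The affine deformation model underlying the retargeting scheme can be illustrated with a generic least-squares fit from point correspondences. This is a textbook sketch, not the authors' cascaded learning and tracking pipeline; the function names are hypothetical:

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 2x3 affine transform mapping src points onto dst points.

    src, dst: (N, 2) arrays of corresponding 2D points, N >= 3.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    A = np.hstack([src, np.ones((len(src), 1))])  # homogeneous coords, (N, 3)
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)   # (3, 2) least-squares solution
    return M.T                                    # conventional 2x3 affine matrix

def apply_affine(M, pts):
    """Map (N, 2) points through a 2x3 affine matrix."""
    pts = np.asarray(pts, dtype=float)
    return pts @ M[:, :2].T + M[:, 2]
```

Under this model, a biopsy site marked in a reference frame can be re-projected into the current frame by applying the transform fitted to tracked landmark correspondences around the site.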

Conference paper

Elmikaty M, Stathaki T, Kimber P, Giannarou Set al., 2013, A novel two-level shape descriptor for pedestrian detection, SSPD 2012

The demand for pedestrian detection and tracking algorithms is rapidly increasing, with applications in security systems, human-computer interaction and human activity analysis. A pedestrian is defined here as a person standing in an upright position. Previous work has used various types of image descriptors to detect humans. However, although the existing approaches exhibit a low misdetection rate, they produce a high rate of false alarms in the case of complex image backgrounds. In this work, a novel approach for pedestrian detection is proposed, based on the combined use of two object detectors with the aim of reducing the false alarm rate of the individual detectors: the Histogram of Oriented Gradients (HOG) and a Shape Context based object detector (SC). Preliminary results are very encouraging and clearly demonstrate the ability of the proposed system to reduce the number of false alarms without a significant increase in processing time.
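The fusion idea, accepting a detection window only when both detectors agree, can be sketched as follows; thresholds, score ranges and names are hypothetical, as the abstract does not specify the fusion rule:

```python
def fused_decision(hog_score, sc_score, hog_thresh=0.5, sc_thresh=0.5):
    """Accept a detection window only when both detectors fire.

    Each individual detector may pass windows the other rejects, so the
    conjunction lowers the false-alarm rate at some cost in recall.
    """
    return hog_score >= hog_thresh and sc_score >= sc_thresh
```

A complementary detector pair helps most when the two descriptors fail on different background clutter, so their false alarms rarely coincide on the same window.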

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
