The Centre has a long history of developing new techniques for medical imaging (particularly in magnetic resonance imaging), transforming them from a primarily diagnostic modality into an interventional and therapeutic platform. This is facilitated by the Centre's strong engineering background in practical imaging and image analysis platform development, as well as advances in minimal access and robotic assisted surgery. Hamlyn has a strong tradition in pursuing basic sciences and theoretical research, with a clear focus on clinical translation.

In response to the current paradigm shift and clinical demand in bringing cellular and molecular imaging modalities to an in vivo – in situ setting during surgical intervention, our recent research has also been focussed on novel biophotonics platforms that can be used for real-time tissue characterisation, functional assessment, and intraoperative guidance during minimally invasive surgery. This includes, for example, SMART confocal laser endomicroscopy, time-resolved fluorescence spectroscopy and flexible FLIM catheters.



  • Journal article
    Zhang L, Ye M, Giataganas P, Hughes M, Bradu A, Podoleanu A, Yang G et al., 2017,

    From macro to micro: autonomous multiscale image fusion for robotic surgery

    , IEEE Robotics & Automation Magazine, Vol: 24, Pages: 63-72, ISSN: 1070-9932

    In recent years, minimally invasive robotic surgery has shown great promise for enhancing surgical precision and improving patient outcomes. Despite these advances, intraoperative tissue characterisation (such as the identification of cancerous tissue) still relies on traditional biopsy and histology, a process that is time-consuming and often disrupts the normal surgical workflow. To support effective intraoperative decision-making, emerging optical biopsy techniques, such as probe-based confocal laser endomicroscopy (pCLE) and optical coherence tomography (OCT), have been developed to provide real-time in vivo, in situ assessment of tissue microstructures. Clinical deployment of these techniques, however, requires large-area surveillance, from macro (mm/cm) to micro (µm) coverage, in order to differentiate underlying tissue structures. This article presents a real-time multiscale fusion scheme for robotic surgery. It demonstrates how the da Vinci surgical robot, used together with the da Vinci Research Kit, can be used for automated 2D scanning of pCLE/OCT probes, providing large-area tissue surveillance by image stitching. Open-loop control of the robot provides insufficient precision for probe scanning, so the motion is visually servoed using the live pCLE images (for lateral position) and OCT images (for axial position). The resulting tissue maps can then be fused in real time with a stereo reconstruction from the laparoscopic video, providing the surgeon with a multiscale 3D view of the operating site.
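    Image stitching of the kind described above depends on estimating the displacement between overlapping probe tiles. A minimal, illustrative sketch of one standard building block for this — phase-correlation registration of two tiles — is shown below; it is not the paper's implementation, and the function name and test images are hypothetical.

    ```python
    import numpy as np

    def phase_correlation(ref, mov):
        """Estimate the integer-pixel translation of `mov` relative to `ref`
        via phase correlation, a common primitive in image mosaicking."""
        F1 = np.fft.fft2(ref)
        F2 = np.fft.fft2(mov)
        # Normalized cross-power spectrum; its inverse FFT peaks at the shift.
        cross = np.conj(F1) * F2
        cross /= np.abs(cross) + 1e-12
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts into the signed range [-N/2, N/2).
        if dy > ref.shape[0] // 2:
            dy -= ref.shape[0]
        if dx > ref.shape[1] // 2:
            dx -= ref.shape[1]
        return int(dy), int(dx)
    ```

    In a mosaicking pipeline, the recovered per-tile shifts would be chained (or globally optimized) to place each pCLE/OCT frame into a common large-area map.
    
    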

  • Journal article
    Feng Y, Guo Z, Dong Z, Zhou X, Kwok K, Ernst S, Lee S et al., 2017,

    An efficient cardiac mapping strategy for radiofrequency catheter ablation with active learning

    , International Journal of Computer Assisted Radiology and Surgery, Vol: 12, Pages: 1199-1207, ISSN: 1861-6410

    Objective: A major challenge in the radiofrequency catheter ablation (RFCA) procedure is the voltage and activation mapping of the endocardium, given a limited mapping time. By learning from expert interventional electrophysiologists (operators), while also making use of an active-learning framework, guidance on performing cardiac voltage mapping can be provided to novice operators, or even directly to catheter robots. Methods: A learning-from-demonstration (LfD) framework, based upon previous cardiac mapping procedures performed by an expert operator, in conjunction with Gaussian process (GP) model-based active learning, was developed to efficiently perform voltage mapping over the right ventricle (RV). The GP model was used to output the next best mapping point, while being updated towards the underlying voltage data pattern as more mapping points are taken. A regularized particle filter was used to keep track of the kernel hyperparameter used by the GP. The travel cost of the catheter tip was incorporated to produce time-efficient mapping sequences. Results: The proposed strategy was validated on a simulated 2D grid-mapping task, with leave-one-out experiments on 25 retrospective datasets, in an RV phantom using the Stereotaxis Niobe® remote magnetic navigation system, and on a tele-operated catheter robot. In comparison to an existing geometry-based method, regression error was reduced, and was minimized at a faster rate over retrospective procedure data. Conclusion: A new method of catheter mapping guidance has been proposed based on LfD and active learning. The proposed method provides real-time guidance for the procedure, as well as a live evaluation of mapping sufficiency.
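    The core loop of GP-based active mapping with a travel-cost penalty can be sketched as below. This is an illustrative reconstruction under stated assumptions, not the authors' code: the acquisition rule (predictive standard deviation minus weighted travel distance), the fixed RBF kernel, and all names are assumptions made for the example.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def next_mapping_point(X_seen, y_seen, candidates, last_pos, travel_weight=0.1):
        """Pick the next mapping site: high predictive uncertainty,
        penalized by the catheter-tip travel distance from `last_pos`."""
        # Fixed kernel (optimizer=None) keeps the sketch deterministic;
        # the paper instead tracks kernel hyperparameters with a particle filter.
        gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0),
                                      alpha=1e-3, optimizer=None)
        gp.fit(X_seen, y_seen)
        _, std = gp.predict(candidates, return_std=True)
        travel = np.linalg.norm(candidates - last_pos, axis=1)
        score = std - travel_weight * travel
        return candidates[np.argmax(score)]
    ```

    Each acquired voltage sample would be appended to `X_seen`/`y_seen` and the loop repeated until the predictive uncertainty over the chamber falls below a sufficiency threshold.
    
    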

  • Journal article
    Vyas K, Hughes M, Leff D, Yang G et al., 2017,

    Methylene-blue aided rapid confocal laser endomicroscopy of breast cancer

    , Journal of Biomedical Optics, Vol: 22, ISSN: 1083-3668

    Breast-conserving surgery allows complete tumor resection while maintaining acceptable cosmesis for patients. Safe and rapid intraoperative margin assessment during the procedure is important to establish the completeness of tumor excision and to minimize the need for reoperation. Confocal laser endomicroscopy has demonstrated promise for real-time intraoperative margin assessment using acriflavine staining, but acriflavine is not approved for routine in-human use. We describe a custom high-speed line-scan confocal laser endomicroscopy (LS-CLE) system at 660 nm that enables high-resolution histomorphological imaging of breast tissue stained with methylene blue, an alternative fluorescent stain used for localizing sentinel nodes during breast surgery. Preliminary imaging results on freshly excised human breast tissue specimens are presented, demonstrating the potential of methylene-blue-aided rapid LS-CLE to determine the oncological status of surgical margins in vivo.

  • Book
    Balocco S, Zuluaga M, Zahnd G, Lee S, Demirci S et al., 2016,

    Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting, 1st Edition

    , Publisher: Elsevier

    Computing and Visualization for Intravascular Imaging and Computer-Assisted Stenting presents imaging, treatment, and computer-assisted techniques for diagnostic and intraoperative vascular imaging and stenting. These techniques offer increasingly useful information on vascular anatomy and function, and are poised to have a dramatic impact on the diagnosis, analysis, modeling, and treatment of vascular diseases. After setting out the technical and clinical challenges of vascular imaging and stenting, the book gives a concise overview of the basics before presenting state-of-the-art methods for solving these challenges. Readers will learn about the main challenges in endovascular procedures, along with new applications of intravascular imaging and the latest advances in computer-assisted stenting.

  • Conference paper
    Huang B, Vandini A, Hu Y, Lee S, Yang G et al., 2016,

    A Vision-guided Dual Arm Sewing System for Stent Graft Manufacturing

    , IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

    This paper presents an intelligent sewing system for personalized stent graft manufacturing, a challenging sewing task that is currently performed manually. Inspired by medical suturing robots, we have adopted a single-sided sewing technique using a curved needle to perform the task of sewing stents onto fabric. A motorized surgical needle driver was attached to a 7-DoF robot arm to manipulate the needle, with a second robot controlling the position of the mandrel. A learning-from-demonstration approach was used to program the robot to sew stents onto fabric. The demonstrated sewing skill was segmented into several phases, each of which was encoded with a Gaussian mixture model. Generalized sewing movements were then generated from these models and were used for task execution. During execution, a stereo vision system was adopted to guide the robots to adjust the learnt movements according to the needle pose. Two experiments are presented with this system, and the results show that it can robustly perform the sewing task and adapt to various needle poses. The accuracy of the sewing system was within 2 mm.
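    Encoding a demonstrated motion with a Gaussian mixture model and reproducing it by conditioning on time (Gaussian mixture regression) is a standard learning-from-demonstration recipe. The sketch below illustrates the idea in 1-D; it is a simplified stand-in, not the paper's system, and the function names and data are hypothetical.

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def fit_gmm(demos, n_components=3):
        """Encode demonstrated (time, position) samples with a GMM."""
        data = np.vstack(demos)  # columns: [t, x]
        return GaussianMixture(n_components=n_components, random_state=0).fit(data)

    def gmr(gmm, t):
        """Gaussian mixture regression: expected position x given time t."""
        means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
        # Responsibility of each component for the input t (shared constants cancel).
        num = np.array([w * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                        for w, m, c in zip(weights, means, covs)])
        h = num / num.sum()
        # Per-component conditional mean of x given t, then mixture average.
        cond = np.array([m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                         for m, c in zip(means, covs)])
        return float(h @ cond)
    ```

    In the sewing context, each segmented phase of the demonstration would get its own model, and the regressed trajectory would then be corrected online from the stereo-tracked needle pose.
    
    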

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
