Research in surgical robotics has an established track record at Imperial College, and a number of research and commercial surgical robot platforms have been developed over the years. The Hamlyn Centre is a champion for technological innovation and clinical adoption of robotic, minimally invasive surgery. We work in partnership with major industrial leaders in medical devices and surgical robots, as well as developing our own platforms such as the i-Snake® and Micro-IGES platforms. The da Vinci surgical robot is used extensively for endoscopic radical prostatectomy, hiatal hernia surgery, and low pelvic and rectal surgery, and in 2003, St Mary's Hospital carried out its first Totally Endoscopic Robotic Coronary Artery Bypass (TECAB).

The major focus of the Hamlyn Centre is to develop robotic technologies that will transform conventional minimally invasive surgery, explore new ways of empowering robots with human intelligence, and develop miniature 'microbots' with integrated sensing and imaging for targeted therapy and treatment. We work closely with both industrial and academic partners on open platforms such as the dVRK, RAVEN and KUKA. The Centre also has the important mission of driving down costs associated with robotic surgery in order to make the technology more accessible, portable, and affordable. This will allow it to be fully integrated with normal surgical workflows so as to benefit a much wider patient population.

The Hamlyn Centre currently chairs the UK Robotics and Autonomous Systems (UK-RAS) Network. The mission of the Network is to provide academic leadership in Robotics and Autonomous Systems (RAS), expand collaboration with industry, and integrate and coordinate activities across the UK Engineering and Physical Sciences Research Council (EPSRC) funded RAS capital facilities and Centres for Doctoral Training (CDTs).

Publications

    Zhang L, Ye M, Giataganas P, Hughes M, Bradu A, Podoleanu A, Yang G-Z et al., 2017, From Macro to Micro: Autonomous Multiscale Image Fusion for Robotic Surgery, IEEE Robotics & Automation Magazine, Vol: 24, Pages: 63-72, ISSN: 1070-9932
    Zhang L, Ye M, Giataganas P, Hughes M, Yang GZ et al., 2017, Autonomous scanning for endomicroscopic mosaicing and 3D fusion, Proceedings - IEEE International Conference on Robotics and Automation, Pages: 3587-3593, ISSN: 1050-4729

    © 2017 IEEE. Robot-assisted minimally invasive surgery can benefit from the automation of common, repetitive or well-defined but ergonomically difficult tasks. One such task is the scanning of a pick-up endomicroscopy probe over a complex, undulating tissue surface to enhance the effective field-of-view through video mosaicing. In this paper, the da Vinci® surgical robot, through the dVRK framework, is used for autonomous scanning and 2D mosaicing over a user-defined region of interest. To achieve the level of precision required for high-quality mosaic generation, which relies on sufficient overlap between consecutive image frames, visual servoing is performed using a combination of a tracking marker attached to the probe and the endomicroscopy images themselves. The resulting sub-millimetre accuracy of the probe motion allows for the generation of large mosaics with minimal intervention from the surgeon. Images are streamed from the endomicroscope and overlaid live onto the surgeon's view, while 2D mosaics are generated in real-time and fused into a 3D stereo reconstruction of the surgical scene, thus providing intuitive visualisation and fusion of the multi-scale images. The system therefore offers significant potential to enhance surgical procedures, by providing the operator with cellular-scale information over a larger area than could typically be achieved by manual scanning.
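The scanning behaviour described above, in which the probe is servoed between waypoints with small, bounded steps so that consecutive frames overlap enough for mosaicing, can be illustrated with a minimal sketch. This is not the authors' dVRK implementation; it is a hypothetical 2D proportional-control fragment (all function names, the gain, and the step bound are illustrative assumptions):

```python
import numpy as np

def servo_step(probe_xy, target_xy, gain=0.5, max_step_mm=0.2):
    """One proportional visual-servoing update toward the next waypoint.

    probe_xy, target_xy: 2D positions (mm) in the scan plane.
    The step is clamped to max_step_mm so that consecutive
    endomicroscopy frames retain enough overlap for mosaicing.
    """
    error = target_xy - probe_xy
    step = gain * error
    norm = np.linalg.norm(step)
    if norm > max_step_mm:
        step = step * (max_step_mm / norm)
    return probe_xy + step

def scan_raster(start_xy, waypoints, tol_mm=0.01, max_iters=1000):
    """Drive the probe through a user-defined sequence of waypoints,
    recording the path taken."""
    pos = np.asarray(start_xy, dtype=float)
    path = [pos.copy()]
    for wp in waypoints:
        wp = np.asarray(wp, dtype=float)
        for _ in range(max_iters):
            if np.linalg.norm(wp - pos) < tol_mm:
                break
            pos = servo_step(pos, wp)
            path.append(pos.copy())
    return np.array(path)
```

In the real system the "error" would come from the tracking marker and the endomicroscopy images rather than known planar coordinates, but the clamped proportional update conveys why sub-millimetre motion keeps frame-to-frame overlap high.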

    Andreu Perez J, Cao F, Hagras H, Yang G et al., 2016, A self-adaptive online brain machine interface of a humanoid robot through a general type-2 fuzzy inference system, IEEE Transactions on Fuzzy Systems, ISSN: 1941-0034

    This paper presents a self-adaptive general type-2 fuzzy inference system (GT2 FIS) for online motor imagery (MI) decoding to build a brain-machine interface (BMI) and navigate a bipedal humanoid robot in a real experiment, using EEG brain recordings only. GT2 FISs are applied to BMI for the first time in this study. We also account for several constraints commonly associated with BMI in real practice: 1) the maximum number of electroencephalography (EEG) channels is limited and fixed, 2) no possibility of performing repeated user training sessions, and 3) desirable use of unsupervised and low-complexity feature extraction methods. The novel learning method presented in this paper consists of a self-adaptive GT2 FIS that can both incrementally update its parameters and evolve (a.k.a. self-adapt) its structure via creation, fusion and scaling of the fuzzy system rules in an online BMI experiment with a real robot. The structure identification is based on an online GT2 Gath-Geva algorithm where every MI decoding class can be represented by multiple fuzzy rules (models). The effectiveness of the proposed method is demonstrated in a detailed BMI experiment where 15 untrained users were able to accurately interface with a humanoid robot, in a single thirty-minute experiment, using signals from six EEG electrodes only.
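To give a flavour of type-2 fuzzy inference, the following is a deliberately simplified interval type-2 sketch, not the authors' general type-2 system: each rule's antecedent returns a lower and upper firing grade from a Gaussian with uncertain width, and the two weighted means are averaged as a crude stand-in for Karnik-Mendel type reduction. All names and parameters here are illustrative assumptions:

```python
import numpy as np

def it2_gauss(x, mean, sigma_lo, sigma_hi):
    """Interval type-2 Gaussian membership with uncertain width:
    returns (lower, upper) membership grades for input x."""
    a = np.exp(-0.5 * ((x - mean) / sigma_lo) ** 2)
    b = np.exp(-0.5 * ((x - mean) / sigma_hi) ** 2)
    return min(a, b), max(a, b)

def it2_infer(x, rules):
    """rules: list of (mean, sigma_lo, sigma_hi, consequent).
    Fires each rule with an interval grade, then type-reduces by
    averaging the lower- and upper-bound weighted means."""
    lows, highs, cons = [], [], []
    for mean, s_lo, s_hi, c in rules:
        f_lo, f_hi = it2_gauss(x, mean, s_lo, s_hi)
        lows.append(f_lo)
        highs.append(f_hi)
        cons.append(c)
    lows, highs, cons = map(np.asarray, (lows, highs, cons))
    y_lo = np.dot(lows, cons) / max(lows.sum(), 1e-12)
    y_hi = np.dot(highs, cons) / max(highs.sum(), 1e-12)
    return 0.5 * (y_lo + y_hi)
```

The interval between the lower and upper grades is what lets type-2 systems absorb EEG-style uncertainty; the paper's system additionally evolves its rule base online, which this fragment does not attempt.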

    Giannarou S, Ye M, Gras G, Leibrandt K, Marcus HJ, Yang G-Z et al., 2016, Vision-based deformation recovery for intraoperative force estimation of tool-tissue interaction for neurosurgery

    Grammatikopoulou M, Leibrandt K, Yang G, 2016, Motor channelling for safe and effective dynamic constraints in Minimally Invasive Surgery, 2016 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

    Motor channelling is a concept to provide navigation and sensory feedback to operators in master-slave surgical setups. It is beneficial since the introduction of robotic surgery creates a physical separation between the surgeon and patient anatomy. Active Constraints/Virtual Fixtures are proposed which integrate Guidance and Forbidden Region Constraints into a unified control framework. The developed approach provides guidance and safe manipulation to improve precision and reduce the risk of inadvertent tissue damage. Online three-degree-of-freedom motion prediction and compensation of the target anatomy is performed to complement the master constraints. The presented Active Constraints concept is applied to two clinical scenarios: surface scanning for in situ medical imaging and vessel manipulation in cardiac surgery. The proposed motor channelling control strategy is implemented on the da Vinci Surgical System using the da Vinci Research Kit (dVRK) and its effectiveness is demonstrated through a detailed user study.
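The two constraint types named in the abstract can be sketched geometrically. Below is an illustrative Python fragment, not the dVRK implementation from the paper: a forbidden-region fixture projects any commanded tool position out of a spherical no-go zone, and a guidance fixture pulls the command toward a reference line. All names, the spherical zone, and the stiffness parameter are hypothetical simplifications:

```python
import numpy as np

def forbidden_region_constraint(p_cmd, center, radius):
    """Forbidden-region virtual fixture: if the commanded tool
    position p_cmd falls inside a spherical no-go zone, project it
    back onto the sphere surface; otherwise pass it through."""
    p_cmd = np.asarray(p_cmd, dtype=float)
    center = np.asarray(center, dtype=float)
    d = p_cmd - center
    dist = np.linalg.norm(d)
    if dist >= radius:
        return p_cmd
    if dist < 1e-12:  # degenerate: command at the centre, push out along +z
        d, dist = np.array([0.0, 0.0, 1.0]), 1.0
    return center + d * (radius / dist)

def guidance_constraint(p_cmd, line_pt, line_dir, stiffness=1.0):
    """Guidance virtual fixture: pull the commanded position toward
    its projection on a reference line, blended by stiffness in [0, 1]."""
    p_cmd = np.asarray(p_cmd, dtype=float)
    line_pt = np.asarray(line_pt, dtype=float)
    line_dir = np.asarray(line_dir, dtype=float)
    line_dir = line_dir / np.linalg.norm(line_dir)
    proj = line_pt + np.dot(p_cmd - line_pt, line_dir) * line_dir
    return p_cmd + stiffness * (proj - p_cmd)
```

In the paper these constraints are further combined with online motion prediction of the moving anatomy, so the centre and reference path would themselves be updated each cycle rather than fixed as here.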

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
