Research in surgical robotics has an established track record at Imperial College, and a number of research and commercial surgical robot platforms have been developed over the years. The Hamlyn Centre champions technological innovation and clinical adoption of robotic, minimally invasive surgery. We work in partnership with major industrial leaders in medical devices and surgical robots, as well as developing our own platforms such as i-Snake® and Micro-IGES. The da Vinci surgical robot is used extensively for endoscopic radical prostatectomy, hiatal hernia surgery, and low pelvic and rectal surgery, and in 2003 St Mary's Hospital carried out its first Totally Endoscopic Robotic Coronary Artery Bypass (TECAB).

The major focus of the Hamlyn Centre is to develop robotic technologies that will transform conventional minimally invasive surgery, explore new ways of empowering robots with human intelligence, and develop miniature 'microbots' with integrated sensing and imaging for targeted therapy and treatment. We work closely with both industrial and academic partners on open platforms such as the DVRK, RAVEN and KUKA. The Centre also has the important mission of driving down the costs associated with robotic surgery in order to make the technology more accessible, portable, and affordable, allowing it to be fully integrated with normal surgical workflows so as to benefit a much wider patient population.

The Hamlyn Centre currently chairs the UK Robotics and Autonomous Systems (UK-RAS) Network. The mission of the Network is to provide academic leadership in Robotics and Autonomous Systems (RAS), expand collaboration with industry, and integrate and coordinate activities across the UK Engineering and Physical Sciences Research Council (EPSRC) funded RAS capital facilities and Centres for Doctoral Training (CDTs).


Publications

  • Journal article
    Barbot A, Wales D, Yeatman E, Yang GZ et al., 2021, Microfluidics at fibre tip for nanolitre delivery and sampling, Advanced Science, Vol: 8, Pages: 1-10, ISSN: 2198-3844

    Delivery and sampling of nanolitre volumes of liquid can benefit new invasive surgical procedures. However, the dead volume and the difficulty of generating constant-pressure flow limit the use of small tubes such as capillaries. This work demonstrates sub-millimetre microfluidic chips assembled directly on the tip of a bundle of two hydrophobically coated 100 μm capillaries to deliver nanolitre droplets in liquid environments. Droplets are created in a specially designed nanopipette and propelled by gas through the capillary to the microfluidic chip, where a passive valve mechanism separates liquid from gas, allowing their delivery. By adjusting the driving pressure and microfluidic geometry, we demonstrate both partial and full delivery of 10 nanolitre droplets with a maximum error of 0.4 nanolitres, as well as sampling from the environment. This system will enable drug delivery and sampling with minimally invasive probes, facilitating continuous liquid biopsy for disease monitoring and in-vivo drug screening.
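
    As a rough illustration of the constant-pressure constraint mentioned above, the sketch below applies the Hagen-Poiseuille relation for laminar flow in a circular capillary. The pressure, capillary length, and viscosity values are assumptions chosen for illustration, not figures from the paper.

        import math

        def poiseuille_flow_nl_per_s(pressure_pa, radius_m, length_m,
                                     viscosity_pa_s=1.0e-3):
            """Volumetric flow rate (nL/s) for laminar flow in a circular tube."""
            q_m3_per_s = (math.pi * radius_m**4 * pressure_pa
                          / (8 * viscosity_pa_s * length_m))
            return q_m3_per_s * 1e12  # convert m^3/s to nL/s

        # Assumed example: water driven at 10 kPa through a 100 um diameter,
        # 0.5 m long capillary (radius 50 um).
        q = poiseuille_flow_nl_per_s(10e3, 50e-6, 0.5)
        print(f"{q:.0f} nL/s")  # roughly 49 nL/s

    At such a rate a 10 nL dose passes in a fraction of a second, which suggests why timing a constant-pressure flow alone is hard to control and why a passive valve at the chip is useful for metering.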

  • Journal article
    Gao A, Murphy RR, Chen W, Dagnino G, Fischer P, Gutierrez MG, Kundrat D, Nelson BJ, Shamsudhin N, Su H, Xia J, Zemmar A, Zhang D, Wang C, Yang G-Zet al., 2021,

    Progress in robotics for combating infectious diseases

    , SCIENCE ROBOTICS, Vol: 6, ISSN: 2470-9476
  • Conference paper
    Zhang D, Wang R, Lo B, 2021, Surgical gesture recognition based on bidirectional multi-layer independently RNN with explainable spatial feature extraction, IEEE International Conference on Robotics and Automation (ICRA) 2021, Publisher: IEEE

    Minimally invasive surgery mainly consists of a series of sub-tasks, which can be decomposed into basic gestures or contexts. As a prerequisite for autonomous operation, surgical gesture recognition can assist motion planning and decision-making, and build up context-aware knowledge to improve the quality of surgical robot control. In this work, we aim to develop an effective surgical gesture recognition approach with an explainable feature extraction process. A Bidirectional Multi-Layer independently RNN (BML-indRNN) model is proposed in this paper, while spatial feature extraction is implemented via fine-tuning of a Deep Convolutional Neural Network (DCNN) model constructed on the VGG architecture. To eliminate the black-box effects of the DCNN, Gradient-weighted Class Activation Mapping (Grad-CAM) is employed. It provides explainable results by showing the regions of the surgical images that have a strong relationship with the surgical gesture classification results. The proposed method was evaluated on the suturing task with data obtained from the publicly available JIGSAWS database. Comparative studies were conducted to verify the proposed framework. Results indicated that the testing accuracy for the suturing task based on our proposed method is 87.13%, which outperforms most state-of-the-art algorithms.
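
    The Grad-CAM step described above can be sketched in a few lines of PyTorch. This is a minimal illustration rather than the authors' code: it assumes a torchvision VGG16 backbone and computes a class-specific heatmap from the last convolutional feature maps.

        import torch
        from torchvision import models

        # Assumed backbone; the paper fine-tunes a VGG-based DCNN on surgical images.
        model = models.vgg16(weights="IMAGENET1K_V1").eval()

        def grad_cam(image, class_idx):
            """Heatmap of the regions of `image` that support class `class_idx`.

            image: (1, 3, 224, 224) normalised tensor (assumed preprocessing).
            """
            feats = model.features(image)                   # conv maps, (1, 512, 7, 7)
            pooled = torch.flatten(model.avgpool(feats), 1)
            score = model.classifier(pooled)[0, class_idx]  # class logit
            grads, = torch.autograd.grad(score, feats)      # d(score)/d(feature maps)
            weights = grads.mean(dim=(2, 3), keepdim=True)  # channel-pooled gradients
            cam = torch.relu((weights * feats).sum(dim=1))  # weighted combination
            return (cam / (cam.max() + 1e-8)).detach()      # normalised to [0, 1]

        # Usage: heatmap = grad_cam(frame_tensor, predicted_gesture_idx), then
        # upsample to the frame resolution and overlay on the surgical image.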

  • Journal article
    Gu X, Guo Y, Deligianni F, Lo B, Yang G-Z et al., 2021, Cross-subject and cross-modal transfer for generalized abnormal gait pattern recognition, IEEE Transactions on Neural Networks and Learning Systems, Vol: 32, Pages: 546-560, ISSN: 1045-9227

    For abnormal gait recognition, pattern-specific features indicating abnormalities are interleaved with the subject-specific differences representing biometric traits. Deep representations are, therefore, prone to overfitting, and the models derived cannot generalize well to new subjects. Furthermore, there is limited availability of abnormal gait data obtained from precise Motion Capture (Mocap) systems because of regulatory issues and the slow adoption of new technologies in health care. On the other hand, data captured from markerless vision sensors or wearable sensors can be obtained in home environments, but noise from such devices may prevent the effective extraction of relevant features. To address these challenges, we propose a cascade of deep architectures that can encode cross-modal and cross-subject transfer for abnormal gait recognition. Cross-modal transfer maps noisy data obtained from RGBD and wearable sensors to accurate 4-D representations of the lower limb and joints obtained from the Mocap system. Subsequently, cross-subject transfer allows disentangling subject-specific from abnormal pattern-specific gait features based on a multi-encoder autoencoder architecture. To validate the proposed methodology, we obtained multimodal gait data from a multi-camera motion capture system along with synchronized recordings of electromyography (EMG) data and 4-D skeleton data extracted from a single RGBD camera. Classification accuracy was improved significantly in both Mocap and noisy modalities.
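
    The disentanglement step can be illustrated with a minimal multi-encoder autoencoder sketch in PyTorch. All layer sizes and names here are assumptions for illustration; the paper's architecture operates on gait sequences rather than flat feature vectors.

        import torch
        import torch.nn as nn

        class MultiEncoderAE(nn.Module):
            """Two encoders split a gait embedding into subject and pattern codes."""
            def __init__(self, in_dim=96, subj_dim=16, patt_dim=16):
                super().__init__()
                self.enc_subject = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                                 nn.Linear(64, subj_dim))
                self.enc_pattern = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                                 nn.Linear(64, patt_dim))
                self.decoder = nn.Sequential(nn.Linear(subj_dim + patt_dim, 64),
                                             nn.ReLU(), nn.Linear(64, in_dim))

            def forward(self, x):
                z_subj, z_patt = self.enc_subject(x), self.enc_pattern(x)
                recon = self.decoder(torch.cat([z_subj, z_patt], dim=-1))
                return recon, z_subj, z_patt

        model = MultiEncoderAE()
        x = torch.randn(8, 96)  # a batch of flattened gait features (assumed shape)
        recon, z_subj, z_patt = model(x)
        loss_recon = nn.functional.mse_loss(recon, x)
        # In training, auxiliary losses (e.g. predicting abnormality from z_patt
        # only, and subject identity from z_subj only) would push the two kinds
        # of information into separate codes.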

  • Journal article
    Kassanos P, Seichepine F, Yang G-Z, 2021, A comparison of front-end amplifiers for tetrapolar bioimpedance measurements, IEEE Transactions on Instrumentation and Measurement, Vol: 70, ISSN: 0018-9456

