The Centre has a long history of developing new techniques for medical imaging (particularly magnetic resonance imaging), transforming it from a primarily diagnostic modality into an interventional and therapeutic platform. This is facilitated by the Centre's strong engineering background in building practical imaging and image analysis platforms, as well as by advances in minimal access and robot-assisted surgery. Hamlyn has a strong tradition of pursuing basic science and theoretical research, with a clear focus on clinical translation.

In response to the current paradigm shift and the clinical demand for bringing cellular and molecular imaging modalities into an in vivo, in situ setting during surgical intervention, our recent research has also focussed on novel biophotonics platforms that can be used for real-time tissue characterisation, functional assessment and intraoperative guidance during minimally invasive surgery. Examples include SMART confocal laser endomicroscopy, time-resolved fluorescence spectroscopy and flexible fluorescence lifetime imaging (FLIM) catheters.


Publications

  • Journal article
    Davids J, Makariou S-G, Ashrafian H, Darzi A, Marcus HJ, Giannarou S, et al., 2021, Automated vision-based microsurgical skill analysis in neurosurgery using deep learning: development and preclinical validation, World Neurosurgery, Vol: 149, Pages: E669-E686, ISSN: 1878-8750
  • Journal article
    Dryden SD, Anastasova S, Satta G, Thompson AJ, Leff DR, Darzi A, et al., 2021, Rapid uropathogen identification using surface enhanced Raman spectroscopy active filters, Scientific Reports, Vol: 11, Pages: 1-10, ISSN: 2045-2322

    Urinary tract infection is one of the most common bacterial infections, leading to increased morbidity, mortality and societal costs. Current diagnostics exacerbate this problem due to an inability to provide timely pathogen identification. Surface enhanced Raman spectroscopy (SERS) has the potential to overcome these issues by providing immediate bacterial classification. To date, achieving accurate classification has required technically complicated processes to capture pathogens, which has precluded the integration of SERS into rapid diagnostics. This work demonstrates that gold-coated membrane filters capture and aggregate bacteria, separating them from urine, while also providing Raman signal enhancement. An optimal gold coating thickness of 50 nm was demonstrated, and the diagnostic performance of the SERS-active filters was assessed using phantom urine infection samples at clinically relevant concentrations (10⁵ CFU/ml). Infected and uninfected (control) samples were identified with an accuracy of 91.1%. Amongst infected samples only, classification of three bacteria (Escherichia coli, Enterococcus faecalis, Klebsiella pneumoniae) was achieved at a rate of 91.6%. (A generic spectral-classification sketch illustrating this kind of task follows the publication list.)

  • Journal article
    Tukra S, Marcus HJ, Giannarou S, 2021, See-through vision with unsupervised scene occlusion reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: PP

    One of the greatest challenges in Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obfuscate anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised, end-to-end deep learning framework, based on fully convolutional neural networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative, densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal and adversarial loss terms to generate high-fidelity image reconstructions. Advancing the state of the art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in vivo MIS video data, as well as natural scenes, over a range of occlusion-to-image (OIR) ratios. (A generic sketch of a 3D partial convolution follows the publication list.)

  • Journal article
    Cartucho J, Tukra S, Li Y, Elson DS, Giannarou S, et al., 2021, VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Vol: 9, Pages: 331-338, ISSN: 2168-1163

    Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real time. The main problem, however, is that training or testing such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with a special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, including one in which the generated data has been used to train and evaluate state-of-the-art 3D reconstruction algorithms. By generating realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward in robotic surgery. (A generic Blender render-pass sketch illustrating this kind of ground-truth export follows the publication list.)

  • Journal article
    Kedrzycki MS, Elson DS, Leff DR, 2020, ASO author reflections: fluorescence-guided sentinel node biopsy for breast cancer, Annals of Surgical Oncology, Vol: 28, Pages: 3749-3750, ISSN: 1068-9265
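
The SERS study above reports separating bacteria from urine on gold-coated filters and classifying the enhanced Raman spectra (91.1% for infected versus control, 91.6% across three species). Purely as an illustration of how labelled Raman spectra can be classified, the sketch below trains a generic PCA + SVM pipeline; the file names, preprocessing and model choice are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: a generic PCA + SVM classifier for labelled Raman
# spectra, loosely analogous to the infected/uninfected discrimination task in
# the SERS paper. File names, preprocessing and model choice are assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical data: one row per spectrum, one column per wavenumber bin.
spectra = np.load("sers_spectra.npy")   # shape (n_samples, n_wavenumbers)
labels = np.load("sers_labels.npy")     # 0 = control, 1 = infected

# Crude baseline correction: subtract each spectrum's minimum intensity.
spectra = spectra - spectra.min(axis=1, keepdims=True)

clf = make_pipeline(
    StandardScaler(),             # normalise each wavenumber bin
    PCA(n_components=20),         # compress correlated Raman bands
    SVC(kernel="rbf", C=1.0),     # non-linear decision boundary
)

# 5-fold cross-validated accuracy, analogous to the headline figures reported.
scores = cross_val_score(clf, spectra, labels, cv=5)
print(f"Mean cross-validated accuracy: {scores.mean():.3f}")
```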
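The see-through vision paper above names a "3D partial convolution" as the mechanism for fusing spatial and temporal information while ignoring occluded pixels. The layer below is a minimal, assumption-laden PyTorch re-implementation of the general partial-convolution idea (Liu et al., 2018) extended to 3D; it is not the authors' architecture.

```python
# Minimal, illustrative 3D partial convolution in PyTorch, in the spirit of the
# "3D partial convolution" named in the abstract above. Generic sketch of the
# partial-convolution idea extended to 3D; layer details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PartialConv3d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # Fixed all-ones kernel used only to count valid (unoccluded) voxels.
        self.register_buffer(
            "mask_kernel", torch.ones(1, 1, kernel_size, kernel_size, kernel_size)
        )
        self.window_size = kernel_size ** 3

    def forward(self, x, mask):
        # x: (B, C, T, H, W) video features; mask: (B, 1, T, H, W), 1 = valid.
        with torch.no_grad():
            valid_count = F.conv3d(
                mask, self.mask_kernel,
                stride=self.conv.stride, padding=self.conv.padding,
            )
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1, 1)
        # Re-normalise by the fraction of valid voxels under each window,
        # so occluded regions do not dilute the response.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = (out - bias) * scale + bias
        # The updated mask marks windows that saw at least one valid voxel.
        new_mask = (valid_count > 0).float()
        return out * new_mask, new_mask


# Example: a clip of 5 frames with a square occlusion zeroed out of the mask.
frames = torch.randn(1, 3, 5, 64, 64)
mask = torch.ones(1, 1, 5, 64, 64)
mask[..., 20:40, 20:40] = 0
layer = PartialConv3d(3, 16)
features, updated_mask = layer(frames, mask)
print(features.shape, updated_mask.mean().item())
```

The design point illustrated here is the re-normalisation by the fraction of valid voxels under each kernel window, so that responses over partially occluded regions are not attenuated, while the propagated mask records which outputs saw any valid input at all.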
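VisionBlender, described above, wraps Blender's render passes in a user interface for exporting ground truth such as depth, surface normals and segmentation masks. Purely to illustrate the kind of export it automates, the sketch below enables a few passes and routes them to EXR files through Blender's Python API (bpy); the output path, scene assumptions and choice of passes are illustrative, and this is not VisionBlender's own code or interface.

```python
# Illustrative sketch only: enabling Blender render passes and writing them out
# via the compositor, i.e. the kind of ground-truth export that VisionBlender
# wraps in a user interface. Assumes the open .blend file already contains a
# camera and scene geometry; socket names follow recent Blender releases.
import bpy

scene = bpy.context.scene
view_layer = bpy.context.view_layer

# Ask the renderer for the auxiliary passes used as ground truth.
view_layer.use_pass_z = True             # per-pixel depth
view_layer.use_pass_normal = True        # surface normals
view_layer.use_pass_object_index = True  # object indices -> segmentation masks

# Route the passes to EXR files through the compositor.
scene.use_nodes = True
tree = scene.node_tree
tree.nodes.clear()
render_layers = tree.nodes.new("CompositorNodeRLayers")
file_out = tree.nodes.new("CompositorNodeOutputFile")
file_out.base_path = "//ground_truth/"   # hypothetical output folder
file_out.format.file_format = "OPEN_EXR"

# One file slot per pass, each linked to the matching render-layer output.
file_out.file_slots.clear()
for pass_name in ("Image", "Depth", "Normal", "IndexOB"):
    file_out.file_slots.new(pass_name)
    tree.links.new(render_layers.outputs[pass_name], file_out.inputs[pass_name])

# Render the current frame and write one EXR per pass.
bpy.ops.render.render(write_still=True)
```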

