The Centre has a long history of developing new techniques for medical imaging (particularly magnetic resonance imaging), helping to transform it from a primarily diagnostic modality into an interventional and therapeutic platform. This work is underpinned by the Centre's strong engineering expertise in practical imaging and image-analysis platform development, as well as by advances in minimal-access and robotic-assisted surgery. Hamlyn has a strong tradition of pursuing basic science and theoretical research, with a clear focus on clinical translation.

In response to the current paradigm shift and clinical demand for bringing cellular and molecular imaging modalities into an in vivo, in situ setting during surgical intervention, our recent research has also focussed on novel biophotonics platforms for real-time tissue characterisation, functional assessment, and intraoperative guidance during minimally invasive surgery. Examples include SMART confocal laser endomicroscopy, time-resolved fluorescence spectroscopy, and flexible FLIM catheters.

Publications

  • Journal article
    Davids J, Makariou S-G, Ashrafian H, Darzi A, Marcus HJ, Giannarou S, et al., 2021, Automated Vision-Based Microsurgical Skill Analysis in Neurosurgery Using Deep Learning: Development and Preclinical Validation, World Neurosurgery, Vol: 149, Pages: e669-e686

    BACKGROUND/OBJECTIVE: Technical skill acquisition is an essential component of neurosurgical training. Educational theory suggests that optimal learning and improvement in performance depend on the provision of objective feedback. The aim of this study was therefore to develop a vision-based framework, based on a novel representation of surgical tool motion and interactions, capable of automated and objective assessment of microsurgical skill. METHODS: Videos were obtained from 1 expert, 6 intermediate, and 12 novice surgeons performing arachnoid dissection in a validated clinical model using a standard operating microscope. A mask region-based convolutional neural network (Mask R-CNN) framework was used to segment the tools present within the operative field in each recorded video frame. Tool motion analysis was achieved using novel triangulation metrics. Performance of the framework in classifying skill levels was evaluated using the area under the curve (AUC) and accuracy. Objective measures of the surgeons' skill level were also compared using the Mann-Whitney U test, with P < 0.05 considered statistically significant. RESULTS: The AUC was 0.977 and the accuracy was 84.21%. A number of differences were found, including experts having a lower median dissector velocity (P = 0.0004; 190.38 ms⁻¹ vs. 116.38 ms⁻¹) and a smaller inter-tool tip distance (median 46.78 vs. 75.92; P = 0.0002) compared with novices. CONCLUSIONS: Automated and objective analysis of microsurgery is feasible using Mask R-CNN and a novel representation of tool motion and interaction. This may support technical skills training and assessment in neurosurgery.
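As a rough sketch of the kind of tool-motion metrics the abstract describes — median tip velocity and inter-tool tip distance — the following assumes per-frame tool-tip (x, y) coordinates have already been extracted from the segmentation masks. Function names and the coordinate representation are illustrative, not taken from the paper.

```python
import math

def median(values):
    """Median of a non-empty list of numbers."""
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 else 0.5 * (s[mid - 1] + s[mid])

def median_tip_velocity(tips, dt):
    """Median tip speed from per-frame (x, y) positions.

    tips: list of (x, y) tool-tip coordinates, one per video frame.
    dt:   inter-frame interval (e.g. in seconds).
    """
    speeds = [math.hypot(x1 - x0, y1 - y0) / dt
              for (x0, y0), (x1, y1) in zip(tips, tips[1:])]
    return median(speeds)

def inter_tip_distances(tips_a, tips_b):
    """Per-frame Euclidean distance between two tool tips."""
    return [math.hypot(xa - xb, ya - yb)
            for (xa, ya), (xb, yb) in zip(tips_a, tips_b)]
```

Group-level comparisons (expert vs. novice) would then be run on these per-video summary statistics, e.g. with a Mann-Whitney U test as in the study.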

  • Journal article
    Tukra S, Marcus HJ, Giannarou S, 2021, See-Through Vision with Unsupervised Scene Occlusion Reconstruction, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: PP

    Among the greatest challenges of Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obscure anatomical landmarks, reduce surgical vision, and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised, end-to-end deep learning framework, based on fully convolutional neural networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative, densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal, and adversarial loss terms for generating high-fidelity image reconstructions. Advancing the state of the art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location, and orientation. The proposed method has been validated on in vivo MIS video data, as well as natural scenes, across a range of occlusion-to-image ratios (OIR).
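The training objective described above is a weighted combination of five loss terms. A minimal sketch of that combination is below; the weight values are placeholders of ours (the abstract does not give the paper's weighting), and in practice each term would be computed by the corresponding network component rather than passed in as a scalar.

```python
# Illustrative weights only -- the paper combines these five terms,
# but these particular values are placeholders, not the paper's.
DEFAULT_WEIGHTS = {
    "reconstruction": 1.0,
    "perceptual": 0.1,
    "style": 0.1,
    "temporal": 0.1,
    "adversarial": 0.01,
}

def composite_loss(terms, weights=None):
    """Weighted sum of the per-term losses named in the abstract.

    terms: dict mapping each loss name to its scalar value for a batch.
    """
    weights = DEFAULT_WEIGHTS if weights is None else weights
    missing = set(weights) - set(terms)
    if missing:
        raise ValueError(f"missing loss terms: {sorted(missing)}")
    return sum(w * terms[name] for name, w in weights.items())
```

The design point is simply that the reconstruction term anchors pixel fidelity while the perceptual, style, temporal, and adversarial terms are weighted down as regularisers of appearance and temporal consistency.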

  • Journal article
    Cartucho J, Tukra S, Li Y, S Elson D, Giannarou Set al., 2020,

    VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery

    , Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Pages: 1-8, ISSN: 2168-1163

    Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real time. The main problem, however, is that the training or testing of such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent, and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with a special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, and we present one such application in which the generated data have been used to train and evaluate state-of-the-art 3D reconstruction algorithms. By enabling realistic endoscopic datasets to be generated efficiently, VisionBlender promises an exciting step forward in robotic surgery.
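Two of the ground-truth maps listed above — depth and disparity — are linked by standard rectified-stereo geometry (disparity = focal length × baseline / depth). The sketch below illustrates that relation; it is textbook geometry, not VisionBlender's actual API, and the function name and list-of-lists map representation are ours.

```python
def depth_to_disparity(depth, focal_px, baseline_m):
    """Disparity map (pixels) from a depth map (metres) for a
    rectified stereo pair: disparity = focal * baseline / depth.

    depth:      2D list of per-pixel depths in metres.
    focal_px:   camera focal length in pixels.
    baseline_m: stereo baseline in metres.
    Non-positive depth (no surface hit) maps to zero disparity.
    """
    return [[focal_px * baseline_m / z if z > 0 else 0.0 for z in row]
            for row in depth]
```

With the camera parameters that the generator also exports, a consistency check like this can validate a synthetic dataset, or convert between the two ground-truth representations as a 3D reconstruction algorithm requires.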

  • Journal article
    Kedrzycki MS, Elson DS, Leff DR, 2020, ASO author reflections: fluorescence-guided sentinel node biopsy for breast cancer, Annals of Surgical Oncology, ISSN: 1068-9265

  • Journal article
    Davids J, Manivannan S, Darzi A, Giannarou S, Ashrafian H, Marcus HJ, et al., 2020, Simulation for skills training in neurosurgery: a systematic review, meta-analysis, and analysis of progressive scholarly acceptance, Neurosurgical Review, ISSN: 0344-5607

    At a time of significant global unrest and uncertainty surrounding how the delivery of clinical training will unfold over the coming years, we offer a systematic review, meta-analysis, and bibliometric analysis of global studies showing the crucial role simulation will play in training. Our aim was to determine the types of simulators in use, their effectiveness in improving clinical skills, and whether we have reached a point of global acceptance. A PRISMA-guided global systematic review of the available neurosurgical simulators, a meta-analysis of their effectiveness, and an extended analysis of their progressive scholarly acceptance were performed on studies meeting our inclusion criteria of simulation in neurosurgical education. Improvement in procedural knowledge and technical skills was evaluated. Of the 7405 studies identified, 56 met the inclusion criteria, collectively reporting 50 simulator types ranging from cadaveric, low-fidelity, and part-task simulators to virtual reality (VR) simulators. In all, 32 studies were included in the meta-analysis, including 7 randomised controlled trials. A random-effects, ratio-of-means effect measure quantified statistically significant improvement in procedural knowledge by 50.2% (ES 0.502; CI 0.355 to 0.649; p < 0.001), in technical skill including accuracy by 32.5% (ES 0.325; CI -0.482 to -0.167; p < 0.001), and in speed by 25% (ES -0.25; CI -0.399 to -0.107; p < 0.001). The initial number of VR studies (n = 91) was approximately double the number of refining studies (n = 45), indicating that VR is yet to reach progressive scholarly acceptance. There is strong evidence for a beneficial impact of adopting simulation on the improvement of procedural knowledge and technical skill. We show a growing trend towards the adoption of neurosurgical simulators, although VR-based simulation technologies have not yet fully gained progressive scholarly acceptance in neurosurgical education.
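For a single study, a ratio-of-means effect measure of the kind used above is commonly computed on the log scale and back-transformed to a percentage change. The sketch below shows that per-study computation only; the random-effects pooling across studies that the meta-analysis performs, and the exact mapping between the reported ES values and percentages, are beyond this illustration.

```python
import math

def log_ratio_of_means(treatment, control):
    """Log ratio-of-means effect size for one study:
    ln(mean(treatment) / mean(control)).
    Positive values indicate improvement when higher scores are better.
    """
    mean_t = sum(treatment) / len(treatment)
    mean_c = sum(control) / len(control)
    return math.log(mean_t / mean_c)

def percent_change(log_rom):
    """Back-transform a log ratio of means to a percentage change."""
    return (math.exp(log_rom) - 1.0) * 100.0
```

For example, post-simulation scores averaging 1.5 times the baseline scores give a log ratio of ln(1.5) and a back-transformed improvement of 50%.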

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
