The Centre has a long history of developing new techniques for medical imaging (particularly magnetic resonance imaging), transforming it from a primarily diagnostic modality into an interventional and therapeutic platform. This work is underpinned by the Centre's strong engineering background in practical imaging and image analysis platform development, together with advances in minimal access and robot-assisted surgery. Hamlyn has a strong tradition of pursuing basic science and theoretical research, with a clear focus on clinical translation.

In response to the current paradigm shift and clinical demand for bringing cellular and molecular imaging modalities into an in vivo, in situ setting during surgical intervention, our recent research has also focused on novel biophotonics platforms for real-time tissue characterisation, functional assessment, and intraoperative guidance during minimally invasive surgery. Examples include SMART confocal laser endomicroscopy, time-resolved fluorescence spectroscopy, and flexible fluorescence lifetime imaging (FLIM) catheters.

Citation

BibTeX format

@article{Tukra:2021:10.1109/TPAMI.2021.3058410,
author = {Tukra, S and Marcus, HJ and Giannarou, S},
doi = {10.1109/TPAMI.2021.3058410},
journal = {IEEE Trans Pattern Anal Mach Intell},
title = {See-Through Vision with Unsupervised Scene Occlusion Reconstruction.},
url = {http://dx.doi.org/10.1109/TPAMI.2021.3058410},
volume = {PP},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Among the greatest challenges of Minimally Invasive Surgery (MIS) is the inadequate visualisation of the surgical field through keyhole incisions. Moreover, occlusions caused by instruments or bleeding can completely obscure anatomical landmarks, reduce surgical vision and lead to iatrogenic injury. The aim of this paper is to propose an unsupervised end-to-end deep learning framework, based on Fully Convolutional Neural Networks, to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intraoperative see-through vision in these areas. A novel generative densely connected encoder-decoder architecture has been designed which enables the incorporation of temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information. To train the proposed framework, a unique loss function has been proposed which combines perceptual, reconstruction, style, temporal and adversarial loss terms for generating high-fidelity image reconstructions. Advancing the state of the art, our method can reconstruct the underlying view obstructed by irregularly shaped occlusions of divergent size, location and orientation. The proposed method has been validated on in vivo MIS video data, as well as natural scenes, over a range of occlusion-to-image ratios (OIR).
AU - Tukra,S
AU - Marcus,HJ
AU - Giannarou,S
DO - 10.1109/TPAMI.2021.3058410
PY - 2021///
TI - See-Through Vision with Unsupervised Scene Occlusion Reconstruction.
T2 - IEEE Trans Pattern Anal Mach Intell
UR - http://dx.doi.org/10.1109/TPAMI.2021.3058410
UR - https://www.ncbi.nlm.nih.gov/pubmed/33566758
VL - PP
ER -
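
The abstract above names a 3D partial convolution as the building block that fuses spatial and temporal information while ignoring occluded pixels. The paper's exact operator is not reproduced here; as an illustration only, the sketch below extends the 2D partial convolution of Liu et al. (ECCV 2018) to a spatio-temporal volume in PyTorch. The class name PartialConv3d, the hyperparameters and the mask-update rule are assumptions made for this sketch, not the authors' implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv3d(nn.Module):
    """Illustrative 3D partial convolution (assumed form): convolve only
    over observed voxels, re-normalise each response by the number of
    valid voxels under the kernel window, and update the occlusion mask
    so that holes shrink layer by layer."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        # Bias is omitted so the re-normalisation below stays exact.
        self.conv = nn.Conv3d(in_ch, out_ch, kernel_size,
                              stride=stride, padding=padding, bias=False)
        # Fixed all-ones kernel, used only to count observed voxels.
        self.register_buffer(
            "ones", torch.ones(1, 1, kernel_size, kernel_size, kernel_size))
        self.window = float(kernel_size ** 3)
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x:    (B, C, T, H, W) feature volume over T stacked frames
        # mask: (B, 1, T, H, W) with 1 = observed voxel, 0 = occluded
        with torch.no_grad():
            valid = F.conv3d(mask, self.ones,
                             stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        # Scale by window_size / n_valid and zero out fully occluded windows.
        out = out * (self.window / valid.clamp(min=1.0)) * (valid > 0).float()
        return out, (valid > 0).float()

# Example: five consecutive frames with roughly 20% of voxels occluded.
pconv = PartialConv3d(3, 32)
frames = torch.rand(1, 3, 5, 128, 128)
mask = (torch.rand(1, 1, 5, 128, 128) > 0.2).float()
features, new_mask = pconv(frames, mask)

Stacking such layers in a densely connected encoder-decoder, trained against a weighted combination of the perceptual, reconstruction, style, temporal and adversarial loss terms the abstract lists, would reproduce the overall shape of the framework; the precise architecture and loss weightings are those specified in the paper.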