A Novel AI Framework of Image Reconstruction for Minimally Invasive Surgery

See-Through Vision with Unsupervised Scene Occlusion Reconstruction

Our Hamlyn researchers have proposed a new unsupervised deep learning framework for image reconstruction, aiming to assist minimally invasive surgery.

Minimally Invasive Surgery (MIS) offers several advantages over traditional open surgery, including less postoperative pain, fewer wound complications and reduced hospitalisation.

However, MIS presents multiple challenges; among the greatest is the inadequate visualisation of the surgical field through keyhole incisions.

Moreover, occlusions caused by instruments or bleeding can completely obscure anatomical landmarks, reduce surgical vision and lead to iatrogenic injury.

A Novel AI Unsupervised End-to-end Deep Learning Framework for Image Reconstruction

Overview of the proposed generator G model. Solid lines are forward propagation and dotted lines are skip connections.

To solve this problem, our research team at the Hamlyn Centre proposed a new unsupervised end-to-end deep learning framework, based on Fully Convolutional Networks (FCNs), to reconstruct the view of the surgical scene under occlusions and provide the surgeon with intra-operative see-through vision in these areas.

A novel generative, densely connected encoder-decoder architecture has been designed that incorporates temporal information by introducing a new type of 3D convolution, the so-called 3D partial convolution, to enhance the learning capabilities of the network and fuse temporal and spatial information.
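The article does not give the exact formulation of the 3D partial convolution, but the core idea of a partial convolution can be sketched as follows: only unoccluded pixels contribute to each response, the response is re-normalised by the fraction of valid pixels, and the validity mask is updated as information propagates. The function name and patch-based framing below are illustrative, not the paper's implementation.

```python
import numpy as np

def partial_conv3d_at(x_patch, mask_patch, weight, bias=0.0):
    """One output position of a 3D partial convolution (illustrative sketch).

    x_patch, mask_patch, weight: arrays of shape (T, H, W), i.e. a
    spatio-temporal receptive field. mask_patch is 1 where pixels are
    valid and 0 inside the occlusion. Only valid pixels contribute, and
    the response is re-normalised by the fraction of valid pixels so
    occluded regions do not bias the output.
    """
    valid = mask_patch.sum()
    if valid == 0:
        return 0.0, 0.0                      # fully occluded patch: output stays masked
    scale = mask_patch.size / valid          # re-normalisation factor
    out = (weight * x_patch * mask_patch).sum() * scale + bias
    return float(out), 1.0                   # mask update: this position becomes valid
```

Sliding this over every spatio-temporal position and feeding the updated mask to the next layer lets valid context progressively "fill in" the occluded region, layer by layer.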

The generator G model. In the encoder stream, the red, green and yellow blocks denote convolution filters of size 7x7x3, 5x5x3 and 3x3x3, respectively. The blue blocks between the skip connections of stream 1 are bottleneck layers. In the decoder, all convolution filters are 2D.

The dense connections within the network allow the decoder to utilise all the information gathered from each layer to reconstruct and refine the output in a hierarchical manner.

To train the proposed framework, a loss function has been proposed that combines feature matching, reconstruction, style, temporal and adversarial terms to generate high-fidelity image reconstructions. Combining multiple losses enables the network to capture the features essential for reconstruction.
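A training objective of this kind is typically a weighted sum of the individual terms. The weights and the masked-L1 reconstruction term below are hypothetical examples for illustration; the article does not give the paper's actual coefficients or term definitions.

```python
import numpy as np

# Hypothetical weights — the paper combines these five terms, but the
# exact balancing coefficients are not given in this article.
LOSS_WEIGHTS = {"reconstruction": 1.0, "feature_matching": 10.0,
                "style": 120.0, "temporal": 1.0, "adversarial": 0.1}

def masked_l1(pred, target, mask):
    """Example reconstruction term: mean L1 error inside the occluded
    (mask == 0) region, where the network must hallucinate content."""
    hole = 1.0 - mask
    return float(np.abs((pred - target) * hole).sum() / max(hole.sum(), 1.0))

def total_loss(terms, weights=LOSS_WEIGHTS):
    """Combine the individual loss terms into one training objective."""
    return sum(weights[name] * value for name, value in terms.items())
```

The reconstruction term anchors pixel accuracy, while feature-matching and style terms (usually computed on deep features) encourage perceptually plausible texture, and the temporal and adversarial terms enforce frame-to-frame consistency and realism.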

Occlusion reconstruction for MIS sequences. The top row is the original video broken down into sequential frames, and the bottom row is the same frames regenerated by the proposed AI model with the occluding object removed.

The proposed method can reconstruct the underlying view obstructed by irregularly shaped occlusions of varying size, location and orientation, and has been validated on in-vivo MIS video data, as well as natural scenes, across a range of occlusion-to-image ratios (OIR).
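The occlusion-to-image ratio is simply the fraction of the frame covered by the occlusion mask. A minimal sketch, assuming a binary mask convention (1 = occluded):

```python
import numpy as np

def occlusion_to_image_ratio(mask):
    """OIR: fraction of image pixels covered by the occlusion.

    mask is a binary array, 1 where the frame is occluded, 0 elsewhere.
    """
    mask = np.asarray(mask)
    return float(mask.sum()) / mask.size
```

Evaluating across a range of OIR values shows how reconstruction quality degrades as less of the true scene remains visible.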

Occlusion recovery on natural scenes




Furthermore, a comparative evaluation against state-of-the-art video inpainting models verified the superior performance of the proposed method and its potential clinical value.

Our research team is planning to focus next on making the proposed model run in real time for online occlusion removal, and on tackling other challenging occlusions in surgery, such as smoke and blood.



Dr. Giannarou and Mr. Tukra are supported by the Royal Society (UF140290 and RGF\EA\180084) and the NIHR Imperial Biomedical Research Centre (BRC). Mr. Marcus is supported by the NIHR University College London (UCL) Biomedical Research Centre (BRC) and the Wellcome/EPSRC Centre for Interventional and Surgical Sciences (WEISS). The research, 'See-Through Vision with Unsupervised Scene Occlusion Reconstruction', was published in IEEE Transactions on Pattern Analysis and Machine Intelligence on 10 February 2021.

Reporter

Erh-Ya (Asa) Tsui
Enterprise

Contact details

Tel: +44 (0)20 7594 8783
Email: e.tsui@imperial.ac.uk

Tags:

Surgery, Imaging, Global-challenges-Engineering, Artificial-intelligence, Research