# Professor Dan Elson

Faculty of Medicine, Department of Surgery & Cancer

Professor of Surgical Imaging


### Contact

+44 (0)20 7594 1700
daniel.elson


### Location

415 Bessemer Building, South Kensington Campus


## Publications


418 results found

Kedrzycki MS, Leiloglou M, Ashrafian H, Jiwa N, Thiruchelvam PTR, Elson DS, Leff DR et al., 2021, Meta-analysis comparing fluorescence imaging with radioisotope and blue dye-guided sentinel node identification for breast cancer surgery, Annals of Surgical Oncology, Vol: 28, Pages: 3738-3748, ISSN: 1068-9265

INTRODUCTION: Conventional methods for axillary sentinel lymph node biopsy (SLNB) are fraught with complications such as allergic reactions, skin tattooing, radiation, and limitations on infrastructure. A novel technique has been developed for lymphatic mapping utilizing fluorescence imaging. This meta-analysis aims to compare the gold standard blue dye and radioisotope (BD-RI) technique with fluorescence-guided SLNB using indocyanine green (ICG). METHODS: This study was registered with PROSPERO (CRD42019129224). The MEDLINE, EMBASE, Scopus, and Web of Science databases were searched using the Medical Subject Heading (MESH) terms 'Surgery' AND 'Lymph node' AND 'Near infrared fluorescence' AND 'Indocyanine green'. Studies containing raw data on the sentinel node identification rate in breast cancer surgery were included. A heterogeneity test (using Cochran's Q) determined the use of fixed- or random-effects models for pooled odds ratios (OR). RESULTS: Overall, 1748 studies were screened, of which 10 met the inclusion criteria for meta-analysis. ICG was equivalent to radioisotope (RI) at sentinel node identification (OR 2.58, 95% confidence interval [CI] 0.35-19.08, p < 0.05) but superior to blue dye (BD) (OR 9.07, 95% CI 6.73-12.23, p < 0.05). Furthermore, ICG was superior to the gold standard BD-RI technique (OR 4.22, 95% CI 2.17-8.20, p < 0.001). CONCLUSION: Fluorescence imaging for axillary sentinel node identification with ICG is equivalent to the single technique using RI, and superior to the dual technique (RI-BD) and single technique with BD. Hospitals using RI and/or BD could consider changing their practice to ICG given the comparable efficacy and improved safety profile, as well as the lesser burden on hospital infrastructure.
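The pooling procedure the abstract describes (Cochran's Q choosing between a fixed-effect and a random-effects model for the pooled odds ratio) can be sketched as follows. This is an illustrative sketch only: the function name and the per-study ORs and confidence intervals below are hypothetical, not data from the paper.

```python
import numpy as np
from scipy import stats

def pool_odds_ratios(or_values, ci_low, ci_high, alpha=0.05):
    """Pool study odds ratios on the log scale.

    Cochran's Q decides between a fixed-effect (inverse-variance)
    and a DerSimonian-Laird random-effects model. Inputs are each
    study's OR with its 95% CI bounds."""
    y = np.log(or_values)                              # log odds ratios
    se = (np.log(ci_high) - np.log(ci_low)) / (2 * 1.96)
    w = 1.0 / se**2                                    # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - y_fixed)**2)                   # Cochran's Q statistic
    df = len(y) - 1
    if stats.chi2.sf(Q, df) < alpha:                   # heterogeneity detected
        c = np.sum(w) - np.sum(w**2) / np.sum(w)
        tau2 = max(0.0, (Q - df) / c)                  # DerSimonian-Laird tau^2
        w = 1.0 / (se**2 + tau2)                       # random-effects weights
    y_pool = np.sum(w * y) / np.sum(w)
    se_pool = np.sqrt(1.0 / np.sum(w))
    ci = np.exp([y_pool - 1.96 * se_pool, y_pool + 1.96 * se_pool])
    return np.exp(y_pool), ci

# Hypothetical per-study ORs with their 95% CI bounds
or_pooled, (lo, hi) = pool_odds_ratios(
    np.array([8.5, 9.3, 10.1]),
    np.array([5.0, 6.1, 6.8]),
    np.array([14.4, 14.2, 15.0]),
)
print(or_pooled, lo, hi)
```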

Journal article

Lin J, Clancy NT, Qi J, Hu Y, Tatla T, Stoyanov D, Maier-Hein L, Elson DS et al., 2021, Corrigendum to "Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks" [Medical Image Analysis 48 (2018) 162-176/2018.06.004], Medical Image Analysis, Vol: 72, Pages: 1-1, ISSN: 1361-8415

The first version of this article neglected to mention that this work was additionally supported by ERC award 637960. This has now been corrected online. The authors would like to apologise for any inconvenience caused.

Journal article

Leiloglou M, Chalau V, Kedrzycki MS, Thiruchelvam P, Darzi A, Leff DR, Elson DS et al., 2021, Tissue texture extraction in indocyanine green fluorescence imaging for breast-conserving surgery, Journal of Physics D: Applied Physics, Vol: 54, ISSN: 0022-3727

A two-camera fluorescence system for indocyanine green (ICG) signal detection has been developed and tested in a clinical feasibility trial of ten patients, with submillimetre resolution. Immediately after systemic ICG injection, the two-camera system can detect ICG signals in vivo (~2.5 mg l⁻¹, or 3.2 × 10⁻⁶ M). Qualitative assessment has shown that the fluorescence signal does not always correlate with the cancer location in the surgical scene. Conversely, fluorescence image texture metrics used with a logistic regression model yield good accuracy scores in detecting cancer. We have demonstrated that intraoperative fluorescence imaging for resection guidance is a feasible solution to tackle the current challenge of positive resection margins in breast conserving surgery.
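The texture-metrics-plus-logistic-regression idea can be illustrated with a toy sketch. Everything here is invented for illustration: the feature set is a generic intensity-texture summary (not the metrics used in the study), and the "tumour" patches are simply modelled as having higher local variance than "normal" ones.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def texture_features(patch):
    """Generic intensity-texture summary of an image patch:
    mean, standard deviation, and mean absolute gradient."""
    gy, gx = np.gradient(patch.astype(float))
    return [patch.mean(), patch.std(), np.abs(gx).mean() + np.abs(gy).mean()]

# Synthetic 16x16 fluorescence patches: the two classes differ
# only in local variance -- an assumption for illustration.
normal = [rng.normal(100, 2, (16, 16)) for _ in range(50)]
tumour = [rng.normal(100, 10, (16, 16)) for _ in range(50)]
X = np.array([texture_features(p) for p in normal + tumour])
y = np.array([0] * 50 + [1] * 50)   # 0 = normal, 1 = tumour

clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))              # training accuracy on the toy data
```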

Journal article

Kedrzycki MS, Leiloglou M, Chalau V, Lin J, Thiruchelvam PTR, Elson DS, Leff DR et al., 2021, Guiding light to optimize wide local excisions: the "GLOW" study, Volume XXII 2021 Annual Meeting Scientific Session, Publisher: Springer, Pages: S199-S200, ISSN: 1068-9265

Conference paper

Kedrzycki M, Leiloglou M, Leff D, Elson D, Chalau V, Thiruchelvam P, Darzi A et al., 2021, Versatility in Fluorescence Guided Surgery with the GLOW Camera System, Surgical Life: The Journal of the Association of Surgeons of Great Britain and Ireland

Journal article

Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D et al., 2021, Ethical implications of AI in robotic surgical training: A Delphi consensus statement, European Urology Focus, ISSN: 2405-4569

Context: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. Objectives: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI and available technologies, by seeking consensus from an expert committee. Evidence acquisition: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. 30 experts in AI implementation and/or training, including clinicians, academics and industry, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. Evidence synthesis: There was a 100% response rate across all 3 rounds. The resulting formulated guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: 1. Data protection and privacy; 2. Reproducibility and transparency; 3. Predictive analytics; 4. Inherent biases; 5. Areas of training most likely to benefit from AI. Conclusions: Using the Delphi methodology, we achieved international consensus among experts to develop and reach

Journal article

Cartucho J, Tukra S, Li Y, Elson DS, Giannarou S et al., 2020, VisionBlender: a tool to efficiently generate computer vision datasets for robotic surgery, Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization, Pages: 1-8, ISSN: 2168-1163

Surgical robots rely on robust and efficient computer vision algorithms to be able to intervene in real-time. The main problem, however, is that the training or testing of such algorithms, especially when using deep learning techniques, requires large endoscopic datasets which are challenging to obtain, since they require expensive hardware, ethical approvals, patient consent and access to hospitals. This paper presents VisionBlender, a solution to efficiently generate large and accurate endoscopic datasets for validating surgical vision algorithms. VisionBlender is a synthetic dataset generator that adds a user interface to Blender, allowing users to generate realistic video sequences with ground truth maps of depth, disparity, segmentation masks, surface normals, optical flow, object pose, and camera parameters. VisionBlender was built with special focus on robotic surgery, and examples of endoscopic data that can be generated using this tool are presented. Possible applications are also discussed, and here we present one of those applications where the generated data has been used to train and evaluate state-of-the-art 3D reconstruction algorithms. Being able to generate realistic endoscopic datasets efficiently, VisionBlender promises an exciting step forward in robotic surgery.

Journal article

Kedrzycki MS, Elson DS, Leff DR, 2020, ASO author reflections: fluorescence-guided sentinel node biopsy for breast cancer, Annals of Surgical Oncology, Vol: 28, Pages: 3749-3750, ISSN: 1068-9265

Journal article

Ahmad OF, Mori Y, Misawa M, Kudo S-E, Anderson JT, Bernal J, Berzin TM, Bisschops R, Byrne MF, Chen P-J, East J, Eelbode T, Elson DS, Gurudu S, Histace A, Karnes WE, Repici A, Singh R, Valdastri P, Wallace MB, Wang P, Stoyanov D, Lovat LB et al., 2020, Establishing key research questions for the implementation of artificial intelligence in colonoscopy - a modified Delphi method, Endoscopy, ISSN: 0013-726X

Background and Aims: Artificial intelligence (AI) research in colonoscopy is progressing rapidly but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. Methods: An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers from 9 countries, participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long-list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. Results: The top 10 ranked questions were categorised into 5 themes. Theme 1: Clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterisation, determining the optimal end-points for evaluation of AI and demonstrating impact on interval cancer rates. Theme 2: Technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false positive rates and minimising latency. Theme 3: Clinical adoption/integration (1 question), concerning effective combination of detection and characterisation into one workflow. Theme 4: Data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: Regulatory approval (1 question), related to making regulatory approval processes more efficient. Conclusions: This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.

Journal article

Huang B, Tsai Y-Y, Cartucho J, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2020, Tracking and visualization of the sensing area for a tethered laparoscopic gamma probe, International Journal of Computer Assisted Radiology and Surgery, Vol: 15, Pages: 1389-1397, ISSN: 1861-6410

Purpose: In surgical oncology, complete cancer resection and lymph node identification are challenging due to the lack of reliable intraoperative visualization. Recently, endoscopic radio-guided cancer resection has been introduced, where a novel tethered laparoscopic gamma detector can be used to determine the location of tracer activity, which can complement preoperative nuclear imaging data and endoscopic imaging. However, these probes do not clearly indicate where on the tissue surface the activity originates, making localization of pathological sites difficult and increasing the mental workload of the surgeons. Therefore, a robust real-time gamma probe tracking system integrated with augmented reality is proposed. Methods: A dual-pattern marker has been attached to the gamma probe, which combines chessboard vertices and circular dots for higher detection accuracy. Both patterns are detected simultaneously based on blob detection and a pixel-intensity-based vertex detector, and used to estimate the pose of the probe. Temporal information is incorporated into the framework to reduce tracking failure. Furthermore, we utilized the 3D point cloud generated from structure from motion to find the intersection between the probe axis and the tissue surface. When presented as an augmented image, this can provide visual feedback to the surgeons. Results: The method has been validated with ground truth probe pose data generated using the OptiTrack system. When detecting the orientation of the pose using circular dots and chessboard dots alone, the mean error obtained is 0.05° and 0.06°, respectively. As for the translation, the mean error for each pattern is 1.78 mm and 1.81 mm. The detection limits for pitch, roll and yaw are 360°, 360° and 8°–82° ∪ 188°–352°. Conclusion: The performance evaluation results show that this dual-pattern marker can provide high detection rates, as well as more accurate pose estimation and a larger workspace than the previously proposed hyb
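The visualization step, intersecting the tracked probe axis with the tissue surface, reduces to ray-surface geometry. A minimal sketch follows, with the simplifying assumption that the reconstructed point cloud is approximated locally by a fitted plane; the function name and coordinates are hypothetical.

```python
import numpy as np

def probe_axis_surface_intersection(origin, direction, plane_point, plane_normal):
    """Intersect the probe axis (a ray from the probe tip) with a
    locally planar patch of the tissue surface. Returns the 3D hit
    point, or None if the axis is parallel to the surface or the
    surface lies behind the probe. Coordinates are in the camera frame."""
    d = np.asarray(direction, float)
    d = d / np.linalg.norm(d)                 # unit axis direction
    n = np.asarray(plane_normal, float)
    denom = np.dot(n, d)
    if abs(denom) < 1e-9:                     # axis parallel to surface
        return None
    t = np.dot(n, np.asarray(plane_point, float) - np.asarray(origin, float)) / denom
    if t < 0:                                 # surface behind the probe tip
        return None
    return np.asarray(origin, float) + t * d

# Probe tip at the origin, pointing along +z, tissue plane 50 mm away
hit = probe_axis_surface_intersection(
    origin=[0.0, 0.0, 0.0],
    direction=[0.0, 0.0, 1.0],
    plane_point=[0.0, 0.0, 50.0],
    plane_normal=[0.0, 0.0, -1.0],
)
print(hit)  # expected: hit 50 mm along +z
```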

Journal article

Kedrzycki M, Leiloglou M, Ashrafian H, Jiwa N, Thiruchelvam P, Elson D, Leff D et al., 2020, Meta-analysis of Sentinel Node Mapping Techniques Comparing Near-infrared Fluorescence Imaging to Blue Dye and Radioisotope, 21st Annual Meeting of the American-Society-of-Breast-Surgeons (ASBS), Publisher: Springer, Pages: S535-S536, ISSN: 1068-9265

Conference paper

Kedrzycki M, Leiloglou M, Chalau V, Thiruchelvam P, Lin J, Elson D, Leff D et al., 2020, First-in-Human Study Using the 'GLOW' Near Infrared Camera System in Breast Cancer, 21st Annual Meeting of the American-Society-of-Breast-Surgeons (ASBS), Publisher: Springer, Pages: S382-S383, ISSN: 1068-9265

Conference paper

Clancy NT, Jones G, Maier-Hein L, Elson DS, Stoyanov D et al., 2020, Surgical spectral imaging, Medical Image Analysis, Vol: 63, Pages: 1-18, ISSN: 1361-8415

Recent technological developments have resulted in the availability of miniaturised spectral imaging sensors capable of operating in the multi- (MSI) and hyperspectral imaging (HSI) regimes. Simultaneous advances in image-processing techniques and artificial intelligence (AI), especially in machine learning and deep learning, have made these data-rich modalities highly attractive as a means of extracting biological information non-destructively. Surgery in particular is poised to benefit from this, as spectrally-resolved tissue optical properties can offer enhanced contrast as well as diagnostic and guidance information during interventions. This is particularly relevant for procedures where inherent contrast is low under standard white light visualisation. This review summarises recent work in surgical spectral imaging (SSI) techniques, taken from Pubmed, Google Scholar and arXiv searches spanning the period 2013-2019. New hardware, optimised for use in both open and minimally-invasive surgery (MIS), is described, and recent commercial activity is summarised. Computational approaches to extract spectral information from conventional colour images are reviewed, as tip-mounted cameras become more commonplace in MIS. Model-based and machine learning methods of data analysis are discussed in addition to simulation, phantom and clinical validation experiments. A wide variety of surgical pilot studies are reported but it is apparent that further work is needed to quantify the clinical value of MSI/HSI. The current trend toward data-driven analysis emphasises the importance of widely-available, standardised spectral imaging datasets, which will aid understanding of variability across organs and patients, and drive clinical translation.

Journal article

Zhao M, Oude Vrielink TJC, Kogkas A, Runciman M, Elson D, Mylonas G et al., 2020, LaryngoTORS: a novel cable-driven parallel robotic system for transoral laser phonosurgery, IEEE Robotics and Automation Letters, Vol: 5, Pages: 1516-1523, ISSN: 2377-3766

Transoral laser phonosurgery is a commonly used surgical procedure in which a laser beam is used to perform incision, ablation or photocoagulation of laryngeal tissues. Two techniques are commonly practiced: free beam and fiber delivery. For free beam delivery, a laser scanner is integrated into a surgical microscope to provide an accurate laser scanning pattern. This approach can only be used under direct line of sight, may cause increased postoperative pain and injury to the patient, is uncomfortable for the surgeon during prolonged operations, offers poor manipulability, and requires extensive training. In contrast, in the fiber delivery technique, a flexible fiber is used to transmit the laser beam and therefore does not require direct line of sight. However, this can only achieve manual-level accuracy, repeatability and velocity, and does not allow for pattern scanning. Robotic systems have been developed to overcome the limitations of both techniques. However, these systems offer limited workspace and degrees-of-freedom (DoF), limiting their clinical applicability. This work presents the LaryngoTORS, a robotic system that aims at overcoming the limitations of the two techniques by using a cable-driven parallel mechanism (CDPM) attached at the end of a curved laryngeal blade for controlling the end tip of the laser fiber. The system allows autonomous generation of scanning patterns or user-driven free-path scanning. Path scan validation demonstrated errors as low as 0.054±0.028 mm and high repeatability of 0.027±0.020 mm (6×2 mm arc line). Ex vivo tests on chicken tissue have been carried out. The results show the ability of the system to overcome limitations of current methods with high accuracy and repeatability using the superior fiber delivery approach.

Journal article

He C, Chang J, He H, Liu S, Elson DS, Ma H, Booth MJ et al., 2020, GRIN lens based polarization endoscope – from conception to application, Label-free Biomedical Imaging and Sensing (LBIS) 2020, Publisher: SPIE

Graded index (GRIN) lenses focus light through a radially symmetric refractive index profile. It is not widely appreciated that the ion-exchange process that creates the index profile also causes a radially symmetric birefringence variation. This property is usually considered a nuisance, such that manufacturing processes are optimized to keep it to a minimum. Here, a new Mueller matrix (MM) polarimeter is presented, based on a spatially engineered polarization state generating array and a GRIN lens cascade, for measuring the MM of a region of a sample in a single shot. We explore using the GRIN lens cascade as a functional analyzer to calculate multiple Stokes vectors and the MM of the target in a snapshot. A designed validation sample is used to test the reliability of this polarimeter. To explore further potential biomedical applications, human breast ductal carcinoma slides at two pathological progression stages are imaged with this polarimeter. The MM polar decomposition parameters can then be calculated from the measured MMs and quantitatively compared with the equivalent data sampled by a MM microscope. The results indicate that the polarimeter and the measured polarization parameters are capable of differentiating the healthy and carcinoma status of human breast tissue efficiently. It has potential to act as a polarization-detecting fiber-based probe to assist further minimally invasive clinical diagnosis.

Conference paper

Leiloglou M, Gkouzionis I, Avila-Rencoret FB, Chalau V, Kedrzycki M, Darzi A, Leff DR, Elson DS et al., 2020, Snapshot hyperspectral system for breast conserving surgery guidance

There is an unmet need for accurate tumour localization in vivo during breast conserving surgery. Herein a novel snapshot hyperspectral system is presented for accurately detecting the intrinsic fluorescence signal in real-time fluorescence guided surgery.

Conference paper

Elson D, 2020, Multispectral and polarization-resolved endoscopic surgical imaging (invited), SPIE Photonics Europe

Conference paper

Gkouzionis IA, Avila-Rencoret F, Peters C, Elson D et al., 2020, Hyperspectral circumferential resection margin assessment for gastrointestinal cancer surgery, Biophotonics and Imaging Graduate Summer School 2020

Conference paper

Kedrzycki M, Leiloglou M, Leff D, Elson D et al., 2020, Illuminating Cancer: A Systematic Review of Fluorophores available for Fluorescence Guided Surgery in Humans, European Molecular Imaging Meeting

Conference paper

Zhao M, Oude Vrielink J, Kogkas A, Runciman M, Elson D, Mylonas G et al., 2020, LaryngoTORS: A Novel Cable-Driven Parallel Robot for Transoral Laser Surgery, International Conference on Robotics and Automation (ICRA)

Conference paper

Zhao M, Zhang H, Mylonas GP, Elson DS et al., 2020, Cable-driven parallel robot assisted confocal imaging of the larynx

LaryngoTORS, a transoral laryngeal surgery robot, can manipulate instruments accurately. Confocal imaging has potential in laryngeal cancer diagnosis but suffers from demanding scanning requirements. This work studies the use of LaryngoTORS to assist confocal imaging of the larynx.

Conference paper

Kedrzycki M, Leiloglou M, Chalau V, Lin J, Thiruchelvam P, Leff D, Elson D et al., 2020, Fluorescence Guided Surgery Using Indocyanine Green to Demarcate Vasculature in Breast Cancer, London Surgery Symposium

Conference paper

Palaniappan V, Noble E, Qi J, Lewis J, Sahid SM, Reese G, Paraskeva P, Souvatzi, Stoyanov D, Murphy J, Elson D et al., 2020, Optical Polarization-resolved Imaging of Human Colon Cancer Tissue, London Surgery Symposium

Conference paper

Cartucho J, Tukra S, Li Y, Elson D, Giannarou S et al., 2020, VisionBlender: A Tool for Generating Computer Vision Datasets in Robotic Surgery (best paper award), Joint MICCAI 2020 Workshop on Augmented Environments for Computer-Assisted Interventions (AE-CAI), Computer-Assisted Endoscopy (CARE) and Context-Aware Operating Theatres 2.0 (OR2.0)

Conference paper

Elson D, 2020, Multispectral and polarization-resolved endoscopic surgical imaging (invited), Photon 2020

Conference paper

Huang B, Tsai Y-Y, Cartucho J, Tuch D, Giannarou S, Elson D et al., 2020, Tracking and Visualization of the Sensing Area for a Tethered Laparoscopic Gamma Probe, Information Processing in Computer Assisted Intervention (IPCAI)

Conference paper

He C, Chang J, Hu Q, Wang J, Antonello J, He H, Liu S, Lin J, Dai B, Elson DS, Xi P, Ma H, Booth MJ et al., 2019, Complex vectorial optics through gradient index lens cascades, Nature Communications, Vol: 10, Pages: 1-8, ISSN: 2041-1723

Graded index (GRIN) lenses are commonly used for compact imaging systems. It is not widely appreciated that the ion-exchange process that creates the rotationally symmetric GRIN lens index profile also causes a symmetric birefringence variation. This property is usually considered a nuisance, such that manufacturing processes are optimized to keep it to a minimum. Here, rather than avoiding this birefringence, we understand and harness it by using GRIN lenses in cascade with other optical components to enable extra functionality in commonplace GRIN lens systems. We show how birefringence in the GRIN cascades can generate vector vortex beams and foci, and how it can be used advantageously to improve axial resolution. Through using the birefringence for analysis, we show that the GRIN cascades form the basis of a new single-shot Müller matrix polarimeter with potential for endoscopic label-free cancer diagnostics. The versatility of these cascades opens up new technological directions.
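The Mueller-matrix algebra underlying such a polarimeter can be illustrated with a textbook case: an ideal linear polarizer acting on a Stokes vector. This is a generic sketch of the formalism, not the GRIN-cascade analyzer described in the paper.

```python
import numpy as np

def linear_polarizer_mueller(theta):
    """Mueller matrix of an ideal linear polarizer with its
    transmission axis at angle theta (radians). Acting on a
    4-component Stokes vector [I, Q, U, V], it models how an
    analyzer maps input polarization to measured intensity."""
    c, s = np.cos(2 * theta), np.sin(2 * theta)
    return 0.5 * np.array([
        [1,    c,    s,  0],
        [c,  c*c,  c*s,  0],
        [s,  c*s,  s*s,  0],
        [0,    0,    0,  0],
    ])

# Unpolarized light through a horizontal polarizer: half the
# power is transmitted, fully horizontally polarized.
s_in = np.array([1.0, 0.0, 0.0, 0.0])
s_out = linear_polarizer_mueller(0.0) @ s_in
print(s_out)  # half intensity, Q = I (horizontal polarization)
```

Adding a second polarizer at 90° extinguishes the beam, which is the standard sanity check for any Mueller-matrix implementation.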

Journal article

Chabloz N, Wenzel M, Perry H, Yoon I, Molisso S, Stasiuk G, Elson D, Cass A, Wilton-Ely J et al., 2019, Polyfunctionalised nanoparticles bearing robust gadolinium surface units for high relaxivity performance in MRI, Chemistry - A European Journal, Vol: 25, Pages: 10895-10906, ISSN: 0947-6539

The first example of an octadentate gadolinium unit based on DO3A (hydration number q = 1) with a dithiocarbamate tether has been designed and attached to the surface of gold nanoparticles (around 4.4 nm in diameter). In addition to the superior robustness of this attachment, the restricted rotation of the Gd complex on the nanoparticle surface leads to a dramatic increase in relaxivity (r1) from 4.0 mM⁻¹ s⁻¹ in unbound form to 34.3 mM⁻¹ s⁻¹ (at 10 MHz, 37 °C) and 22 ± 2 mM⁻¹ s⁻¹ (at 63.87 MHz, 25 °C) when immobilised on the surface. The 'one-pot' synthetic route provides a straightforward and versatile way of preparing a range of multifunctional gold nanoparticles. The incorporation of additional surface units improving biocompatibility (PEG and thioglucose units) and targeting (folic acid) leads to little detrimental effect on the high relaxivity observed for these non-toxic multifunctional materials. In addition to the passive targeting attributed to gold nanoparticles, the inclusion of a unit capable of targeting the folate receptors overexpressed by cancer cells, such as HeLa cells, illustrates the potential of these assemblies.

Journal article

Li Q, Lin J, Clancy NT, Elson DS et al., 2019, Estimation of tissue oxygen saturation from RGB images and sparse hyperspectral signals based on conditional generative adversarial network, International Journal of Computer Assisted Radiology and Surgery, Vol: 14, Pages: 987-995, ISSN: 1861-6410

Purpose: Intra-operative measurement of tissue oxygen saturation (StO2) is important in detection of ischaemia, monitoring perfusion and identifying disease. Hyperspectral imaging (HSI) measures the optical reflectance spectrum of the tissue and uses this information to quantify its composition, including StO2. However, real-time monitoring is difficult due to capture rate and data processing time. Methods: An endoscopic system based on a multi-fibre probe was previously developed to sparsely capture HSI data (sHSI). These were combined with RGB images, via a deep neural network, to generate high-resolution hypercubes and calculate StO2. To improve accuracy and processing speed, we propose a dual-input conditional generative adversarial network, Dual2StO2, to directly estimate StO2 by fusing features from both RGB and sHSI. Results: Validation experiments were carried out on in vivo porcine bowel data, where the ground truth StO2 was generated from the HSI camera. Performance was also compared to our previous super-spectral-resolution network, SSRNet, in terms of mean StO2 prediction accuracy and structural similarity metrics. Dual2StO2 was also tested using simulated probe data with varying fibre number. Conclusions: StO2 estimation by Dual2StO2 is visually closer to ground truth in general structure and achieves higher prediction accuracy and faster processing speed than SSRNet. Simulations showed that results improved when a greater number of fibres are used in the probe. Future work will include refinement of the network architecture, hardware optimization based on simulation results, and evaluation of the technique in clinical applications beyond StO2 estimation.

Journal article

Brunckhorst O, Ong QJ, Elson D, Mayer E et al., 2019, Novel real-time optical imaging modalities for the detection of neoplastic lesions in urology: a systematic review, Surgical Endoscopy, Vol: 33, Pages: 1349-1367, ISSN: 0930-2794

Background: Current optical diagnostic techniques for malignancies are limited in their diagnostic accuracy and lack the ability to further characterise disease, leading to the rapidly increasing development of novel imaging methods within urology. This systematic review critically appraises the literature for novel imaging modalities in the detection and staging of urological cancer and assesses their effectiveness via their utility and accuracy. Methods: A systematic literature search utilising the MEDLINE, EMBASE and Cochrane Library databases was conducted from 1970 to September 2018 by two independent reviewers. Studies were included if they assessed real-time imaging modalities not already approved in guidelines, in vivo and in humans. Outcome measures included diagnostic accuracy and utility parameters, including feasibility and cost. Results: Of 5475 articles identified from screening, a final 46 were included. Imaging modalities for bladder cancer included optical coherence tomography (OCT), confocal laser endomicroscopy, autofluorescence and spectroscopic techniques. OCT was the most widely investigated, with 12 studies demonstrating improvements in overall diagnostic accuracy (sensitivity 74.5–100% and specificity 60–98.5%). Upper urinary tract malignancy diagnosis was assessed using photodynamic diagnosis (PDD), narrow band imaging, optical coherence tomography and confocal laser endomicroscopy. Only PDD demonstrated consistent improvements in overall diagnostic accuracy in five trials (sensitivity 94–96% and specificity 96.6–100%). Limited evidence for optical coherence tomography in percutaneous renal biopsy was identified, with anecdotal evidence for any modality in penile cancer. Conclusions: Evidence supporting the efficacy of the identified novel imaging modalities remains limited at present. However, OCT for bladder cancer and PDD in upper tract malignancy demonstrate the best potential for improvement in overall diagnostic accuracy. OCT may addit

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
