Imperial College London

Professor Dan Elson

Faculty of Medicine, Department of Surgery & Cancer

Professor of Surgical Imaging
 
 
 

Contact

 

+44 (0)20 7594 1700 | daniel.elson | Website

 
 

Location

 

415 Bessemer Building, South Kensington Campus


 

Publications


451 results found

Nazarian S, Gkouzionis I, Kawka M, Jamroziak M, Lloyd J, Darzi A, Patel N, Elson DS, Peters CJ, et al., 2022, Real-time tracking and classification of tumour and non-tumour tissue in upper gastrointestinal cancers using diffuse reflectance spectroscopy for resection margin assessment, JAMA Surgery, ISSN: 2168-6254

Importance: Cancers of the upper gastrointestinal tract remain a major contributor to the global cancer burden. The accurate mapping of tumour margins is of particular importance for curative cancer resection and improvement in overall survival. Current mapping techniques preclude a full resection margin assessment in real time. Objective: We aimed to use diffuse reflectance spectroscopy on gastric and oesophageal cancer specimens to differentiate tissue types and provide real-time feedback to the operator. Design: This was a prospective ex vivo validation study. Patients undergoing oesophageal or gastric cancer resection were prospectively recruited into the study between July 2020 and July 2021 at Hammersmith Hospital in London, United Kingdom. Setting: This was a single-centre study based at a tertiary hospital. Participants: Tissue specimens were included for patients undergoing elective surgery for either oesophageal carcinoma (adenocarcinoma or squamous cell carcinoma) or gastric adenocarcinoma. Exposure: A hand-held diffuse reflectance spectroscopy probe and tracking system was used on freshly resected ex vivo tissue to obtain spectral data. Binary classification, following histopathological validation, was performed using four supervised machine learning classifiers. Main Outcomes and Measures: Data were divided into training and testing sets using a stratified 5-fold cross-validation method. Machine learning classifiers were evaluated in terms of sensitivity, specificity, overall accuracy, and the area under the curve. Results: A total of 14,097 mean spectra for normal and cancerous tissue were collected from 37 patients. The machine learning classifier achieved an overall normal versus cancer diagnostic accuracy of 93.86±0.66 for stomach tissue and 96.22±0.50 for oesophageal tissue, and sensitivity and specificity of 91.31% and 95.13% for stomach and 94.60% and 97.28% for oesophagus, respectively. Real-time tissue tracking and classification was achieved.
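
As a rough illustration of the evaluation protocol described in this abstract (stratified 5-fold cross-validation of a supervised classifier scored by accuracy, AUC, sensitivity and specificity), the sketch below uses scikit-learn with placeholder spectra and an SVM; the data shapes and classifier choice are assumptions for illustration, not the published pipeline.

```python
# Illustrative sketch (not the published pipeline): stratified 5-fold
# cross-validation of a supervised classifier on labelled spectra.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score, recall_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 100))   # placeholder: 500 mean spectra x 100 wavelengths
y = rng.integers(0, 2, size=500)  # placeholder labels: 0 = normal, 1 = tumour

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

accs, aucs, sens, spec = [], [], [], []
for train_idx, test_idx in cv.split(X, y):
    clf.fit(X[train_idx], y[train_idx])
    proba = clf.predict_proba(X[test_idx])[:, 1]
    pred = (proba >= 0.5).astype(int)
    accs.append((pred == y[test_idx]).mean())
    aucs.append(roc_auc_score(y[test_idx], proba))
    sens.append(recall_score(y[test_idx], pred, pos_label=1))  # sensitivity
    spec.append(recall_score(y[test_idx], pred, pos_label=0))  # specificity

print(f"accuracy {np.mean(accs):.3f} ± {np.std(accs):.3f}, AUC {np.mean(aucs):.3f}")
print(f"sensitivity {np.mean(sens):.3f}, specificity {np.mean(spec):.3f}")
```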

Journal article

Leiloglou M, Kedrzycki M, Chalau V, Chiarini N, Thiruchelvam P, Hadjiminas D, Hogben K, Rashid F, Ramakrishnan R, Darzi A, Leff D, Elson D, et al., 2022, Indocyanine green fluorescence image processing techniques for breast cancer macroscopic demarcation, Scientific Reports, Vol: 12, ISSN: 2045-2322

Re-operation due to disease being inadvertently close to the resection margin is a major challenge in breast conserving surgery (BCS). Indocyanine green (ICG) fluorescence imaging could be used to visualize the tumor boundaries and help surgeons resect disease more efficiently. In this work, ICG fluorescence and color images were acquired with a custom-built camera system from 40 patients treated with BCS. Images were acquired from the tumor in-situ, the surgical cavity post-excision, the freshly excised tumor and the histopathology tumour grossing. Fluorescence image intensity and texture were used as individual or combined predictors in both logistic regression (LR) and support vector machine models to predict the tumor extent. ICG fluorescence spectra in formalin-fixed histopathology grossing tumor were acquired and analyzed. Our results showed that ICG remains in the tissue after formalin fixation. Therefore, tissue imaging could be validated in freshly excised and in formalin-fixed grossing tumor. The trained LR model with combined fluorescence intensity (pixel values) and texture (slope of the power spectral density curve) identified the tumor's extent in the grossing images with pixel-level resolution, and with sensitivity and specificity of 0.75 ± 0.3 and 0.89 ± 0.2, respectively. This model was then applied to tumor in-situ and surgical cavity (post-excision) images to predict tumor presence.
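
The intensity-plus-texture logistic regression described above can be sketched as follows; treating the texture feature as the log-log slope of the radially averaged power spectral density is one plausible reading, and the patch data and shapes are placeholders rather than the published model.

```python
# Illustrative sketch (not the published model): logistic regression on two
# per-patch features, mean intensity and the log-log slope of the radially
# averaged power spectral density (a simple texture descriptor).
import numpy as np
from sklearn.linear_model import LogisticRegression

def psd_slope(patch):
    """Log-log slope of the radially averaged power spectrum of a 2-D patch."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(patch - patch.mean()))) ** 2
    cy, cx = np.array(power.shape) // 2
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    valid = (counts > 0) & (np.arange(len(counts)) > 0)   # skip empty bins and DC
    radial = sums[valid] / counts[valid]
    freqs = np.nonzero(valid)[0]
    slope, _ = np.polyfit(np.log(freqs), np.log(radial + 1e-12), 1)
    return slope

rng = np.random.default_rng(1)
patches = rng.random((200, 32, 32))       # placeholder fluorescence patches
labels = rng.integers(0, 2, size=200)     # placeholder tumour / background labels

features = np.array([[p.mean(), psd_slope(p)] for p in patches])
model = LogisticRegression().fit(features, labels)
print(model.predict_proba(features[:5]))
```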

Journal article

Xu C, Huang B, Elson DS, 2022, Self-Supervised Monocular Depth Estimation With 3-D Displacement Module for Laparoscopic Images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 331-334

Journal article

Huang B, Nguyen A, Wang S, Wang Z, Mayer E, Tuch D, Vyas K, Giannarou S, Elson DS, et al., 2022, Simultaneous depth estimation and surgical tool segmentation in laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 335-338, ISSN: 2576-3202

Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.
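
A minimal sketch of the kind of weighted multi-task objective the abstract describes — an unsupervised photometric term for depth plus a supervised cross-entropy term for tool segmentation — is shown below; the weights, tensor shapes and loss form are assumptions for illustration, not the published loss function.

```python
# Illustrative sketch (not the published loss): a weighted multi-task
# objective combining an unsupervised photometric term for depth with a
# supervised cross-entropy term for tool segmentation.
import torch
import torch.nn.functional as F

def multitask_loss(reconstructed, target, seg_logits, seg_labels,
                   w_photo=1.0, w_seg=0.5):
    # Photometric term: L1 difference between the view synthesised via the
    # predicted depth and the real target frame (needs no depth ground truth).
    photometric = (reconstructed - target).abs().mean()
    # Segmentation term: cross-entropy against (semi-)ground-truth tool masks.
    segmentation = F.cross_entropy(seg_logits, seg_labels)
    return w_photo * photometric + w_seg * segmentation

# Toy tensors standing in for network outputs on a 2-image batch.
B, C, H, W = 2, 3, 64, 80
recon = torch.rand(B, C, H, W, requires_grad=True)
target = torch.rand(B, C, H, W)
seg_logits = torch.randn(B, 2, H, W, requires_grad=True)   # background / tool
seg_labels = torch.randint(0, 2, (B, H, W))

loss = multitask_loss(recon, target, seg_logits, seg_labels)
loss.backward()
print(float(loss))
```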

Journal article

Kong L, Evans C, Su L, Elson DS, Wei X, et al., 2022, Special issue on translational biophotonics, Journal of Physics D: Applied Physics, Vol: 55, ISSN: 0022-3727

This special issue on 'Translational Biophotonics' was initiated when COVID-19 started to spread worldwide in early 2020, with the aim of introducing the advances in optical tools that have the ability to transform clinical diagnostics, surgical guidance, and therapeutic approaches that together can have a profound impact on global health. This issue achieves this goal comprehensively, covering various topics including optical techniques for clinical diagnostics, monitoring and treatment, in addition to fundamental studies in biomedicine.

Journal article

Shen Y, Chen B, He C, He H, Guo J, Wu J, Elson DS, Ma H, et al., 2022, Polarization Aberrations in High-Numerical-Aperture Lens Systems and Their Effects on Vectorial-Information Sensing, Remote Sensing, Vol: 14

Journal article

Wang D, Qi J, Huang B, Noble E, Stoyanov D, Gao J, Elson DS, et al., 2022, Polarization-based smoke removal method for surgical images, Biomedical Optics Express, Vol: 13, Pages: 2364-2364, ISSN: 2156-7085

Smoke generated during surgery affects tissue visibility and degrades image quality, affecting surgical decisions and limiting further image processing and analysis. Polarization is a fundamental property of light and polarization-resolved imaging has been studied and applied to general visibility restoration scenarios such as for smog or mist removal or in underwater environments. However, there is no related research or application for surgical smoke removal. Due to differences between surgical smoke and general haze scenarios, we propose an alternative imaging degradation model by redefining the form of the transmission parameters. The analysis of the propagation of polarized light interacting with the mixed medium of smoke and tissue is proposed to realize polarization-based smoke removal (visibility restoration). Theoretical analysis and observation of experimental data shows that the cross-polarized channel data generated by multiple scattering is less affected by smoke compared to the co-polarized channel. The polarization difference calculation for different color channels can estimate the model transmission parameters and reconstruct the image with restored visibility. Qualitative and quantitative comparison with alternative methods show that the polarization-based image smoke-removal method can effectively reduce the degradation of biomedical images caused by surgical smoke and partially restore the original degree of polarization of the samples.
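
For context, the classical polarization-difference dehazing idea that this work adapts can be sketched per colour channel as below; the degree of polarization, scattered-light intensity and clipping bounds are placeholder values, and this generic haze model is background context, not the modified degradation model proposed in the paper.

```python
# Schematic sketch of classical polarization-difference dehazing applied per
# colour channel, using the generic haze model I = L*t + A_inf*(1 - t).
import numpy as np

def dehaze_polarization(i_co, i_cross, p_scatter=0.4, a_inf=1.0, t_min=0.1):
    """Estimate transmission from the co/cross-polarized difference and
    invert the haze model for one colour channel. p_scatter and a_inf
    (degree of polarization and intensity of the scattered light) would
    normally be estimated from a smoke-only image region."""
    total = i_co + i_cross                                 # total intensity
    scattered = (i_co - i_cross) / max(p_scatter, 1e-6)    # estimate of A_inf * (1 - t)
    t = np.clip(1.0 - scattered / a_inf, t_min, 1.0)       # transmission estimate
    return np.clip((total - scattered) / t, 0.0, None)     # recovered radiance

rng = np.random.default_rng(2)
co = rng.random((120, 160))        # placeholder co-polarized channel
cross = 0.8 * co                   # placeholder cross-polarized channel
print(dehaze_polarization(co, cross).mean())
```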

Journal article

Shanthakumar D, Elson D, Darzi A, Leff D, et al., 2022, Tissue optical imaging as an emerging technique for intraoperative margin assessment in breast-conserving surgery, Publisher: Springer, Pages: 153-154, ISSN: 1068-9265

Conference paper

Han J, Davids J, Ashrafian H, Darzi A, Elson DS, Sodergren M, et al., 2022, A systematic review of robotic surgery: From supervised paradigms to fully autonomous robotic approaches, International Journal of Medical Robotics and Computer Assisted Surgery, Vol: 18, Pages: 1-11, ISSN: 1478-5951

Background: From traditional open surgery to laparoscopic surgery and robot-assisted surgery, advances in robotics, machine learning, and imaging are pushing the surgical approach towards better clinical outcomes. Pre-clinical and clinical evidence suggests that automation may standardise techniques, increase efficiency, and reduce clinical complications. Methods: A PRISMA-guided search was conducted across PubMed and OVID. Results: Of the 89 screened articles, 51 met the inclusion criteria, with 10 included in the final review. Automatic data segmentation, trajectory planning, intra-operative registration, trajectory drilling, and soft tissue robotic surgery were discussed. Conclusion: Although fully automated surgical systems remain conceptual, several research groups have developed supervised autonomous robotic surgical systems, with increasing consideration of the ethico-legal issues of automation. Automation paves the way for precision surgery and improved safety, and opens new possibilities for deploying more robust artificial intelligence models, better imaging modalities and robotics to improve clinical outcomes.

Journal article

He C, Chang J, Salter PS, Shen Y, Dai B, Li P, Jin Y, Thodika SC, Li M, Tariq A, Wang J, Antonello J, Dong Y, Qi J, Lin J, Elson DS, Zhang M, He H, Ma H, Booth MJ, et al., 2022, Revealing complex optical phenomena through vectorial metrics, Advanced Photonics Research, Vol: 4, Pages: 1-9, ISSN: 2699-9293

Advances in vectorial polarization-resolved imaging are bringing new capabilities to applications ranging from fundamental physics through to clinical diagnosis. Imaging polarimetry requires determination of the Mueller matrix (MM) at every point, providing a complete description of an object’s vectorial properties. Despite forming a comprehensive representation, the MM does not usually provide easily interpretable information about the object’s internal structure. Certain simpler vectorial metrics are derived from subsets of the MM elements. These metrics permit extraction of signatures that provide direct indicators of hidden optical properties of complex systems, while featuring an intriguing asymmetry about what information can or cannot be inferred via these metrics. We harness such characteristics to reveal the spin Hall effect of light, infer microscopic structure within laser-written photonic waveguides, and conduct rapid pathological diagnosis through analysis of healthy and cancerous tissue. This provides new insight for the broader usage of such asymmetric inferred vectorial information.

Journal article

Gkouzionis I, Nazarian S, Kawka M, Darzi A, Patel N, Peters C, Elson D, et al., 2022, Real-time tracking of a diffuse reflectance spectroscopy probe used to aid histological validation of margin assessment in upper gastrointestinal cancer resection surgery, Journal of Biomedical Optics, Vol: 27, ISSN: 1083-3668

Significance: Diffuse reflectance spectroscopy (DRS) allows discrimination of tissue type. Its application is limited by the inability to mark the scanned tissue and the lack of real-time measurements. Aim: This study aimed to develop a real-time tracking system to enable localization of a DRS probe to aid the classification of tumor and non-tumor tissue. Approach: A green-colored marker attached to the DRS probe was detected using hue-saturation-value (HSV) segmentation. A live, augmented view of tracked optical biopsy sites was recorded in real time. Supervised classifiers were evaluated in terms of sensitivity, specificity, and overall accuracy. Software was developed for data collection, processing, and statistical analysis. Results: The measured root mean square error (RMSE) of DRS probe tip tracking was 1.18 ± 0.58 mm and 1.05 ± 0.28 mm for the x and y dimensions, respectively. The diagnostic accuracy of the system in classifying tumor and non-tumor tissue in real time was 94% for the stomach and 96% for the esophagus. Conclusions: We have successfully developed a real-time tracking and classification system for a DRS probe. When used on stomach and esophageal tissue for tumor detection, the accuracy derived demonstrates the strength and clinical value of the technique to aid margin assessment in cancer resection surgery.
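
The HSV-based marker detection step described in the approach could look roughly like the following OpenCV sketch; the hue/saturation/value thresholds and the synthetic test frame are placeholders, not the study's software.

```python
# Minimal sketch (not the study's software): HSV thresholding of a green
# marker and recovery of its centroid in image coordinates with OpenCV.
import cv2
import numpy as np

def track_green_marker(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Rough green range on OpenCV's 0-179 hue scale; placeholder values to tune.
    mask = cv2.inRange(hsv, (40, 60, 60), (85, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    m = cv2.moments(mask)
    if m["m00"] == 0:
        return None                                        # marker not visible
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])      # (x, y) centroid

frame = np.zeros((480, 640, 3), np.uint8)
cv2.circle(frame, (300, 200), 20, (0, 255, 0), -1)          # synthetic green blob
print(track_green_marker(frame))
```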

Journal article

Nazarian S, Gkouzionis I, Kawka M, Patel N, Darzi A, Elson D, Peters C, et al., 2021, Real-time tracking and classification of tumour and non-tumour tissue in upper gastrointestinal cancer specimens using diffuse reflectance spectroscopy, UGI Congress 2021, ISSN: 0007-1323

Conference paper

Gkouzionis I, Nazarian S, Anandakumar A, Darzi A, Patel N, Peters C, Elson DS, et al., 2021, Using diffuse reflectance spectroscopy probe tracking to identify non-tumour and tumour tissue in upper gastrointestinal specimens, Translational Biophotonics: Diagnostics and Therapeutics, Publisher: SPIE

A diffuse reflectance spectroscopy probe and tracking system was used successfully in real time for automated tissue classification in upper gastrointestinal surgery, to aid resection margin assessment.

Conference paper

Cartucho J, Wang C, Huang B, Elson DS, Darzi A, Giannarou S, et al., 2021, An enhanced marker pattern that achieves improved accuracy in surgical tool tracking, Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization, Vol: 10, Pages: 1-9, ISSN: 2168-1163

In computer assisted interventions (CAI), surgical tool tracking is crucial for applications such as surgical navigation, surgical skill assessment, visual servoing, and augmented reality. Tracking of cylindrical surgical tools can be achieved by printing and attaching a marker to their shaft. However, the tracking error of existing cylindrical markers is still in the millimetre range, which is too large for applications such as neurosurgery requiring sub-millimetre accuracy. To achieve tool tracking with sub-millimetre accuracy, we designed an enhanced marker pattern, which is captured on images from a monocular laparoscopic camera. The images are used as input for a tracking method which is described in this paper. Our tracking method was compared to the state-of-the-art, on simulation and ex vivo experiments. This comparison shows that our method outperforms the current state-of-the-art. Our marker achieves a mean absolute error of 0.28 [mm] and 0.45 [°] on ex vivo data, and 0.47 [mm] and 1.46 [°] on simulation. Our tracking method is real-time and runs at 55 frames per second for 720×576 image resolution.

Journal article

Teh JJ, Cai W, Kedrzycki M, Thiruchelvam P, Leff D, Elson D, et al., 2021, 392 Magseed-guided wide local excision during the COVID-19 pandemic: a tenable solution to barriers in accessing elective breast cancer surgery, Association of Surgeons in Training, Publisher: British Journal of Surgery Society, ISSN: 0007-1323

Conference paper

Perrott C, Patil A, Elson D, Peters C, et al., 2021, Novel methods of detecting tumour margins in gastrointestinal cancer surgery, 2021 Association of Surgeons in Training International Surgical Conference, Publisher: British Journal of Surgery Society, Pages: 1-1, ISSN: 0007-1323

Aim: Gastrointestinal (GI) cancers account for 26% of global cancer incidence, with prevalence projected to rise exponentially due to the ageing population and lifestyle choices. Surgical resection is the mainstay of treatment, aiming to remove the cancer in its entirety to achieve an R0 resection. Positive margins, where cancerous tissue has been left in situ, are associated with increased morbidity and mortality. Current margin assessment involves histopathological analysis after resection of the specimen. Diffuse Reflectance Spectroscopy (DRS) and Hyperspectral Imaging (HSI) are novel imaging techniques that have the potential to provide real-time assessment of cancer margins intra-operatively, to reduce the incidence of positive resection margins and improve patient outcomes. The aim of this review is to assess the current state of evidence for the use of novel imaging techniques in GI cancer margin assessment. Method: A literature review was conducted of studies using DRS and HSI in GI cancers in adult patients, published from inception to October 2020. Results: A total of 15 studies were analysed, nine of which used DRS and six of which used HSI; the majority of studies were performed ex vivo. Current image acquisition techniques and processing algorithms vary greatly. The sensitivity and specificity of DRS ranged from 0.90-0.98 and 0.88-0.95, respectively, and for HSI from 0.63-0.98 and 0.69-0.98, across five types of GI cancer. Conclusions: DRS and HSI are novel imaging techniques, currently in their infancy, but the outlook is promising. With further research focused on standardising methodology and in-vivo settings, DRS and HSI could transform intra-operative margin assessment in GI cancers.

Conference paper

Nazarian S, Gkouzionis I, Anandakumar A, Patel N, Elson D, Peters C, et al., 2021, Using diffuse reflectance spectroscopy (DRS) to identify tumour and non-tumour tissue in upper gastrointestinal specimens, Association of Surgeons of Great Britain and Ireland Virtual Congress, Publisher: British Journal of Surgery Society, Pages: 41-41, ISSN: 0007-1323

Aim: Cancers of the upper gastrointestinal (GI) tract remain a major contributor to the global cancer burden. Surgery aims to completely resect the tumour with clear margins, whilst preserving as much surrounding tissue as possible. Diffuse reflectance spectroscopy (DRS) is a novel technique that presents a promising advancement in cancer diagnosis. We have developed a novel DRS system with tracking capability. Our aim is to classify tumour and non-tumour GI tissue in real time using this device to aid intra-operative analysis of resection margins. Method: An ex-vivo study was undertaken in which data were collected from consecutive patients undergoing upper GI cancer resection surgery between August 2020 and January 2021. A hand-held DRS probe and tracking system was used on normal and cancerous tissue to obtain spectral information. After acquisition of all spectra, a classification system using histopathology results was created. A user interface was developed using Python 3.6 and Qt5. A support vector machine was used to classify the results. Results: The data included 4974 normal spectra and 2108 tumour spectra. The overall accuracy of the DRS probe in differentiating normal versus tumour tissue was 88.08% for the stomach (sensitivity 84.8%, specificity 89.3%), and 91.42% for the oesophagus (sensitivity 76.3%, specificity 98.9%). Conclusion: We have developed a successful DRS system with tracking capability, able to process thousands of spectra in a short timeframe, which can be used in real time to distinguish tumour and non-tumour tissue. This can be used for intra-operative decision-making during upper GI cancer surgery to help select the best resection plane.

Conference paper

Huang B, Zheng J-Q, Nguyen A, Tuch D, Vyas K, Giannarou S, Elson DS, et al., 2021, Self-supervised generative adversarial network for depth estimation in laparoscopic images, International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer, Pages: 227-237

Dense depth estimation and 3D reconstruction of a surgical scene are crucial steps in computer assisted surgery. Recent work has shown that depth estimation from a stereo image pair could be solved with convolutional neural networks. However, most recent depth estimation models were trained on datasets with per-pixel ground truth. Such data is especially rare for laparoscopic imaging, making it hard to apply supervised depth estimation to real surgical applications. To overcome this limitation, we propose SADepth, a new self-supervised depth estimation method based on Generative Adversarial Networks. It consists of an encoder-decoder generator and a discriminator to incorporate geometry constraints during training. Multi-scale outputs from the generator help to solve the local minima caused by the photometric reprojection loss, while the adversarial learning improves the framework generation quality. Extensive experiments on two public datasets show that SADepth outperforms recent state-of-the-art unsupervised methods by a large margin, and reduces the gap between supervised and unsupervised depth estimation in laparoscopic images.

Conference paper

Ahmad OF, Mori Y, Misawa M, Kudo S-E, Anderson JT, Bernal J, Berzin TM, Bisschops R, Byrne MF, Chen P-J, East J, Eelbode T, Elson DS, Gurudu S, Histace A, Karnes WE, Repici A, Singh R, Valdastri P, Wallace MB, Wang P, Stoyanov D, Lovat LB, et al., 2021, Establishing key research questions for the implementation of artificial intelligence in colonoscopy - a modified Delphi method, Endoscopy, Vol: 53, Pages: 893-901, ISSN: 0013-726X

Background and Aims: Artificial intelligence (AI) research in colonoscopy is progressing rapidly, but widespread clinical implementation is not yet a reality. We aimed to identify the top implementation research priorities. Methods: An established modified Delphi approach for research priority setting was used. Fifteen international experts, including endoscopists and translational computer scientists/engineers from 9 countries, participated in an online survey over 9 months. Questions related to AI implementation in colonoscopy were generated as a long-list in the first round, and then scored in two subsequent rounds to identify the top 10 research questions. Results: The top 10 ranked questions were categorised into 5 themes. Theme 1: Clinical trial design/end points (4 questions), related to optimum trial designs for polyp detection and characterisation, determining the optimal end-points for evaluation of AI, and demonstrating impact on interval cancer rates. Theme 2: Technological developments (3 questions), including improving detection of more challenging and advanced lesions, reduction of false positive rates, and minimising latency. Theme 3: Clinical adoption/integration (1 question), concerning effective combination of detection and characterisation into one workflow. Theme 4: Data access/annotation (1 question), concerning more efficient or automated data annotation methods to reduce the burden on human experts. Theme 5: Regulatory approval (1 question), related to making regulatory approval processes more efficient. Conclusions: This is the first reported international research priority setting exercise for AI in colonoscopy. The study findings should be used as a framework to guide future research with key stakeholders to accelerate the clinical implementation of AI in endoscopy.

Journal article

Kedrzycki MS, Leiloglou M, Chalau V, Chiarini N, Thiruchelvam PTR, Hadjiminas DJ, Hogben KR, Rashid F, Ramakrishnan R, Darzi AW, Elson DS, Leff DR, et al., 2021, ASO visual abstract: the impact of temporal variation in indocyanine green administration on tumor identification during fluorescence-guided breast surgery, Annals of Surgical Oncology, Vol: 28, Pages: 650-651, ISSN: 1068-9265

Journal article

Leiloglou M, Kedrzycki MS, Elson DS, Leff DR, et al., 2021, ASO author reflections: towards fluorescence guided tumor identification for precision breast conserving surgery, Annals of Surgical Oncology, ISSN: 1068-9265

Journal article

Kedrzycki MS, Leiloglou M, Chalau V, Chiarini N, Thiruchelvam PTR, Hadjiminas DJ, Hogben KR, Rashid F, Ramakrishnan R, Darzi AW, Elson DS, Leff DR, et al., 2021, The impact of temporal variation in indocyanine green administration on tumor identification during fluorescence guided breast surgery, Annals of Surgical Oncology, Vol: 28, Pages: 5617-5625, ISSN: 1068-9265

BACKGROUND: On average, 21% of women in the USA treated with Breast Conserving Surgery (BCS) undergo a second operation because of close positive margins. Tumor identification with fluorescence imaging could improve positive margin rates through demarcating location, size, and invasiveness of tumors. We investigated the technique's diagnostic accuracy in detecting tumors during BCS using intravenous indocyanine green (ICG) and a custom-built fluorescence camera system. METHODS: In this single-center prospective clinical study, 40 recruited BCS patients were sub-categorized into two cohorts. In the first 'enhanced permeability and retention' (EPR) cohort, 0.25 mg/kg ICG was injected ~ 25 min prior to tumor excision, and in the second 'angiography' cohort, ~ 5 min prior to tumor excision. Subsequently, an in-house imaging system was used to image the tumor in situ prior to resection, ex vivo following resection, the resection bed, and during grossing in the histopathology laboratory to compare the technique's diagnostic accuracy between the cohorts. RESULTS: The two cohorts were matched in patient and tumor characteristics. The majority of patients had invasive ductal carcinoma with concomitant ductal carcinoma in situ. Tumor-to-background ratio (TBR) in the angiography cohort was superior to the EPR cohort (TBR = 3.18 ± 1.74 vs 2.10 ± 0.92 respectively, p = 0.023). Tumor detection reached sensitivity and specificity scores of 0.82 and 0.93 for the angiography cohort and 0.66 and 0.90 for the EPR cohort, respectively (p = 0.1051 and p = 0.9099). DISCUSSION: ICG administration timing during the angiography phase compared with the EPR phase improved TBR and diagnostic accuracy. Future work will focus on image pattern analysis and adaptation of the camera system to targeting fluorophores specific to breast cancer.

Journal article

Kedrzycki MS, Leiloglou M, Ashrafian H, Jiwa N, Thiruchelvam PTR, Elson DS, Leff DR, et al., 2021, Meta-analysis comparing fluorescence imaging with radioisotope and blue dye-guided sentinel node identification for breast cancer surgery, Annals of Surgical Oncology, Vol: 28, Pages: 3738-3748, ISSN: 1068-9265

INTRODUCTION: Conventional methods for axillary sentinel lymph node biopsy (SLNB) are fraught with complications such as allergic reactions, skin tattooing, radiation, and limitations on infrastructure. A novel technique has been developed for lymphatic mapping utilizing fluorescence imaging. This meta-analysis aims to compare the gold standard blue dye and radioisotope (BD-RI) technique with fluorescence-guided SLNB using indocyanine green (ICG). METHODS: This study was registered with PROSPERO (CRD42019129224). The MEDLINE, EMBASE, Scopus, and Web of Science databases were searched using the Medical Subject Heading (MESH) terms 'Surgery' AND 'Lymph node' AND 'Near infrared fluorescence' AND 'Indocyanine green'. Studies containing raw data on the sentinel node identification rate in breast cancer surgery were included. A heterogeneity test (using Cochran's Q) determined the use of fixed- or random-effects models for pooled odds ratios (OR). RESULTS: Overall, 1748 studies were screened, of which 10 met the inclusion criteria for meta-analysis. ICG was equivalent to radioisotope (RI) at sentinel node identification (OR 2.58, 95% confidence interval [CI] 0.35-19.08, p < 0.05) but superior to blue dye (BD) (OR 9.07, 95% CI 6.73-12.23, p < 0.05). Furthermore, ICG was superior to the gold standard BD-RI technique (OR 4.22, 95% CI 2.17-8.20, p < 0.001). CONCLUSION: Fluorescence imaging for axillary sentinel node identification with ICG is equivalent to the single technique using RI, and superior to the dual technique (RI-BD) and single technique with BD. Hospitals using RI and/or BD could consider changing their practice to ICG given the comparable efficacy and improved safety profile, as well as the lesser burden on hospital infrastructure.
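
As a rough illustration of the pooling approach described (inverse-variance fixed-effect odds ratios, Cochran's Q for heterogeneity, and a DerSimonian-Laird random-effects fallback), the sketch below uses made-up 2×2 counts; none of the numbers correspond to the studies in this meta-analysis.

```python
# Illustrative sketch of odds-ratio pooling: inverse-variance fixed-effect
# weights, Cochran's Q for heterogeneity, and a DerSimonian-Laird
# random-effects fallback. Counts are invented, not the reviewed studies.
import numpy as np
from scipy import stats

# Each row: detections/total with ICG, detections/total with the comparator.
studies = np.array([[48, 50, 45, 50],
                    [95, 100, 90, 100],
                    [60, 62, 55, 62]], dtype=float)

a, n1, c, n2 = studies.T
b, d = n1 - a, n2 - c
log_or = np.log((a * d) / (b * c))            # per-study log odds ratio
var = 1 / a + 1 / b + 1 / c + 1 / d           # Woolf variance of the log OR

w = 1 / var                                   # fixed-effect (inverse-variance) weights
pooled_fixed = np.sum(w * log_or) / np.sum(w)
q = np.sum(w * (log_or - pooled_fixed) ** 2)  # Cochran's Q statistic
p_het = 1 - stats.chi2.cdf(q, df=len(log_or) - 1)

# DerSimonian-Laird between-study variance and random-effects pooling.
tau2 = max(0.0, (q - (len(log_or) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
w_re = 1 / (var + tau2)
pooled = np.sum(w_re * log_or) / np.sum(w_re) if p_het < 0.10 else pooled_fixed
print(f"Q = {q:.2f} (p = {p_het:.3f}), pooled OR = {np.exp(pooled):.2f}")
```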

Journal article

Lin J, Clancy NT, Qi J, Hu Y, Tatla T, Stoyanov D, Maier-Hein L, Elson DS, et al., 2021, Corrigendum to Dual-modality endoscopic probe for tissue surface shape reconstruction and hyperspectral imaging enabled by deep neural networks [Medical Image Analysis 48 (2018) 162-176/2018.06.004], Medical Image Analysis, Vol: 72, Pages: 1-1, ISSN: 1361-8415

The first version of this article neglected to mention that this work was additionally supported by ERC award 637960. This has now been corrected online. The authors would like to apologise for any inconvenience caused.

Journal article

Leiloglou M, Chalau V, Kedrzycki MS, Thiruchelvam P, Darzi A, Leff DR, Elson DS, et al., 2021, Tissue texture extraction in indocyanine green fluorescence imaging for breast-conserving surgery, Journal of Physics D: Applied Physics, Vol: 54, ISSN: 0022-3727

A two-camera fluorescence system for indocyanine green (ICG) signal detection has been developed and tested in a clinical feasibility trial of ten patients, with submillimetre-scale resolution. Immediately after systemic ICG injection, the two-camera system can detect ICG signals in vivo (~2.5 mg l⁻¹, or 3.2 × 10⁻⁶ M). Qualitative assessment has shown that the fluorescence signal does not always correlate with the cancer location in the surgical scene. Conversely, fluorescence image texture metrics, when used with a logistic regression model, yield good accuracy scores in detecting cancer. We have demonstrated that intraoperative fluorescence imaging for resection guidance is a feasible solution to tackle the current challenge of positive resection margins in breast conserving surgery.

Journal article

Kedrzycki MS, Leiloglou M, Chalau V, Lin J, Thiruchelvam PTR, Elson DS, Leff DR, et al., 2021, Guiding light to optimize wide local excisions: the "GLOW" study, Volume XXII 2021 Annual Meeting Scientific Session, Publisher: Springer, Pages: S199-S200, ISSN: 1068-9265

Conference paper

Kedrzycki M, Leiloglou M, Thiruchelvam P, Elson D, Leff D, et al., 2021, P051. Fluorescence guided surgery in breast cancer: A systematic review of the literature, Association of Breast Surgery Conference 2021, Publisher: Elsevier, Pages: e309-e309, ISSN: 0748-7983

Conference paper

Kedrzycki M, Teh J, Cai W, Ezzat A, Thiruchelvam P, Elson D, Leff D, et al., 2021, P053. Prospective single-centre qualitative service evaluation on Magseed for wide local excision, Association of Breast Surgery Conference 2021, Publisher: Elsevier, Pages: e310-e310, ISSN: 0748-7983

Conference paper

Kedrzycki M, Leiloglou M, Leff D, Elson D, Chalau V, Thiruchelvam P, Darzi A, et al., 2021, Versatility in fluorescence guided surgery with the GLOW camera system, Surgical Life: The Journal of the Association of Surgeons of Great Britain and Ireland, Vol: 59

Journal article

Collins JW, Marcus HJ, Ghazi A, Sridhar A, Hashimoto D, Hager G, Arezzo A, Jannin P, Maier-Hein L, Marz K, Valdastri P, Mori K, Elson D, Giannarou S, Slack M, Hares L, Beaulieu Y, Levy J, Laplante G, Ramadorai A, Jarc A, Andrews B, Garcia P, Neemuchwala H, Andrusaite A, Kimpe T, Hawkes D, Kelly JD, Stoyanov D, et al., 2021, Ethical implications of AI in robotic surgical training: A Delphi consensus statement, European Urology Focus, ISSN: 2405-4569

Context: As the role of AI in healthcare continues to expand, there is increasing awareness of the potential pitfalls of AI and the need for guidance to avoid them. Objectives: To provide ethical guidance on developing narrow AI applications for surgical training curricula. We define standardised approaches to developing AI-driven applications in surgical training that address currently recognised ethical implications of utilising AI on surgical data. We aim to describe an ethical approach based on the current evidence, understanding of AI, and available technologies, by seeking consensus from an expert committee. Evidence acquisition: The project was carried out in 3 phases: (1) a steering group was formed to review the literature and summarise current evidence; (2) a larger expert panel convened and discussed the ethical implications of AI application based on the current evidence, and a survey was created with input from panel members; (3) panel-based consensus findings were determined using an online Delphi process to formulate guidance. Thirty experts in AI implementation and/or training, including clinicians, academics, and industry representatives, contributed. The Delphi process underwent 3 rounds. Additions to the second- and third-round surveys were formulated based on the answers and comments from previous rounds. Consensus opinion was defined as ≥ 80% agreement. Evidence synthesis: There was a 100% response rate across all 3 rounds. The resulting guidance showed good internal consistency, with a Cronbach alpha of >0.8. There was 100% consensus that there is currently a lack of guidance on the utilisation of AI in the setting of robotic surgical training. Consensus was reached in multiple areas, including: 1. data protection and privacy; 2. reproducibility and transparency; 3. predictive analytics; 4. inherent biases; 5. areas of training most likely to benefit from AI. Conclusions: Using the Delphi methodology, we achieved international consensus among experts to develop and reach

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
