Imperial College London

Professor Daniel Elson

Faculty of Medicine, Department of Surgery & Cancer

Professor of Surgical Imaging
 
 
 

Contact

 

+44 (0)20 7594 1700 | daniel.elson

 
 

Location

 

415 Bessemer Building, South Kensington Campus


Summary

 

Publications


505 results found

Wang C, Cartucho J, Elson D, Darzi A, Giannarou S et al., 2022, Towards autonomous control of surgical instruments using adaptive-fusion tracking and robot self-calibration, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2395-2401, ISSN: 2153-0858

The ability to track surgical instruments in real time is crucial for autonomous Robotic Assisted Surgery (RAS). Recently, the fusion of visual and kinematic data has been proposed to track surgical instruments. However, these methods assume that both sensors are equally reliable and cannot successfully handle cases where there are significant perturbations in one of the sensors' data. In this paper, we address this problem by proposing an enhanced fusion-based method. The main advantage of our method is that it can adjust fusion weights to adapt to sensor perturbations and failures. Another problem is that, before performing an autonomous task, these robots have to be repeatedly recalibrated by a human for each new patient to estimate the transformations between the different robotic arms. To address this problem, we propose a self-calibration algorithm that empowers the robot to calibrate the transformations by itself at the beginning of the surgery. We applied our fusion and self-calibration algorithms to autonomous ultrasound tissue scanning and showed that the robot achieved stable ultrasound imaging when using our method. Our performance evaluation shows that our proposed method outperforms the state of the art in both normal and challenging situations.
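The adaptive-weighting idea in the abstract above can be illustrated with a toy sketch (not the authors' implementation): each sensor stream is weighted by the inverse of its recent variance, so a perturbed sensor is automatically down-weighted. The function name and the windowed-variance heuristic are assumptions for illustration only.

```python
import numpy as np

def adaptive_fusion(visual, kinematic, window=10):
    """Fuse two 1-D streams of position estimates, weighting each
    sensor by the inverse of its recent variance.

    A perturbed sensor (high local variance) is automatically
    down-weighted, so the fused output tracks the reliable one.
    """
    visual = np.asarray(visual, dtype=float)
    kinematic = np.asarray(kinematic, dtype=float)
    fused = np.empty_like(visual)
    for t in range(len(visual)):
        lo = max(0, t - window)
        # local variance of each stream acts as a reliability proxy
        var_v = np.var(visual[lo:t + 1]) + 1e-6
        var_k = np.var(kinematic[lo:t + 1]) + 1e-6
        w_v, w_k = 1.0 / var_v, 1.0 / var_k
        fused[t] = (w_v * visual[t] + w_k * kinematic[t]) / (w_v + w_k)
    return fused
```

With a steady kinematic stream and a strongly perturbed visual stream, the fused estimate stays close to the kinematic value, which is the qualitative behaviour the paper targets.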

Conference paper

Saebe A, Wiwatpanit T, Varatthan T, Namporn T, Laungkulldej S, Thiabma R, Jaiboonma A, Sangiamsuntorn K, Elson D, Porter AE, Sathirakul K, Hongeng S, Ruenraroengsak P et al., 2022, Comparative study between the 3D‐liver spheroid models developed from HepG2 and immortalized hepatocyte‐like cells with primary hepatic stellate coculture for drug metabolism analysis and anticancer drug screening, Advanced Therapeutics, Vol: 6, Pages: 1-16, ISSN: 2366-3987

Liver spheroids may be the best alternative models for evaluating the efficacy and toxicity of new anticancer candidates and diagnostics for hepatocellular carcinoma (HCC). Here, novel 3D-liver spheroid models are constructed from human hepatoma cells (HepG2) or immortalized human hepatocyte-like cells (imHCs) with primary hepatic stellate cells (HSCs) in coculture using the ultralow attachment technique. Spheroid morphology, HSC distribution, metabolic activity, protein expression, and drug penetration are evaluated. All developed 3D spheroid models exhibit a spherical shape with a narrow size distribution, with diameters between 639–743 µm (HepG2-10%HSC) and 519–631 µm (imHC-10%HSC). Both imHC mono and coculture models express normal liver biomarkers at significantly higher levels than the HepG2 models, while the 3D-HepG2 models express HCC biomarkers at significantly higher levels than the imHC models. HepG2 and imHC spheroids express basal cytochrome P450 (CYP450) enzymes at different levels depending on cell type, culture period, and coculture ratio. Their metabolic activities for dextromethorphan (CYP2D6), tolbutamide (CYP2C9) and midazolam (CYP3A4) are routinely evaluated. For midazolam metabolism, the imHC models allow the detection of phase II metabolic enzymes (UGT2B4 and UGT2B7). The presence of HSCs in the HepG2-HSC model increases the biological barrier to doxorubicin (DOX) penetration and raises the IC50 of DOX from 61.4 to 127.2 µg mL−1.

Journal article

Leiloglou M, Kedrzycki MS, Elson DS, Leff DR et al., 2022, ASO author reflections: towards fluorescence guided tumor identification for precision breast conserving surgery, Annals of Surgical Oncology, Vol: 29, Pages: 564-565, ISSN: 1068-9265

Journal article

Nazarian S, Gkouzionis I, Kawka M, Jamroziak M, Lloyd J, Darzi A, Patel N, Elson DS, Peters CJ et al., 2022, Real-time tracking and classification of tumour and non-tumour tissue in upper gastrointestinal cancers using diffuse reflectance spectroscopy for resection margin assessment, JAMA Surgery, ISSN: 2168-6254

Importance: Cancers of the upper gastrointestinal tract remain a major contributor to the global cancer burden. The accurate mapping of tumour margins is of particular importance for curative cancer resection and improvement in overall survival. Current mapping techniques preclude a full resection margin assessment in real time.

Objective: We aimed to use diffuse reflectance spectroscopy on gastric and oesophageal cancer specimens to differentiate tissue types and provide real-time feedback to the operator.

Design: This was a prospective ex vivo validation study. Patients undergoing oesophageal or gastric cancer resection were prospectively recruited into the study between July 2020 and July 2021 at Hammersmith Hospital in London, United Kingdom.

Setting: This was a single-centre study based at a tertiary hospital.

Participants: Tissue specimens were included for patients undergoing elective surgery for either oesophageal carcinoma (adenocarcinoma or squamous cell carcinoma) or gastric adenocarcinoma.

Exposure: A hand-held diffuse reflectance spectroscopy probe and tracking system was used on freshly resected ex vivo tissue to obtain spectral data. Binary classification, following histopathological validation, was performed using four supervised machine learning classifiers.

Main Outcomes and Measures: Data were divided into training and testing sets using a stratified 5-fold cross-validation method. Machine learning classifiers were evaluated in terms of sensitivity, specificity, overall accuracy, and the area under the curve.

Results: A total of 14,097 mean spectra for normal and cancerous tissue were collected from 37 patients. The machine learning classifier achieved an overall normal-versus-cancer diagnostic accuracy of 93.86 ± 0.66 for stomach tissue and 96.22 ± 0.50 for oesophageal tissue, and a sensitivity and specificity of 91.31% and 95.13% for stomach and 94.60% and 97.28% for oesophagus, respectively. Real-time tissue tracking and classification was achieved.
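The stratified 5-fold evaluation protocol mentioned in this abstract can be sketched as follows. The nearest-centroid classifier is a deliberately simple stand-in for the paper's four supervised classifiers (which are not named here), and all function names are invented for illustration.

```python
import numpy as np

def stratified_folds(y, k=5, seed=0):
    """Return k index arrays that preserve per-class proportions."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for cls in np.unique(y):
        # shuffle each class, then deal its indices round-robin into folds
        for i, j in enumerate(rng.permutation(np.where(y == cls)[0])):
            folds[i % k].append(j)
    return [np.array(f) for f in folds]

def nearest_centroid_cv(X, y, k=5):
    """Mean accuracy of a nearest-centroid classifier under
    stratified k-fold cross-validation."""
    folds = stratified_folds(y, k)
    accs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        classes = sorted(np.unique(y))
        cents = {c: X[train][y[train] == c].mean(axis=0) for c in classes}
        # distance of each test spectrum to each class centroid
        d = np.stack([np.linalg.norm(X[test] - cents[c], axis=1)
                      for c in classes])
        pred = np.array(classes)[d.argmin(axis=0)]
        accs.append((pred == y[test]).mean())
    return float(np.mean(accs))
```

Stratification matters here because tumour and non-tumour spectra counts differ per patient; without it a fold could contain almost no examples of one class.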

Journal article

Anichini G, Zepeng H, Leiloglu M, Gayo I, Patel N, Syed N, Nandi D, O'Neill K, Elson D et al., 2022, Multispectral analysis for intra-operative characterization of brain tumours, margins of resection, and eloquent areas activation - preliminary results, 27th Annual Scientific Meeting and Education Day of the Society for Neuro-Oncology (SNO), Publisher: Oxford University Press, Pages: 147-148, ISSN: 1522-8517

Conference paper

He C, Lin J, Chang J, Antonello J, Dai B, Wang J, Cui J, Qi J, Wu M, Elson DS, Xi P, Forbes A, Booth MJ et al., 2022, Full Poincaré polarimetry enabled through physical inference, Optica, Vol: 9, Pages: 1109-1109

While polarization sensing is vital in many areas of research, with applications spanning from microscopy to aerospace, traditional approaches are limited by method-related error amplification, accumulation, and pre-processing steps, constraining the performance of single-shot polarimetry. Here, we propose a measurement paradigm that circumvents these limitations, based on the use of a universal full Poincaré generator to map all polarization analyzer states into a single vectorially structured light field. All vector components are analyzed in a single shot, extracting the vectorial state through inference from a physical model of the resulting image, providing a single-step sensing procedure. To demonstrate the feasibility of our approach, we use a common graded index (GRIN) optic as our mapping device and show mean errors of <1% for each vector component. Our work paves the way for next-generation polarimetry, impacting a wide variety of applications that rely on vector measurement.

Journal article

Elson D, Nazarian S, Gkouzionis I, Patel N, Darzi A, Peters C et al., 2022, Real-time Classification of Colorectal Tissue Using Diffuse Reflectance Spectroscopy to Aid Margin Assessment, European Society of Coloproctology Scientific Conference

Conference paper

Huang B, Zheng J-Q, Nguyen A, Xu C, Gkouzionis I, Vyas K, Tuch D, Giannarou S, Elson DS et al., 2022, Self-supervised depth estimation in laparoscopic image using 3D geometric consistency, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: Springer International Publishing, Pages: 13-22, ISSN: 0302-9743

Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging systems. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, it is rarely possible to apply supervised depth estimation to surgical applications. As an alternative, self-supervised methods have been introduced to train depth estimators using only synchronized stereo image pairs. However, most recent work has focused on left-right consistency in 2D and ignored the valuable inherent 3D information on the object in real-world coordinates, meaning that the left-right 3D geometric structural consistency is not fully utilized. To overcome this limitation, we present M3Depth, a self-supervised depth estimator that leverages the 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes the influence of border regions unseen in at least one of the stereo images via masking, to enhance the correspondences between left and right images in overlapping areas. Extensive experiments show that our method outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset by a large margin, indicating good generalization across different samples and laparoscopes.
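The left-right consistency and border masking described above can be illustrated with a minimal integer-disparity sketch (a generic reimplementation, not the M3Depth code): the right image is warped into the left view using the predicted disparity and compared photometrically, with columns that fall outside the right image masked out.

```python
import numpy as np

def left_right_consistency(left, right, disp):
    """Photometric left-right consistency for a rectified stereo pair.

    Warps the right image into the left view using a per-pixel
    (integer) horizontal disparity map and returns the mean absolute
    photometric error over valid pixels. Columns whose source falls
    outside the right image are masked out, mirroring the masking of
    regions seen by only one camera.
    """
    h, w = left.shape
    # source column in the right image for every left-image pixel
    xs = np.arange(w)[None, :] - disp.astype(int)
    valid = (xs >= 0) & (xs < w)
    xs = np.clip(xs, 0, w - 1)
    warped = np.take_along_axis(right, xs, axis=1)
    err = np.abs(left - warped)
    return float(err[valid].mean())
```

For a correct disparity map this error is (near) zero, so its negative gradient provides the self-supervised training signal; extending the comparison to back-projected 3D point clouds is the step the paper adds.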

Conference paper

Huang B, Zheng J-Q, Giannarou S, Elson DS et al., 2022, H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4459-4466

Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty in acquiring accurate and scalable ground truth data, the training of fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular to mitigate this challenge. In this paper, we introduce H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation, which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, a mutual epipolar attention mechanism has been designed that gives more emphasis to correspondences of features lying on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information into the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules are able to improve the performance of unsupervised stereo depth estimation methods while closing the gap with fully supervised approaches.

Conference paper

Leiloglou M, Kedrzycki M, Chalau V, Chiarini N, Thiruchelvam P, Hadjiminas D, Hogben K, Rashid F, Ramakrishnan R, Darzi A, Leff D, Elson D et al., 2022, Indocyanine green fluorescence image processing techniques for breast cancer macroscopic demarcation, Scientific Reports, Vol: 12, ISSN: 2045-2322

Re-operation due to disease being inadvertently close to the resection margin is a major challenge in breast conserving surgery (BCS). Indocyanine green (ICG) fluorescence imaging could be used to visualize the tumor boundaries and help surgeons resect disease more efficiently. In this work, ICG fluorescence and color images were acquired with a custom-built camera system from 40 patients treated with BCS. Images were acquired from the tumor in situ, the surgical cavity post-excision, the freshly excised tumor and the histopathology tumor grossing. Fluorescence image intensity and texture were used as individual or combined predictors in both logistic regression (LR) and support vector machine models to predict the tumor extent. ICG fluorescence spectra in formalin-fixed histopathology grossing tumor were acquired and analyzed. Our results showed that ICG remains in the tissue after formalin fixation; therefore, tissue imaging could be validated in freshly excised and in formalin-fixed grossing tumor. The trained LR model with combined fluorescence intensity (pixel values) and texture (slope of the power spectral density curve) identified the tumor's extent in the grossing images with pixel-level resolution and a sensitivity and specificity of 0.75 ± 0.3 and 0.89 ± 0.2, respectively. This model was applied to tumor in situ and surgical cavity (post-excision) images to predict tumor presence.
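The texture predictor named in this abstract — the slope of the power spectral density curve — can be computed as sketched below. This is a generic reimplementation for illustration, not the authors' pipeline; the radial-averaging details are assumptions.

```python
import numpy as np

def psd_slope(patch):
    """Slope of the radially averaged power spectral density of an
    image patch on log-log axes: a simple texture descriptor
    (smooth patches fall off steeply; noisy ones are flatter).
    """
    f = np.fft.fftshift(np.fft.fft2(patch - patch.mean()))
    power = np.abs(f) ** 2
    cy, cx = np.array(power.shape) // 2
    yy, xx = np.indices(power.shape)
    r = np.hypot(yy - cy, xx - cx).astype(int)
    # radially average the 2D power spectrum
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), power.ravel())
    radial = sums / np.maximum(counts, 1)
    radial = radial[1:min(cy, cx)]          # drop DC, stay inside image
    freqs = np.arange(1, len(radial) + 1)
    slope, _ = np.polyfit(np.log(freqs), np.log(radial + 1e-12), 1)
    return float(slope)
```

Such a scalar can be paired with mean pixel intensity as the two features of a per-pixel (or per-patch) logistic-regression classifier, as the abstract describes.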

Journal article

Xu C, Huang B, Elson DS, 2022, Self-supervised monocular depth estimation with 3-D displacement module for laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 331-334, ISSN: 2576-3202

We present a novel self-supervised training framework with a 3D displacement (3DD) module for accurately estimating per-pixel depth maps from single laparoscopic images. Recently, several self-supervised learning-based monocular depth estimation models have achieved good results on the KITTI dataset under the hypothesis that the camera is dynamic and the objects are stationary; however, this hypothesis is often reversed in the surgical setting (the laparoscope is stationary while the surgical instruments and tissues are dynamic). Therefore, a 3DD module is proposed to establish the relation between frames instead of ego-motion estimation. In the 3DD module, a convolutional neural network (CNN) analyses source and target frames to predict the 3D displacement of a 3D point cloud from a target frame to a source frame in the coordinates of the camera. Since it is difficult to constrain the depth displacement from two 2D images, a novel depth consistency module is proposed to maintain consistency between displacement-updated depth and model-estimated depth, constraining the 3D displacement effectively. Our proposed method achieves remarkable performance for monocular depth estimation on the Hamlyn surgical dataset and acquired ground truth depth maps, outperforming the monodepth, monodepth2 and packnet models.

Journal article

Huang B, Nguyen A, Wang S, Wang Z, Mayer E, Tuch D, Vyas K, Giannarou S, Elson DS et al., 2022, Simultaneous depth estimation and surgical tool segmentation in laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 335-338, ISSN: 2576-3202

Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.

Journal article

Kong L, Evans C, Su L, Elson DS, Wei X et al., 2022, Special issue on translational biophotonics, Journal of Physics D: Applied Physics, Vol: 55, ISSN: 0022-3727

This special issue on 'Translational Biophotonics' was initiated when COVID-19 started to spread worldwide in early 2020, with the aim of introducing the advances in optical tools that have the ability to transform clinical diagnostics, surgical guidance, and therapeutic approaches that together can have a profound impact on global health. This issue achieves this goal comprehensively, covering various topics including optical techniques for clinical diagnostics, monitoring and treatment, in addition to fundamental studies in biomedicine.

Journal article

Shen Y, Chen B, He C, He H, Guo J, Wu J, Elson DS, Ma H et al., 2022, Polarization Aberrations in High-Numerical-Aperture Lens Systems and Their Effects on Vectorial-Information Sensing, Remote Sensing, Vol: 14

Journal article

Wang D, Qi J, Huang B, Noble E, Stoyanov D, Gao J, Elson DS et al., 2022, Polarization-based smoke removal method for surgical images, Biomedical Optics Express, Vol: 13, Pages: 2364-2364, ISSN: 2156-7085

Smoke generated during surgery affects tissue visibility and degrades image quality, affecting surgical decisions and limiting further image processing and analysis. Polarization is a fundamental property of light and polarization-resolved imaging has been studied and applied to general visibility restoration scenarios such as for smog or mist removal or in underwater environments. However, there is no related research or application for surgical smoke removal. Due to differences between surgical smoke and general haze scenarios, we propose an alternative imaging degradation model by redefining the form of the transmission parameters. The analysis of the propagation of polarized light interacting with the mixed medium of smoke and tissue is proposed to realize polarization-based smoke removal (visibility restoration). Theoretical analysis and observation of experimental data shows that the cross-polarized channel data generated by multiple scattering is less affected by smoke compared to the co-polarized channel. The polarization difference calculation for different color channels can estimate the model transmission parameters and reconstruct the image with restored visibility. Qualitative and quantitative comparison with alternative methods show that the polarization-based image smoke-removal method can effectively reduce the degradation of biomedical images caused by surgical smoke and partially restore the original degree of polarization of the samples.
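The channel-difference reasoning in the abstract above can be illustrated with a toy polarization-difference computation. It assumes, as a simplification of the paper's model, that smoke backscatter stays co-polarized with the illumination while multiply scattered tissue light is depolarized (equal in both channels); the function name is invented for illustration.

```python
import numpy as np

def polarization_difference(co, cross):
    """Compute the polarization-difference image and the degree of
    polarization from co- and cross-polarized intensity channels.

    Depolarized tissue light contributes equally to both channels and
    cancels in the difference, while co-polarized smoke backscatter
    survives -- so the difference highlights (and can be used to
    estimate and remove) the smoke veil.
    """
    co = np.asarray(co, dtype=float)
    cross = np.asarray(cross, dtype=float)
    diff = co - cross
    total = co + cross + 1e-9        # guard against division by zero
    dop = diff / total               # degree of (linear) polarization
    return diff, dop
```

In the paper's actual method, transmission parameters of a redefined degradation model are estimated per color channel from such differences; the sketch only shows why the cross-polarized channel is the less smoke-affected one.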

Journal article

Shanthakumar D, Elson D, Darzi A, Leff D et al., 2022, Tissue optical imaging as an emerging technique for intraoperative margin assessment in breast-conserving surgery, Publisher: Springer, Pages: 153-154, ISSN: 1068-9265

Conference paper

Han J, Davids J, Ashrafian H, Darzi A, Elson DS, Sodergren M et al., 2022, A systematic review of robotic surgery: from supervised paradigms to fully autonomous robotic approaches, International Journal of Medical Robotics and Computer Assisted Surgery, Vol: 18, Pages: 1-11, ISSN: 1478-5951

Background: From traditional open surgery to laparoscopic surgery and robot-assisted surgery, advances in robotics, machine learning, and imaging are pushing the surgical approach towards better clinical outcomes. Pre-clinical and clinical evidence suggests that automation may standardise techniques, increase efficiency, and reduce clinical complications.

Methods: A PRISMA-guided search was conducted across PubMed and OVID.

Results: Of the 89 screened articles, 51 met the inclusion criteria, with 10 included in the final review. Automatic data segmentation, trajectory planning, intra-operative registration, trajectory drilling, and soft tissue robotic surgery were discussed.

Conclusion: Although automated surgical systems remain conceptual, several research groups have developed supervised autonomous robotic surgical systems, with increasing consideration of the ethico-legal issues of automation. Automation paves the way for precision surgery and improved safety, and opens new possibilities for deploying more robust artificial intelligence models, better imaging modalities and robotics to improve clinical outcomes.

Journal article

He C, Chang J, Salter PS, Shen Y, Dai B, Li P, Jin Y, Thodika SC, Li M, Tariq A, Wang J, Antonello J, Dong Y, Qi J, Lin J, Elson DS, Zhang M, He H, Ma H, Booth MJ et al., 2022, Revealing complex optical phenomena through vectorial metrics, Advanced Photonics Research, Vol: 4, Pages: 1-9, ISSN: 2699-9293

Advances in vectorial polarization-resolved imaging are bringing new capabilities to applications ranging from fundamental physics through to clinical diagnosis. Imaging polarimetry requires determination of the Mueller matrix (MM) at every point, providing a complete description of an object’s vectorial properties. Despite forming a comprehensive representation, the MM does not usually provide easily interpretable information about the object’s internal structure. Certain simpler vectorial metrics are derived from subsets of the MM elements. These metrics permit extraction of signatures that provide direct indicators of hidden optical properties of complex systems, while featuring an intriguing asymmetry about what information can or cannot be inferred via these metrics. We harness such characteristics to reveal the spin Hall effect of light, infer microscopic structure within laser-written photonic waveguides, and conduct rapid pathological diagnosis through analysis of healthy and cancerous tissue. This provides new insight for the broader usage of such asymmetric inferred vectorial information.

Journal article

Gkouzionis I, Nazarian S, Kawka M, Darzi A, Patel N, Peters C, Elson D et al., 2022, Real-time tracking of a diffuse reflectance spectroscopy probe used to aid histological validation of margin assessment in upper gastrointestinal cancer resection surgery, Journal of Biomedical Optics, Vol: 27, ISSN: 1083-3668

Significance: Diffuse reflectance spectroscopy (DRS) allows discrimination of tissue type. Its application is limited by the inability to mark the scanned tissue and the lack of real-time measurements.

Aim: This study aimed to develop a real-time tracking system to enable localization of a DRS probe to aid the classification of tumor and non-tumor tissue.

Approach: A green-colored marker attached to the DRS probe was detected using hue-saturation-value (HSV) segmentation. A live, augmented view of tracked optical biopsy sites was recorded in real time. Supervised classifiers were evaluated in terms of sensitivity, specificity, and overall accuracy. Software developed for the study was used for data collection, processing, and statistical analysis.

Results: The measured root mean square error (RMSE) of DRS probe tip tracking was 1.18 ± 0.58 mm and 1.05 ± 0.28 mm for the x and y dimensions, respectively. The diagnostic accuracy of the system in classifying tumor and non-tumor tissue in real time was 94% for the stomach and 96% for the oesophagus.

Conclusions: We have successfully developed a real-time tracking and classification system for a DRS probe. When used on stomach and oesophageal tissue for tumor detection, the accuracy achieved demonstrates the strength and clinical value of the technique to aid margin assessment in cancer resection surgery.
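The HSV-segmentation step described in the Approach can be sketched as below: threshold the hue, saturation and value channels to isolate the green marker, then take the centroid of the mask as the probe position. This is a minimal illustration, not the study's software; the threshold values and function name are assumptions (production code would typically use OpenCV's `inRange` on 8-bit HSV images).

```python
import numpy as np

def track_marker(hsv, h_range=(0.25, 0.45), s_min=0.4, v_min=0.2):
    """Locate a green marker in an HSV image (all channels in [0, 1]).

    Thresholds hue/saturation/value to build a binary mask and returns
    the (x, y) centroid of the masked pixels, or None if no pixel
    matches. Green hues sit roughly in the 0.25-0.45 band.
    """
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (h >= h_range[0]) & (h <= h_range[1]) \
         & (s >= s_min) & (v >= v_min)
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())
```

Running this per video frame yields the marker trajectory whose x/y RMSE against ground truth the Results section reports.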

Journal article

Gkouzionis I, Nazarian S, Kawka M, Darzi A, Patel N, Peters C, Elson D et al., 2022, Real-time tissue classification in stomach and oesophageal cancer based on optical tracking of a diffuse reflectance spectroscopy probe, Photonics West: Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XX

Conference paper

Huang B, Tuch D, Vyas K, Giannarou S, Elson D et al., 2022, Self-supervised monocular laparoscopic images depth estimation leveraging iterative closest point in 3D to enable image-guided radioguided surgery, European Molecular Imaging Meeting

Conference paper

Gkouzionis I, Nazarian S, Darzi A, Patel N, Elson D, Peters C et al., 2022, Discriminating between cancer and healthy tissue in upper gastrointestinal cancer surgery using deep learning and diffuse reflectance spectroscopy, London Surgery Symposium

Conference paper

Gkouzionis I, Nazarian S, Patel N, Peters C, Elson DS et al., 2022, Towards real-time upper gastrointestinal resection margin assessment using a diffuse reflectance spectroscopy probe

The use of a diffuse reflectance spectroscopy probe for real-time classification of stomach and oesophageal tissue specimens can aid resection margin assessment in upper gastrointestinal cancer surgery.

Conference paper

Shanthakumar D, Chalau V, Darzi A, Elson D, Leff D et al., 2022, Multiwavelength laser induced fluorescence spectroscopy for breast cancer diagnostics, Association of Breast Surgery

Conference paper

Ford L, Chalau V, McKenzie J, Mason S, Manoli E, Goldin R, Takats Z, Elson D, Kinross J et al., 2022, Comparison of Mass Spectrometry and Optical Spectroscopy for novel real-time diagnosis of colorectal cancer, United European Gastroenterology Week

Conference paper

Qi J, Elson DS, 2022, Polarimetric Endoscopy, Polarized Light in Biomedical Imaging and Sensing: Clinical and Preclinical Applications, Pages: 179-204, ISBN: 9783031047404

Routine endoscopy mainly detects colour information about tissue under white light illumination. The development of polarimetric endoscopes can extend the image contrast beyond colour to complement the information obtained from routine endoscopy. Recognizing the potential benefits of obtaining polarimetric information for internal organ diagnostics, investigators have explored polarimetric endoscopy aiming towards polarimetric endoscopes that can be clinically validated and translated to surgical use. This chapter introduces the major types and typical designs of medical endoscopes and explains the main considerations when building polarimetric endoscopes. This chapter also describes the recent progress of polarimetric endoscopy, including a summary and outlook for this research area.

Book chapter

Elson D, Gkouzionis I, Nazarian S, Darzi A, Patel N, Peters C et al., 2022, Real-time tracking of a diffuse reflectance spectroscopy probe for tissue classification in colorectal cancer surgery, Hamlyn Symposium on Medical Robotics Workshop on Sensing and biophotonics for surgical robotics and in vivo diagnostics

Conference paper

Elson D, Gkouzionis I, Nazarian S, Darzi A, Patel N, Peters C et al., 2022, Using diffuse reflectance spectroscopy for real-time tissue assessment during upper gastrointestinal cancer surgery, IEEE International Conference on Biomedical and Health Informatics

Conference paper

Elson D, 2022, Polarization-resolved endoscopic surgical imaging (keynote), 16th International Conference on Laser Applications in Life Sciences

Conference paper

Gkouzionis I, Nazarian S, Darzi A, Patel N, Peters C, Elson D et al., 2022, Three-dimensional tissue reconstruction and tracking of a diffuse reflectance spectroscopy probe for real-time tissue classification in upper gastrointestinal cancer surgery, Photonics Europe: Clinical Biophotonics II

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
