Liu JTC, Bale G, Choe R, et al., 2023, Introduction to the Biophotonics Congress 2022 feature issue, BIOMEDICAL OPTICS EXPRESS, Vol: 14, Pages: 385-386, ISSN: 2156-7085
Kedrzycki M, Elson D, Leff D, 2022, Guidance in breast-conserving surgery: tumour localization versus identification, British Journal of Surgery, ISSN: 0007-1323
In breast-conserving surgery (BCS), the tumour is removed with the goal of preserving as much healthy breast tissue as possible. Breast conservation comes with a risk of positive resection margins, an independent predictor of ipsilateral tumour recurrence, necessitating reoperation1. Contemporary data from the UK Get It Right First Time programme1 suggest high average reoperation rates of around 19%. Current tumour localization techniques can only guide surgeons to the tumour epicentre, but fail to identify the boundary between tumour and normal tissue. Imaging techniques, such as intraoperative ultrasonography (IOUS), intraoperative MRI (iMRI) or fluorescence-guided surgery (FGS), enable visualization of the tumour in its entirety and may provide improved operative precision2–5.
Saebe A, Wiwatpanit T, Varatthan T, et al., 2022, Comparative study between the 3D‐liver spheroid models developed from HepG2 and immortalized hepatocyte‐like cells with primary hepatic stellate coculture for drug metabolism analysis and anticancer drug screening, Advanced Therapeutics, Vol: 6, Pages: 1-16, ISSN: 2366-3987
Liver spheroids may be the best alternative models for evaluating the efficacy and toxicity of new anticancer candidates and diagnostics for hepatocellular carcinoma (HCC). Here, novel 3D-liver spheroid models are constructed from human hepatoma cells (HepG2) or immortalized human hepatocyte-like cells (imHCs) in coculture with primary hepatic stellate cells (HSCs) using the ultralow attachment technique. Spheroid morphology, HSC distribution, metabolic activity, protein expression, and drug penetration are evaluated. All developed 3D spheroid models exhibit a spherical shape with narrow size distribution, with diameters of 639–743 µm (HepG2-10%HSC) and 519–631 µm (imHC-10%HSC). Both imHC mono- and coculture models express normal liver biomarkers at significantly higher levels than HepG2 models, while 3D-HepG2 models exhibit HCC biomarkers at significantly higher levels than imHC models. HepG2 and imHC spheroids express basal cytochrome P450 (CYP450) enzymes at different levels depending on cell type, culture period, and coculture ratio. Their metabolic activities for dextromethorphan (CYP2D6), tolbutamide (CYP2C9), and midazolam (CYP3A4) are routinely evaluated. For midazolam metabolism, imHC models allow the detection of phase II metabolic enzymes (UGT2B4 and UGT2B7). The presence of HSCs in the HepG2-HSC model increases the biological barrier to doxorubicin (DOX) penetration, escalating the IC50 of DOX from 61.4 to 127.2 µg mL−1.
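The IC50 shift reported above can be illustrated in spirit with a small dose-response sketch: the IC50 is the dose at which viability crosses 50%, which can be estimated by log-linear interpolation between measured points. All doses and viability values below are hypothetical stand-ins, not the study's data.

```python
import numpy as np

def ic50_interpolate(doses, viability):
    """Estimate IC50 by log-linear interpolation: the dose at which
    viability first crosses below 50%. doses in ug/mL, viability in %."""
    logd = np.log10(doses)
    below = np.where(viability < 50.0)[0]
    if len(below) == 0:
        raise ValueError("viability never falls below 50%")
    i = below[0]
    if i == 0:
        return doses[0]
    # linear interpolation in log-dose between the bracketing points
    x0, x1 = logd[i - 1], logd[i]
    y0, y1 = viability[i - 1], viability[i]
    t = (50.0 - y0) / (y1 - y0)
    return 10 ** (x0 + t * (x1 - x0))

# Hypothetical dose-response curves: a coculture spheroid (with its extra
# stellate-cell barrier) tolerates more drug than a monoculture spheroid.
doses = np.array([1.0, 10.0, 100.0, 1000.0])
mono = np.array([95.0, 80.0, 40.0, 10.0])
co = np.array([98.0, 90.0, 55.0, 15.0])
ic50_mono = ic50_interpolate(doses, mono)
ic50_co = ic50_interpolate(doses, co)
```

The coculture curve crosses 50% at a higher dose, mirroring the IC50 escalation described in the abstract.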
Leiloglou M, Kedrzycki MS, Elson DS, et al., 2022, ASO author reflections: towards fluorescence guided tumor identification for precision breast conserving surgery., Annals of Surgical Oncology, Vol: 29, Pages: 564-565, ISSN: 1068-9265
Avila-Rencoret F, Mylonas G, Elson D, 2022, Robotic large-area optical biopsy imaging for automated detection of gastrointestinal cancers tested in tissue phantoms and ex vivo porcine bowel, Translational Biophotonics, ISSN: 2627-1850
Gastrointestinal endoscopy is a subjective procedure that frequently requires tissue samples for diagnosis. Contact optical biopsy (OB) techniques aim to provide direct diagnosis of endoscopic areas without excising tissue samples but lack the wide-area coverage required for locating and resecting lesions. This article presents a large-area robotically deployed OB imaging platform for endoscopic detection of colorectal cancer as an add-on for conventional endoscopes. In vitro, in silicone colon phantoms, the platform achieves an optical resolution of 0.5 line pairs per millimeter, while resolving simulated cancer lesions down to 0.75 mm diameter across large-area images (55-103 cm2). Large-area OB images were generated in an ex vivo porcine colon. The platform allows centimeter-sized large-area OB imaging in vitro and ex vivo with submillimeter resolution, including automatic data segmentation of simulated cancer areas. The ability for robotic actuation and spectrum collection is also shown for ex vivo animal colon. If successful, this technology could widen access to user-independent high-quality endoscopy and early detection of gastrointestinal cancers.
Nazarian S, Gkouzionis I, Kawka M, et al., 2022, Real-time tracking and classification of tumour and non-tumour tissue in upper gastrointestinal cancers using diffuse reflectance spectroscopy for resection margin assessment, JAMA Surgery, ISSN: 2168-6254
Importance: Cancers of the upper gastrointestinal tract remain a major contributor to the global cancer burden. The accurate mapping of tumour margins is of particular importance for curative cancer resection and improvement in overall survival. Current mapping techniques preclude a full resection margin assessment in real time. Objective: We aimed to use diffuse reflectance spectroscopy on gastric and oesophageal cancer specimens to differentiate tissue types and provide real-time feedback to the operator. Design: This was a prospective ex vivo validation study. Patients undergoing oesophageal or gastric cancer resection were prospectively recruited into the study between July 2020 and July 2021 at Hammersmith Hospital in London, United Kingdom. Setting: This was a single-centre study based at a tertiary hospital. Participants: Tissue specimens were included for patients undergoing elective surgery for either oesophageal carcinoma (adenocarcinoma or squamous cell carcinoma) or gastric adenocarcinoma. Exposure: A hand-held diffuse reflectance spectroscopy probe and tracking system was used on freshly resected ex vivo tissue to obtain spectral data. Binary classification, following histopathological validation, was performed using four supervised machine learning classifiers. Main Outcomes and Measures: Data were divided into training and testing sets using a stratified 5-fold cross-validation method. Machine learning classifiers were evaluated in terms of sensitivity, specificity, overall accuracy, and the area under the curve. Results: A total of 14,097 mean spectra for normal and cancerous tissue were collected from 37 patients. The machine learning classifier achieved an overall normal-versus-cancer diagnostic accuracy of 93.86±0.66 for stomach tissue and 96.22±0.50 for oesophageal tissue, with sensitivity and specificity of 91.31% and 95.13% for stomach and 94.60% and 97.28% for oesophagus, respectively. Real-time tissue tracking and classification was achieved.
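The evaluation pipeline described here (stratified 5-fold cross-validation, reported as sensitivity, specificity, and overall accuracy) can be sketched in miniature. The synthetic "spectra", the simple nearest-centroid classifier, and all sizes below are illustrative stand-ins, not the study's actual models or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for DRS spectra: cancer spectra offset from normal
normal = rng.normal(0.0, 1.0, size=(200, 20))
cancer = rng.normal(1.5, 1.0, size=(200, 20))
X = np.vstack([normal, cancer])
y = np.array([0] * 200 + [1] * 200)

def stratified_folds(y, k=5, seed=0):
    """Assign indices to k folds, keeping class proportions per fold."""
    r = np.random.default_rng(seed)
    folds = [[] for _ in range(k)]
    for label in np.unique(y):
        idx = r.permutation(np.where(y == label)[0])
        for i, j in enumerate(idx):
            folds[i % k].append(j)
    return [np.array(f) for f in folds]

sens, spec, acc = [], [], []
folds = stratified_folds(y)
for i in range(5):
    test = folds[i]
    train = np.concatenate([folds[j] for j in range(5) if j != i])
    # Nearest-centroid classifier: assign to the closer class mean
    c0 = X[train][y[train] == 0].mean(axis=0)
    c1 = X[train][y[train] == 1].mean(axis=0)
    pred = (np.linalg.norm(X[test] - c1, axis=1)
            < np.linalg.norm(X[test] - c0, axis=1)).astype(int)
    tp = np.sum((pred == 1) & (y[test] == 1))
    tn = np.sum((pred == 0) & (y[test] == 0))
    fp = np.sum((pred == 1) & (y[test] == 0))
    fn = np.sum((pred == 0) & (y[test] == 1))
    sens.append(tp / (tp + fn))
    spec.append(tn / (tn + fp))
    acc.append((tp + tn) / len(test))

mean_acc = float(np.mean(acc))
```

Reporting the mean and standard deviation of these per-fold scores yields figures in the same form as the accuracies quoted in the abstract.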
Anichini G, Zepeng H, Leiloglou M, et al., 2022, Multispectral analysis for intra-operative characterization of brain tumours, margins of resection, and eloquent areas activation - preliminary results, 27th Annual Scientific Meeting and Education Day of the Society-for-Neuro-Oncology (SNO), Publisher: OXFORD UNIV PRESS INC, Pages: 147-148, ISSN: 1522-8517
Qi J, Tatla T, Nissanka-Jayasuriya E, et al., 2022, Surgical polarimetric endoscopy for the detection of laryngeal cancer, Nature Biomedical Engineering, ISSN: 2157-846X
The standard-of-care for the detection of laryngeal pathologies involves distinguishing suspicious lesions from surrounding healthy tissue via contrasts in colour and texture captured by white-light endoscopy. However, the technique is insufficiently sensitive and thus leads to unsatisfactory rates of false negatives. Here, we show that laryngeal lesions can be better detected in real time by taking advantage of differences in the light-polarization properties of cancer and healthy tissues. By measuring differences in polarized-light reflectance, the technique, which we named ‘surgical polarimetric endoscopy’ (SPE), generates about one-order-of-magnitude greater contrast than white-light endoscopy, and hence allows for the better discrimination of cancerous lesions, as we show with patients diagnosed with squamous cell carcinoma. Polarimetric imaging of excised and stained slices of laryngeal tissue with SPE indicated that changes in the retardance of polarized light can be largely attributed to architectural features of the tissue. We also assessed SPE to aid routine transoral laser surgery for the removal of a cancerous lesion, indicating that SPE can complement white-light endoscopy for the detection of laryngeal cancer.
He C, Lin J, Chang J, et al., 2022, Full Poincaré polarimetry enabled through physical inference, Optica, Vol: 9, Pages: 1109, ISSN: 2334-2536
While polarization sensing is vital in many areas of research, with applications spanning from microscopy to aerospace, traditional approaches are limited by method-related error amplification, accumulation, and pre-processing steps, constraining the performance of single-shot polarimetry. Here, we propose a measurement paradigm that circumvents these limitations, based on the use of a universal full Poincaré generator to map all polarization analyzer states into a single vectorially structured light field. All vector components are analyzed in a single shot, extracting the vectorial state through inference from a physical model of the resulting image, providing a single-step sensing procedure. To demonstrate the feasibility of our approach, we use a common graded index (GRIN) optic as our mapping device and show mean errors of <1% for each vector component. Our work paves the way for next-generation polarimetry, impacting a wide variety of applications that rely on vector measurement.
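At its core, polarization-state sensing amounts to inverting a known linear mapping from the Stokes vector to a set of measured intensities. A minimal sketch, assuming an idealized four-state analyzer matrix rather than the paper's GRIN-based full Poincaré generator:

```python
import numpy as np

# Analyzer rows (Stokes form) for four classic measurements:
# horizontal, vertical, +45 degrees, and right-circular
A = 0.5 * np.array([
    [1.0,  1.0,  0.0,  0.0],   # horizontal
    [1.0, -1.0,  0.0,  0.0],   # vertical
    [1.0,  0.0,  1.0,  0.0],   # +45 degrees
    [1.0,  0.0,  0.0,  1.0],   # right circular
])

S_true = np.array([1.0, 0.3, -0.5, 0.2])   # hypothetical input Stokes vector
I = A @ S_true                              # simulated intensity readings

# Recover the Stokes vector by least-squares inversion of the analyzer map
S_est = np.linalg.lstsq(A, I, rcond=None)[0]
err = float(np.abs(S_est - S_true).max())
```

In the single-shot scheme of the paper, all analyzer states are encoded into one structured field and the inversion is replaced by inference against a physical model of the image, but the underlying estimation problem has this same shape.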
Elson D, Nazarian S, Gkouzionis I, et al., 2022, Real-time Classification of Colorectal Tissue Using Diffuse Reflectance Spectroscopy to Aid Margin Assessment, European Society of Coloproctology Scientific Conference
Huang B, Zheng J-Q, Nguyen A, et al., 2022, Self-supervised depth estimation in laparoscopic image using 3D geometric consistency, 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 13-22, ISSN: 0302-9743
Depth estimation is a crucial step for image-guided intervention in robotic surgery and laparoscopic imaging system. Since per-pixel depth ground truth is difficult to acquire for laparoscopic image data, it is rarely possible to apply supervised depth estimation to surgical applications. As an alternative, self-supervised methods have been introduced to train depth estimators using only synchronized stereo image pairs. However, most recent work focused on the left-right consistency in 2D and ignored valuable inherent 3D information on the object in real world coordinates, meaning that the left-right 3D geometric structural consistency is not fully utilized. To overcome this limitation, we present M3Depth, a self-supervised depth estimator to leverage 3D geometric structural information hidden in stereo pairs while keeping monocular inference. The method also removes the influence of border regions unseen in at least one of the stereo images via masking, to enhance the correspondences between left and right images in overlapping areas. Extensive experiments show that our method outperforms previous self-supervised approaches on both a public dataset and a newly acquired dataset by a large margin, indicating a good generalization across different samples and laparoscopes.
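The left-right 3D geometric consistency idea can be illustrated by backprojecting predicted depth maps into point clouds and penalizing their misalignment after transforming one cloud into the other camera's frame. The intrinsics, baseline, and constant depth below are hypothetical, and a real pipeline warps stereo correspondences rather than comparing identical pixel grids.

```python
import numpy as np

def backproject(depth, K):
    """Lift a depth map to a 3D point cloud in camera coordinates."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = np.linalg.inv(K) @ pix           # normalized camera rays, (3, N)
    return (rays * depth.reshape(1, -1)).T  # (N, 3) points

K = np.array([[100.0, 0.0, 32.0],
              [0.0, 100.0, 24.0],
              [0.0,   0.0,  1.0]])          # hypothetical intrinsics
baseline = np.array([0.005, 0.0, 0.0])      # assumed 5 mm stereo baseline

depth_left = np.full((48, 64), 0.08)        # hypothetical 8 cm scene depth
depth_right = np.full((48, 64), 0.08)

pts_left = backproject(depth_left, K)
pts_right = backproject(depth_right, K)

# Transform the left cloud into the right camera frame (pure translation
# here) and take the mean L1 distance as a schematic 3D consistency loss
pts_left_in_right = pts_left - baseline
loss_3d = float(np.abs(pts_left_in_right - pts_right).mean())
```

With true correspondences the residual would shrink toward zero; here the remaining offset is just the unwarped baseline, which is what the photometric warping in a full self-supervised pipeline accounts for.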
Huang B, Zheng J-Q, Giannarou S, et al., 2022, H-Net: unsupervised attention-based stereo depth estimation leveraging epipolar geometry, IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Publisher: IEEE, Pages: 4459-4466
Depth estimation from a stereo image pair has become one of the most explored applications in computer vision, with most previous methods relying on fully supervised learning settings. However, due to the difficulty in acquiring accurate and scalable ground truth data, the training of fully supervised methods is challenging. As an alternative, self-supervised methods are becoming more popular to mitigate this challenge. In this paper, we introduce the H-Net, a deep-learning framework for unsupervised stereo depth estimation that leverages epipolar geometry to refine stereo matching. For the first time, a Siamese autoencoder architecture is used for depth estimation which allows mutual information between rectified stereo images to be extracted. To enforce the epipolar constraint, the mutual epipolar attention mechanism has been designed which gives more emphasis to correspondences of features that lie on the same epipolar line while learning mutual information between the input stereo pair. Stereo correspondences are further enhanced by incorporating semantic information to the proposed attention mechanism. More specifically, the optimal transport algorithm is used to suppress attention and eliminate outliers in areas not visible in both cameras. Extensive experiments on KITTI2015 and Cityscapes show that the proposed modules are able to improve the performance of the unsupervised stereo depth estimation methods while closing the gap with the fully supervised approaches.
Leiloglou M, Kedrzycki M, Chalau V, et al., 2022, Indocyanine green fluorescence image processing techniques for breast cancer macroscopic demarcation, Scientific Reports, Vol: 12, ISSN: 2045-2322
Re-operation due to disease being inadvertently close to the resection margin is a major challenge in breast conserving surgery (BCS). Indocyanine green (ICG) fluorescence imaging could be used to visualize the tumor boundaries and help surgeons resect disease more efficiently. In this work, ICG fluorescence and color images were acquired with a custom-built camera system from 40 patients treated with BCS. Images were acquired from the tumor in-situ, the surgical cavity post-excision, the freshly excised tumor and histopathology tumor grossing. Fluorescence image intensity and texture were used as individual or combined predictors in both logistic regression (LR) and support vector machine models to predict the tumor extent. ICG fluorescence spectra in formalin-fixed histopathology grossing tumor were acquired and analyzed. Our results showed that ICG remains in the tissue after formalin fixation. Therefore, tissue imaging could be validated in freshly excised and in formalin-fixed grossing tumor. The trained LR model with combined fluorescence intensity (pixel values) and texture (slope of power spectral density curve) identified the tumor's extent in the grossing images with pixel-level resolution, with sensitivity and specificity of 0.75 ± 0.3 and 0.89 ± 0.2, respectively. This model was applied to tumor in-situ and surgical cavity (post-excision) images to predict tumor presence.
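The combined intensity-plus-texture predictor can be sketched in simplified form: a power-spectral-density slope as the texture feature, and a from-scratch logistic regression on [intensity, slope] pairs. The synthetic patches, class statistics, and thresholds below are hypothetical stand-ins, not the study's data or trained model.

```python
import numpy as np

def psd_slope(patch):
    """Texture feature: slope of the radially averaged power spectral
    density on log-log axes (a rough texture/fractal descriptor)."""
    f = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(f) ** 2
    h, w = patch.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    counts = np.bincount(r.ravel())
    radial = np.bincount(r.ravel(), power.ravel()) / np.maximum(counts, 1)
    freqs = np.arange(1, len(radial))
    return np.polyfit(np.log(freqs), np.log(radial[1:] + 1e-12), 1)[0]

def train_logistic(X, y, lr=0.1, steps=2000):
    """Minimal logistic regression via gradient descent (bias included)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-np.clip(Xb @ w, -30, 30)))
        w -= lr * Xb.T @ (p - y) / len(y)
    return w

rng = np.random.default_rng(1)
# Hypothetical features: [mean fluorescence intensity, PSD slope] per patch
tumour = np.stack([[rng.normal(0.8, 0.1), psd_slope(rng.normal(0.8, 0.3, (32, 32)))]
                   for _ in range(50)])
normal = np.stack([[rng.normal(0.3, 0.1), psd_slope(rng.normal(0.3, 0.1, (32, 32)))]
                   for _ in range(50)])
X = np.vstack([tumour, normal])
y = np.array([1] * 50 + [0] * 50)

w = train_logistic(X, y)
Xb = np.hstack([X, np.ones((100, 1))])
acc = float(np.mean((Xb @ w > 0) == (y == 1)))
```

Applying the fitted weights pixel-wise over an image would yield the kind of pixel-level tumor-extent map described in the abstract.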
Xu C, Huang B, Elson DS, 2022, Self-supervised monocular depth estimation with 3-D displacement module for laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 331-334, ISSN: 2576-3202
We present a novel self-supervised training framework with 3D displacement (3DD) module for accurately estimating per-pixel depth maps from single laparoscopic images. Recently, several self-supervised learning based monocular depth estimation models have achieved good results on the KITTI dataset, under the hypothesis that the camera is dynamic and the objects are stationary, however this hypothesis is often reversed in the surgical setting (laparoscope is stationary, the surgical instruments and tissues are dynamic). Therefore, a 3DD module is proposed to establish the relation between frames instead of ego-motion estimation. In the 3DD module, a convolutional neural network (CNN) analyses source and target frames to predict the 3D displacement of a 3D point cloud from a target frame to a source frame in the coordinates of the camera. Since it is difficult to constrain the depth displacement from two 2D images, a novel depth consistency module is proposed to maintain depth consistency between displacement-updated depth and model-estimated depth to constrain 3D displacement effectively. Our proposed method achieves remarkable performance for monocular depth estimation on the Hamlyn surgical dataset and acquired ground truth depth maps, outperforming monodepth, monodepth2 and packnet models.
Huang B, Nguyen A, Wang S, et al., 2022, Simultaneous depth estimation and surgical tool segmentation in laparoscopic images, IEEE Transactions on Medical Robotics and Bionics, Vol: 4, Pages: 335-338, ISSN: 2576-3202
Surgical instrument segmentation and depth estimation are crucial steps to improve autonomy in robotic surgery. Most recent works treat these problems separately, making the deployment challenging. In this paper, we propose a unified framework for depth estimation and surgical tool segmentation in laparoscopic images. The network has an encoder-decoder architecture and comprises two branches for simultaneously performing depth estimation and segmentation. To train the network end to end, we propose a new multi-task loss function that effectively learns to estimate depth in an unsupervised manner, while requiring only semi-ground truth for surgical tool segmentation. We conducted extensive experiments on different datasets to validate these findings. The results showed that the end-to-end network successfully improved the state-of-the-art for both tasks while reducing the complexity during their deployment.
Kong L, Evans C, Su L, et al., 2022, Special issue on translational biophotonics, Journal of Physics D: Applied Physics, Vol: 55, ISSN: 0022-3727
This special issue on 'Translational Biophotonics' was initiated when COVID-19 started to spread worldwide in early 2020, with the aim of introducing the advances in optical tools that have the ability to transform clinical diagnostics, surgical guidance, and therapeutic approaches that together can have a profound impact on global health. This issue achieves this goal comprehensively, covering various topics including optical techniques for clinical diagnostics, monitoring and treatment, in addition to fundamental studies in biomedicine.
Shen Y, Chen B, He C, et al., 2022, Polarization Aberrations in High-Numerical-Aperture Lens Systems and Their Effects on Vectorial-Information Sensing, REMOTE SENSING, Vol: 14
Shanthakumar D, Elson D, Darzi A, et al., 2022, Tissue optical imaging as an emerging technique for intraoperative margin assessment in breast-conserving surgery, Publisher: SPRINGER, Pages: 153-154, ISSN: 1068-9265
Wang D, Qi J, Huang B, et al., 2022, Polarization-based smoke removal method for surgical images, Biomedical Optics Express, Vol: 13, Pages: 2364, ISSN: 2156-7085
Smoke generated during surgery affects tissue visibility and degrades image quality, affecting surgical decisions and limiting further image processing and analysis. Polarization is a fundamental property of light and polarization-resolved imaging has been studied and applied to general visibility restoration scenarios such as for smog or mist removal or in underwater environments. However, there is no related research or application for surgical smoke removal. Due to differences between surgical smoke and general haze scenarios, we propose an alternative imaging degradation model by redefining the form of the transmission parameters. The analysis of the propagation of polarized light interacting with the mixed medium of smoke and tissue is proposed to realize polarization-based smoke removal (visibility restoration). Theoretical analysis and observation of experimental data shows that the cross-polarized channel data generated by multiple scattering is less affected by smoke compared to the co-polarized channel. The polarization difference calculation for different color channels can estimate the model transmission parameters and reconstruct the image with restored visibility. Qualitative and quantitative comparison with alternative methods show that the polarization-based image smoke-removal method can effectively reduce the degradation of biomedical images caused by surgical smoke and partially restore the original degree of polarization of the samples.
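The polarization-difference principle behind this kind of degradation model can be demonstrated on synthetic data: if smoke-scattered light is partially polarized while tissue-backscattered light is (approximately) depolarized, the difference of co- and cross-polarized channels isolates the smoke term. This sketch assumes the transmission and smoke degree of polarization are known, whereas in practice they must be estimated from the images; all values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)

scene = rng.uniform(0.2, 0.8, size=(64, 64))   # smoke-free tissue radiance
t = 0.6                                         # transmission through smoke
A = 1.0                                         # smoke "airlight" intensity
p = 0.4                                         # DoP of smoke-scattered light

smoke = A * (1 - t)
# Co/cross-polarized channels: tissue light split evenly (depolarized),
# smoke light partially polarized with degree of polarization p
I_co = scene * t / 2 + smoke * (1 + p) / 2
I_cross = scene * t / 2 + smoke * (1 - p) / 2

I_total = I_co + I_cross
# The polarization difference isolates the smoke contribution ...
smoke_est = (I_co - I_cross) / p
# ... which can then be subtracted and the transmission compensated
recovered = (I_total - smoke_est) / t

err = float(np.abs(recovered - scene).max())
```

In this idealized forward model the recovery is exact; the paper's contribution lies in redefining the transmission parameters and estimating them per color channel for the mixed smoke-plus-tissue case, where these assumptions only hold approximately.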
Han J, Davids J, Ashrafian H, et al., 2022, A systematic review of robotic surgery: From supervised paradigms to fully autonomous robotic approaches, International Journal of Medical Robotics and Computer Assisted Surgery, Vol: 18, Pages: 1-11, ISSN: 1478-5951
Background: From traditional open surgery to laparoscopic surgery and robot-assisted surgery, advances in robotics, machine learning, and imaging are pushing the surgical approach towards better clinical outcomes. Pre-clinical and clinical evidence suggests that automation may standardise techniques, increase efficiency, and reduce clinical complications. Methods: A PRISMA-guided search was conducted across PubMed and OVID. Results: Of the 89 screened articles, 51 met the inclusion criteria, with 10 included in the final review. Automatic data segmentation, trajectory planning, intra-operative registration, trajectory drilling, and soft tissue robotic surgery were discussed. Conclusion: Although automated surgical systems remain conceptual, several research groups have developed supervised autonomous robotic surgical systems with increasing consideration for ethico-legal issues for automation. Automation paves the way for precision surgery and improved safety and opens new possibilities for deploying more robust artificial intelligence models, better imaging modalities and robotics to improve clinical outcomes.
He C, Chang J, Salter PS, et al., 2022, Revealing complex optical phenomena through vectorial metrics, Advanced Photonics Research, Vol: 4, Pages: 1-9, ISSN: 2699-9293
Advances in vectorial polarization-resolved imaging are bringing new capabilities to applications ranging from fundamental physics through to clinical diagnosis. Imaging polarimetry requires determination of the Mueller matrix (MM) at every point, providing a complete description of an object’s vectorial properties. Despite forming a comprehensive representation, the MM does not usually provide easily interpretable information about the object’s internal structure. Certain simpler vectorial metrics are derived from subsets of the MM elements. These metrics permit extraction of signatures that provide direct indicators of hidden optical properties of complex systems, while featuring an intriguing asymmetry about what information can or cannot be inferred via these metrics. We harness such characteristics to reveal the spin Hall effect of light, infer microscopic structure within laser-written photonic waveguides, and conduct rapid pathological diagnosis through analysis of healthy and cancerous tissue. This provides new insight for the broader usage of such asymmetric inferred vectorial information.
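A concrete example of a scalar metric derived from the Mueller matrix is the Gil-Bernabeu depolarization index, which collapses the full MM into a single interpretable indicator of how strongly an object depolarizes light. This is a minimal sketch for illustration and is not necessarily one of the specific metrics employed in the paper.

```python
import numpy as np

def depolarization_index(M):
    """Gil-Bernabeu depolarization index: 1 for a non-depolarizing
    element, 0 for an ideal depolarizer."""
    M = np.asarray(M, dtype=float)
    return float(np.sqrt((np.sum(M**2) - M[0, 0]**2) / (3.0 * M[0, 0]**2)))

# Ideal horizontal linear polarizer: fully polarizing, non-depolarizing
polarizer = 0.5 * np.array([[1.0, 1.0, 0.0, 0.0],
                            [1.0, 1.0, 0.0, 0.0],
                            [0.0, 0.0, 0.0, 0.0],
                            [0.0, 0.0, 0.0, 0.0]])

# Ideal depolarizer: any input state exits fully depolarized
depolarizer = np.diag([1.0, 0.0, 0.0, 0.0])

di_pol = depolarization_index(polarizer)
di_dep = depolarization_index(depolarizer)
```

Computed per pixel over an imaging-polarimetry dataset, such metrics give spatial maps that can act as the kind of structural signature the abstract describes, e.g. for separating healthy from cancerous tissue.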
Gkouzionis I, Nazarian S, Kawka M, et al., 2022, Real-time tracking of a diffuse reflectance spectroscopy probe used to aid histological validation of margin assessment in upper gastrointestinal cancer resection surgery, Journal of Biomedical Optics, Vol: 27, ISSN: 1083-3668
Significance: Diffuse reflectance spectroscopy (DRS) allows discrimination of tissue type. Its application is limited by the inability to mark the scanned tissue and the lack of real-time measurements. Aim: This study aimed to develop a real-time tracking system to enable localization of a DRS probe to aid the classification of tumor and non-tumor tissue. Approach: A green-colored marker attached to the DRS probe was detected using hue-saturation-value (HSV) segmentation. A live, augmented view of tracked optical biopsy sites was recorded in real time. Supervised classifiers were evaluated in terms of sensitivity, specificity, and overall accuracy. Custom software was developed for data collection, processing, and statistical analysis. Results: The measured root mean square error (RMSE) of DRS probe tip tracking was 1.18 ± 0.58 mm and 1.05 ± 0.28 mm for the x and y dimensions, respectively. The diagnostic accuracy of the system in classifying tumor and non-tumor tissue in real time was 94% for the stomach and 96% for the esophagus. Conclusions: We have successfully developed a real-time tracking and classification system for a DRS probe. When used on stomach and esophageal tissue for tumor detection, the accuracy achieved demonstrates the strength and clinical value of the technique to aid margin assessment in cancer resection surgery.
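The HSV-based marker detection step can be sketched with a vectorized RGB-to-HSV conversion and a green hue gate, followed by a centroid estimate of the detected marker. The synthetic frame, marker geometry, and threshold values are hypothetical, not the study's calibration.

```python
import numpy as np

def rgb_to_hsv(img):
    """Vectorized RGB -> HSV (channels in [0, 1], hue in degrees)."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    mx, mn = img.max(-1), img.min(-1)
    d = mx - mn
    h = np.zeros_like(mx)
    nz = d > 0
    rm = (mx == r) & nz
    gm = (mx == g) & nz & ~rm
    bm = (mx == b) & nz & ~rm & ~gm
    h[rm] = (60 * (g - b)[rm] / d[rm]) % 360
    h[gm] = 60 * (b - r)[gm] / d[gm] + 120
    h[bm] = 60 * (r - g)[bm] / d[bm] + 240
    s = np.where(mx > 0, d / np.where(mx > 0, mx, 1), 0)
    return h, s, mx

# Synthetic frame: grey background with a round green marker blob
frame = np.full((80, 80, 3), 0.4)
cy, cx = 30, 50                                 # true marker centre
yy, xx = np.ogrid[:80, :80]
blob = (yy - cy)**2 + (xx - cx)**2 <= 5**2
frame[blob] = [0.1, 0.8, 0.1]

h, s, v = rgb_to_hsv(frame)
mask = (h > 90) & (h < 150) & (s > 0.4) & (v > 0.3)   # green gate

ys, xs = np.nonzero(mask)
est = np.array([ys.mean(), xs.mean()])          # marker centroid (pixels)
rmse = float(np.sqrt(np.mean((est - np.array([cy, cx]))**2)))
```

Mapping the pixel centroid through a camera calibration would give the millimetre-scale probe-tip positions whose tracking RMSE is reported in the abstract.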
Gkouzionis I, Nazarian S, Darzi A, et al., 2022, Three-dimensional tissue reconstruction and tracking of a diffuse reflectance spectroscopy probe for real-time tissue classification in upper gastrointestinal cancer surgery, Photonics Europe: Clinical Biophotonics II
Gkouzionis I, Nazarian S, Kawka M, et al., 2022, Real-time tissue classification in stomach and oesophageal cancer based on optical tracking of a diffuse reflectance spectroscopy probe, Photonics West: Advanced Biomedical and Clinical Diagnostic and Surgical Guidance Systems XX
Huang B, Tuch D, Vyas K, et al., 2022, Self-supervised monocular laparoscopic images depth estimation leveraging interactive closest point in 3D to enable image-guided radioguided surgery, European Molecular Imaging Meeting
Gkouzionis I, Nazarian S, Patel N, et al., 2022, Towards real-time upper gastrointestinal resection margin assessment using a diffuse reflectance spectroscopy probe
The use of a diffuse reflectance spectroscopy probe for real-time classification of stomach and oesophageal tissue specimens can aid resection margin assessment in upper gastrointestinal cancer surgery.
Elson D, 2022, Multispectral and polarization-resolved endoscopic surgical imaging (invited), Photon 2022
Shanthakumar D, Chalau V, Darzi A, et al., 2022, Multiwavelength laser induced fluorescence spectroscopy for breast cancer diagnostics, Association of Breast Surgery
Ford L, Chalau V, McKenzie J, et al., 2022, Comparison of Mass Spectrometry and Optical Spectroscopy for novel real time diagnosis of colorectal cancer, United European Gastroenterology Week
Elson D, 2022, Polarization-resolved endoscopic surgical imaging (keynote), 16th International Conference on Laser Applications in Life Sciences
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.