Browse through all publications from the Institute of Global Health Innovation, which our Patient Safety Research Collaboration is part of. This feed includes reports and research papers from our Centre. 

Citation

BibTeX format

@article{Xu:2025:10.1016/j.media.2025.103917,
author = {Xu, C and Roddan, A and Kakaletri, I and Charalampaki, P and Giannarou, S},
doi = {10.1016/j.media.2025.103917},
journal = {Med Image Anal},
title = {Interpretable classification of endomicroscopic brain data via saliency consistent contrastive learning.},
url = {http://dx.doi.org/10.1016/j.media.2025.103917},
volume = {109},
year = {2025}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - In neurosurgery, accurate brain tissue characterization via probe-based Confocal Laser Endomicroscopy (pCLE) has become popular for guiding surgical decisions and ensuring safe tumour resections. In order to enable surgeons to trust a tissue classification model, interpretability of the result is required. However, state-of-the-art (SOTA) deep learning models for pCLE data classification exhibit limited interpretability. This paper introduces a novel image classification framework for interpretable brain tissue characterisation using pCLE data. Firstly, instead of the commonly employed cross-entropy based classification loss, we propose Label Contrastive Learning (LCL) loss to learn intra-category similarities and inter-category contrasts. We are then able to generate highly representative data embeddings, which not only improve classification performance but also distinguish characteristics from different tissue classes. Secondly, we design a Saliency Consistency (SC) module to enable the trained model to generate clinically relevant saliency maps of the input data. To further refine the saliency maps, a novel Top-K Maximum and Minimum Pooling (TK-MMP) layer is introduced to our SC module, to increase the contrast of saliency values between non-clinically relevant and clinically relevant areas. For the first time, the Exponential Moving Average (EMA) is used in a novel fashion to update global embeddings of the different tissue categories rather than the weights of the model. In addition, we propose a Global Embedding Inference (GEI) layer to replace learnable classification layers to achieve more robust classification by estimating the cosine similarity between the input data embeddings and global embeddings. Performance evaluation on ex-vivo and in-vivo pCLE brain data verifies that our proposed approach outperforms SOTA classification models in terms of accuracy, robustness and interpretability. 
Our source code is released at: https://github.com/XC9292/LCL-SC.
AU - Xu,C
AU - Roddan,A
AU - Kakaletri,I
AU - Charalampaki,P
AU - Giannarou,S
DO - 10.1016/j.media.2025.103917
PY - 2025///
TI - Interpretable classification of endomicroscopic brain data via saliency consistent contrastive learning.
T2 - Med Image Anal
UR - http://dx.doi.org/10.1016/j.media.2025.103917
UR - https://www.ncbi.nlm.nih.gov/pubmed/41456554
VL - 109
ER -
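The abstract's Global Embedding Inference (GEI) layer classifies by cosine similarity between input embeddings and per-class global embeddings that are updated with an Exponential Moving Average (EMA). A minimal sketch of that idea is below; the function names, array shapes, and momentum value are illustrative assumptions, not the authors' implementation (their actual code is in the linked repository).

```python
import numpy as np

def ema_update(global_emb, batch_emb, labels, momentum=0.99):
    """EMA update of per-class global embeddings (illustrative).

    global_emb: (C, D) array, one row per tissue class.
    batch_emb:  (N, D) embeddings from the current batch.
    labels:     (N,) integer class labels in [0, C).
    """
    updated = global_emb.copy()
    for c in np.unique(labels):
        class_mean = batch_emb[labels == c].mean(axis=0)
        updated[c] = momentum * updated[c] + (1 - momentum) * class_mean
    return updated

def gei_classify(emb, global_emb):
    """Predict classes by cosine similarity to global class embeddings."""
    emb_n = emb / np.linalg.norm(emb, axis=1, keepdims=True)
    glob_n = global_emb / np.linalg.norm(global_emb, axis=1, keepdims=True)
    sims = emb_n @ glob_n.T  # (N, C) cosine similarities
    return sims.argmax(axis=1)
```

Replacing a learnable classification head with similarity to slowly moving class prototypes, as the abstract describes, keeps the decision rule interpretable: each prediction is simply the nearest class embedding in cosine distance.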
