Search results
Journal article
Zhou T, Li M, Ruan S, et al., 2026, A reliable framework for brain tumor segmentation via multi-modal fusion and uncertainty modeling, Information Fusion, Vol: 129, ISSN: 1566-2535
Accurate brain tumor segmentation from MRI scans is critical for effective diagnosis and treatment planning. Recent advances in deep learning have significantly improved brain tumor segmentation performance. However, these models still face challenges in clinical adoption due to their inherent uncertainties and potential for errors. In this paper, we propose a novel MR brain tumor segmentation approach that integrates multi-modal data fusion and uncertainty quantification to improve the accuracy and reliability of brain tumor segmentation. Recognizing that each MR modality contributes unique insights into the tumor's characteristics, we propose a novel modality-aware guidance scheme that explicitly categorizes the modalities into "teacher" (FLAIR and T1c) and "student" (T2 and T1) groups. Since the teacher modalities are the most informative for identifying brain tumors, we propose a multi-modal teacher-student fusion strategy that leverages the teacher modalities to guide the student modalities in both spatial and channel feature representations. To address prediction reliability, we employ Monte Carlo dropout during training to generate multiple uncertainty estimates. Additionally, we develop a novel uncertainty-aware loss function that optimizes segmentation accuracy while quantifying the uncertainty in predictions. Experimental results on three BraTS datasets demonstrate the effectiveness of the proposed components and superior performance compared with state-of-the-art methods, highlighting their potential for clinical application.
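As an illustration of the Monte Carlo dropout idea mentioned in this abstract, the following is a minimal sketch, not the authors' code: dropout is kept active at inference and the variance across repeated stochastic forward passes serves as a per-pixel uncertainty estimate. The network, layer sizes, and dropout rate are all placeholder assumptions.

```python
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    # Illustrative stand-in for a segmentation network containing dropout.
    def __init__(self, in_ch=4, n_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Dropout2d(p=0.2),                 # stays active for MC dropout
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.net(x)

def mc_dropout_predict(model, x, n_samples=10):
    """Repeated stochastic forward passes with dropout enabled.

    Returns the mean softmax probability (the prediction) and the
    per-pixel variance across samples (an uncertainty estimate).
    """
    model.train()  # keeps dropout active; in practice BatchNorm stats would be frozen separately
    with torch.no_grad():
        probs = torch.stack([
            torch.softmax(model(x), dim=1) for _ in range(n_samples)
        ])  # (n_samples, B, C, H, W)
    return probs.mean(0), probs.var(0)

model = TinySegNet()
x = torch.randn(1, 4, 32, 32)   # 4 MR modalities: FLAIR, T1c, T2, T1
mean_prob, uncertainty = mc_dropout_predict(model, x, n_samples=8)
```

High-variance pixels typically cluster at tumor boundaries, which is where segmentation errors concentrate.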
-
Journal article
Hasan MK, Yang G, Yap CH, 2026, An efficient, scalable, and adaptable plug-and-play temporal attention module for motion-guided cardiac segmentation with sparse temporal labels, Med Image Anal, Vol: 110
Cardiac anatomy segmentation is essential for clinical assessment of cardiac function and disease diagnosis to inform treatment and intervention. Deep learning (DL) has improved cardiac anatomy segmentation accuracy, especially when information on cardiac motion dynamics is integrated into the networks. Several methods for incorporating motion information have been proposed; however, existing methods are not yet optimal: adding the time dimension to the input data incurs high computational cost, and incorporating registration into the segmentation network remains computationally costly and can be affected by registration errors, especially with non-DL registration. While attention-based motion modeling is promising, suboptimal design constrains its capacity to learn the complex and coherent temporal interactions inherent in cardiac image sequences. Here, we propose a novel approach to incorporating motion information into DL segmentation networks: a computationally efficient yet robust Temporal Attention Module (TAM), modeled as a small, multi-headed, cross-temporal attention module, which can be inserted plug-and-play into a broad range of segmentation networks (CNN, transformer, or hybrid) without drastic architectural modification. Extensive experiments on multiple cardiac imaging datasets, including 2D echocardiography (CAMUS and EchoNet-Dynamic), 3D echocardiography (MITEA), and 3D cardiac MRI (ACDC), confirm that TAM consistently improves segmentation performance across datasets when added to a range of networks, including UNet, FCN8s, UNetR, SwinUNetR, and the recent I2UNet and DT-VNet. Integrating TAM into SAM yields a temporal SAM that reduces Hausdorff distance (HD) from 3.99 mm to 3.51 mm on the CAMUS dataset, while integrating TAM into a pre-trained MedSAM reduces HD from 3.04 to 2.06 pixels after fine-tuning on the EchoNet-Dynamic dataset. On the ACDC 3D dataset, our TAM-UNet and TAM-DT-VNet achieve substantial reductions in HD, from 7.97 mm to 4.23 mm.
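To make the plug-and-play, cross-temporal attention idea concrete, the following is a hypothetical re-implementation sketch, not the published TAM: per-frame feature maps attend to each other along the time axis at every spatial location, so any backbone's features can be refined with motion context. All class names, channel counts, and head counts are assumptions.

```python
import torch
import torch.nn as nn

class TemporalAttentionModule(nn.Module):
    """Sketch of a plug-and-play cross-temporal attention block.

    Feature maps from T frames attend to each other along the time
    axis; the output has the same shape as the input, so the block can
    be dropped between any two layers of a segmentation backbone.
    """
    def __init__(self, channels, n_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, feats):
        # feats: (B, T, C, H, W) — per-frame feature maps from any backbone
        b, t, c, h, w = feats.shape
        # treat each spatial location as a batch item, time as the sequence
        x = feats.permute(0, 3, 4, 1, 2).reshape(b * h * w, t, c)
        out, _ = self.attn(x, x, x)          # frames attend across time
        x = self.norm(x + out)               # residual + norm, transformer-style
        return x.reshape(b, h, w, t, c).permute(0, 3, 4, 1, 2)

tam = TemporalAttentionModule(channels=32)
feats = torch.randn(2, 5, 32, 8, 8)          # 5-frame cardiac sequence
refined = tam(feats)                         # same shape as the input
```

Because input and output shapes match, inserting such a block requires no changes to the surrounding architecture, which is what makes the approach plug-and-play.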
-
Journal article
Wang F, Wang Z, Li Y, et al., 2026, Toward Modality- and Sampling-Universal Learning Strategies for Accelerating Cardiovascular Imaging: Summary of the CMRxRecon2024 Challenge, IEEE Trans Med Imaging, Vol: 45, Pages: 1872-1887
Cardiovascular health is vital to human well-being, and cardiac magnetic resonance (CMR) imaging is considered the clinical reference standard for diagnosing cardiovascular disease. However, its adoption is hindered by long scan times, complex contrasts, and inconsistent quality. While deep learning methods perform well on specific CMR imaging sequences, they often fail to generalize across modalities and sampling schemes. The lack of benchmarks for high-quality, fast CMR image reconstruction further limits technology comparison and adoption. The CMRxRecon2024 challenge, attracting over 200 teams from 18 countries, addressed these issues with two tasks: generalization to unseen modalities and robustness to diverse undersampling patterns. We introduced the largest public multi-modality CMR raw dataset, an open benchmarking platform, and shared code. Analysis of the best-performing solutions revealed that prompt-based adaptation and enhanced physics-driven consistency enabled strong cross-scenario performance. These findings establish principles for generalizable reconstruction models and advance clinically translatable AI in cardiovascular imaging.
-
Journal article
Dong Y, Xiao X, Zhuang X-X, et al., 2026, DeepDrugDiscovery identifies blood–brain barrier permeable autophagy enhancers for Alzheimer's disease, Nature Biomedical Engineering, ISSN: 2157-846X
-
Journal article
Wen K, Ferreira PF, Di Biase Oemick A, et al., 2026, Evaluation of Third-Order Motion-Compensated Cardiac Diffusion Tensor Imaging Across Cardiac Phases Using an Ultra-High-Performance Clinical Scanner, Magn Reson Med
PURPOSE: To evaluate a third-order motion-compensated spin echo (M3-MCSE) sequence at multiple cardiac phases on a clinical 3 T MRI scanner with ultra-high performance (UHP) gradients (200 mT/m), compared with stimulated echo acquisition mode (STEAM) and second-order MCSE (M2-MCSE) for cardiac diffusion tensor imaging (cDTI). METHODS: Twenty healthy subjects underwent mid-ventricular short-axis cDTI at peak systole and diastasis using STEAM, M2-MCSE, and M3-MCSE. cDTI metrics and image quality were compared. In five additional healthy subjects, diffusion-weighted images were obtained at multiple trigger delays distributed over diastasis to assess motion-induced signal loss. RESULTS: Compared to M2-MCSE, M3-MCSE yielded higher systolic helix angle map scores (p = 0.007) but lower diastolic scores (p = 0.001), with no significant difference in mean diffusivity, fractional anisotropy, helix angle transmurality, or sheetlet angle in systole/diastole. STEAM-derived apparent diffusion coefficients (ADC) were consistent across diastasis, while ADC for MCSE sequences increased at sub-optimal trigger delays. CONCLUSION: UHP gradients enabled in vivo evaluation of M3-MCSE, showing superior systolic cDTI but reduced diastolic performance versus M2-MCSE due to reduced signal-to-noise ratio and a longer motion-sensitive window. Future work may consider numerically optimized gradient designs to enhance MCSE robustness throughout the cardiac cycle.
-
Journal article
Luo Y, Ferreira PF, Wen K, et al., 2026, Optimized Reduced Field of View and Fat Suppression Methods for Interleaved Multislice In Vivo Cardiac Diffusion Tensor Imaging, Magn Reson Med
PURPOSE: Slice interleaving, a limited phase encode (PE) field of view (FOV), and effective fat suppression are vital for efficient cardiac diffusion tensor imaging (cDTI) with minimal artifacts. This study aimed to optimize reduced-FOV and fat suppression methods for interleaved multislice cDTI to improve signal-to-noise ratio (SNR) and minimize artifacts. METHODS: Two-slice motion-compensated spin echo datasets from 20 healthy volunteers were acquired. Four reduced PE-FOV sequences were evaluated: a 2DRF pulse; either a 180° or a 90° pulse applied in the PE direction; and the proposed flip-back sequence with a nonselective 180° pulse after readout to restore inverted magnetization. Four fat suppression techniques were implemented: no fat suppression (standard); fat saturation; binomial water excitation; and spectral attenuated inversion recovery (SPAIR). RESULTS: The proposed flip-back sequence with SPAIR achieved the highest median SNR, significantly higher (p < 0.01) than 2DRF with SPAIR, the current state of the art. SPAIR and water excitation performed comparably when combined with the flip-back sequence, and both yielded superior image quality compared with no suppression or fat saturation. SPAIR showed robust fat suppression across most subjects, whilst water excitation exhibited advantages in some subjects with a high body mass index. CONCLUSION: The proposed flip-back sequence with SPAIR enables efficient interleaved multislice imaging with reduced PE FOV and effective fat suppression, facilitating clinical translation of in vivo cDTI.
-
Journal article
Zhang Z, Jing P, Wang Z, et al., 2026, Cyclic Self-Supervised Diffusion for Ultra Low-field to High-field MRI Synthesis, IEEE Trans Med Imaging, Vol: PP
Synthesizing high-quality images from low-field MRI holds significant potential. Low-field MRI is cheaper, more accessible, and safer, but suffers from low resolution and poor signal-to-noise ratio. This synthesis process can reduce reliance on costly acquisitions and expand data availability. However, synthesizing high-field MRI still suffers from a clinical fidelity gap. There is a need to preserve anatomical fidelity, enhance fine-grained structural details, and bridge domain gaps in image contrast. To address these issues, we propose a cyclic self-supervised diffusion (CSS-Diff) framework for high-field MRI synthesis from real low-field MRI data. Our core idea is to reformulate diffusion-based synthesis under a cycle-consistent constraint. It enforces anatomical preservation throughout the generative process rather than just relying on paired pixel-level supervision. The CSS-Diff framework further incorporates two novel processes. The slice-wise gap perception network aligns inter-slice inconsistencies via contrastive learning. The local structure correction network enhances local feature restoration through self-reconstruction of masked and perturbed patches. Extensive experiments on cross-field synthesis tasks demonstrate the effectiveness of our method, achieving state-of-the-art performance (e.g., 31.80 ± 2.70 dB in PSNR, 0.943 ± 0.102 in SSIM, and 0.0864 ± 0.0689 in LPIPS). Beyond pixel-wise fidelity, our method also preserves fine-grained anatomical structures compared with the original low-field MRI (e.g., left cerebral white matter error drops from 12.1% to 2.1%, cortex from 4.2% to 3.7%). To conclude, our CSS-Diff can synthesize images that are both quantitatively reliable and anatomically consistent. The code is available at: https://github.com/ayanglab/CSS-Diff.
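The cycle-consistent constraint described in this abstract can be illustrated with a toy sketch, not the CSS-Diff code (the actual networks are diffusion-based and far larger): a low-field image mapped to the high-field domain and back should return to itself, which discourages anatomical drift during synthesis. Both generators here are placeholder single-layer stand-ins.

```python
import torch
import torch.nn as nn

# Hypothetical generators standing in for the low→high and high→low mappings.
g_low2high = nn.Conv2d(1, 1, 3, padding=1)
g_high2low = nn.Conv2d(1, 1, 3, padding=1)

def cycle_loss(low):
    """Cycle-consistent constraint: low → synthetic high → reconstructed low
    should return to the input; the L1 penalty discourages anatomical drift."""
    fake_high = g_low2high(low)
    recon_low = g_high2low(fake_high)
    return nn.functional.l1_loss(recon_low, low)

loss = cycle_loss(torch.randn(1, 1, 16, 16))
loss.backward()   # gradients flow through both generators
```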
-
Journal article
Yeung M, Watts T, Tan SYW, et al., 2026, Stain consistency learning: handling stain variation for automatic digital pathology segmentation, IEEE Open Journal of Engineering in Medicine and Biology, ISSN: 2644-1276
Stain variation poses a major challenge for automated digital pathology. Numerous techniques address this issue, yet show limited success, especially outside H&E stains and classification tasks. We propose Stain Consistency Learning (SCL), combining stain-specific augmentation with a novel consistency loss to learn stain-invariant features. We conduct the first large-scale evaluation of ten methods on Masson's trichrome and H&E datasets for segmentation. Our results demonstrate that traditional stain normalization offers little benefit, while stain augmentation and adversarial learning significantly improve performance. SCL consistently outperforms all other methods.
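The pairing of stain augmentation with a consistency loss can be sketched as follows; this is an assumption-laden illustration, not the SCL implementation. Two stain-perturbed views of the same patch are encoded and pulled together in feature space, so the encoder learns to ignore stain variation. The encoder, the per-channel jitter, and all sizes are placeholders.

```python
import torch
import torch.nn as nn

# Toy encoder standing in for the segmentation network's feature extractor.
encoder = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten())

def stain_jitter(img):
    # Crude stand-in for stain-specific augmentation: random per-channel
    # scale/shift mimicking stain intensity variation.
    scale = 1 + 0.2 * torch.randn(1, 3, 1, 1)
    shift = 0.05 * torch.randn(1, 3, 1, 1)
    return (img * scale + shift).clamp(0, 1)

def consistency_loss(img):
    """Stain-invariance objective: two stain-augmented views of the same
    patch should map to nearby feature vectors."""
    f1 = encoder(stain_jitter(img))
    f2 = encoder(stain_jitter(img))
    return nn.functional.mse_loss(f1, f2)

loss = consistency_loss(torch.rand(1, 3, 32, 32))
```

In a full pipeline this term would be added to the segmentation loss, so the network optimizes accuracy and stain invariance jointly.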
-
Journal article
Gao Y, Marshall D, Xing X, et al., 2026, Anatomy-Guided Radiology Report Generation with Pathology-Aware Regional Prompts, IEEE Open Journal of Engineering in Medicine and Biology, ISSN: 2644-1276
-
Journal article
Liao Y, Zheng Y, Zhu J, et al., 2026, Self-attention-based mixture-of-experts framework for non-invasive prediction of MGMT promoter methylation in glioblastoma using multi-modal MRI, Displays, Vol: 92, ISSN: 0141-9382
Glioblastoma (GBM) is an aggressive brain tumor associated with poor prognosis and limited treatment options. The methylation status of the O6-methylguanine-DNA methyltransferase (MGMT) promoter is a critical biomarker for predicting the efficacy of temozolomide chemotherapy in GBM patients. However, current methods for determining MGMT promoter methylation, including invasive and costly techniques, hinder their widespread clinical application. In this study, we propose a novel non-invasive deep learning framework based on a Mixture-of-Experts (MoE) architecture for predicting MGMT promoter methylation status using multi-modal magnetic resonance imaging (MRI) data. Our MoE model incorporates modality-specific expert networks built on the ResNet18 architecture, with a self-attention-based gating mechanism that dynamically selects and integrates the most relevant features across MRI modalities (T1-weighted, contrast-enhanced T1, T2-weighted, and fluid-attenuated inversion recovery). We evaluate the proposed framework on the BraTS2021 and TCGA-GBM datasets, showing superior performance compared to conventional deep learning models in terms of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Furthermore, Grad-CAM visualizations provide enhanced interpretability by highlighting biologically relevant regions in the tumor and peritumoral areas that influence model predictions. The proposed framework represents a promising tool for integrating imaging biomarkers into precision oncology workflows, offering a scalable, cost-effective, and interpretable solution for non-invasive MGMT methylation prediction in GBM.
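A self-attention-gated mixture of experts of the kind this abstract describes can be sketched as below; this is a toy illustration under stated assumptions, not the authors' model. One tiny expert per MRI modality replaces the ResNet18 backbones, and a self-attention layer over the expert outputs drives a softmax gate that weights each modality's contribution.

```python
import torch
import torch.nn as nn

class AttentionMoE(nn.Module):
    """Sketch of a mixture-of-experts with a self-attention gate, assuming
    one expert per MRI modality (T1, T1c, T2, FLAIR). The toy experts stand
    in for the ResNet18 backbones described in the abstract."""
    def __init__(self, n_experts=4, feat_dim=16, n_classes=2):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Conv2d(1, feat_dim, 3, padding=1), nn.ReLU(),
                          nn.AdaptiveAvgPool2d(1), nn.Flatten())
            for _ in range(n_experts))
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=2, batch_first=True)
        self.gate = nn.Linear(feat_dim, 1)
        self.head = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        # x: (B, M, H, W), one channel per modality
        feats = torch.stack(
            [e(x[:, i:i + 1]) for i, e in enumerate(self.experts)], dim=1)
        ctx, _ = self.attn(feats, feats, feats)         # experts attend to each other
        weights = torch.softmax(self.gate(ctx), dim=1)  # (B, M, 1) gating weights
        fused = (weights * feats).sum(dim=1)            # weighted expert fusion
        return self.head(fused)

model = AttentionMoE()
logits = model(torch.randn(2, 4, 24, 24))   # batch of 2, 4 modalities
```

Because the gate is computed from the attended context rather than each expert in isolation, the weight given to one modality can depend on what the others contain.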
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
Contact
For enquiries about the MRI Physics Collective, please contact:
Mary Finnegan
Senior MR Physicist at the Imperial College Healthcare NHS Trust
Pete Lally
Assistant Professor in Magnetic Resonance (MR) Physics at Imperial College
Jan Sedlacik
MR Physicist at the Robert Steiner MR Unit, Hammersmith Hospital Campus