Citation

BibTeX format

@article{Jiang:2025:10.1016/j.media.2025.103673,
author = {Jiang, L and Ma, L and Yang, G},
doi = {10.1016/j.media.2025.103673},
journal = {Med Image Anal},
title = {Shadow defense against gradient inversion attack in federated learning.},
url = {http://dx.doi.org/10.1016/j.media.2025.103673},
volume = {105},
year = {2025}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB  - Federated learning (FL) has emerged as a transformative framework for privacy-preserving distributed training, allowing clients to collaboratively train a global model without sharing their local data. This is especially crucial in sensitive fields like healthcare, where protecting patient data is paramount. However, privacy leakage remains a critical challenge, as the communication of model updates can be exploited by potential adversaries. Gradient inversion attacks (GIAs), for instance, allow adversaries to approximate the gradients used for training and reconstruct training images, thus stealing patient privacy. Existing defense mechanisms obscure gradients, yet lack a nuanced understanding of which gradients or types of image information are most vulnerable to such attacks. These indiscriminate calibrated perturbations result in either excessive privacy protection that degrades model accuracy, or insufficient protection that fails to safeguard sensitive information. Therefore, we introduce a framework that addresses these challenges by leveraging a shadow model with interpretability for identifying sensitive areas. This enables a more targeted and sample-specific noise injection. Specifically, our defensive strategy achieves discrepancies of 3.73 in PSNR and 0.2 in SSIM compared to the circumstance without defense on the ChestXRay dataset, and 2.78 in PSNR and 0.166 in SSIM on the EyePACS dataset. Moreover, it minimizes adverse effects on model performance, with less than 1% F1 reduction compared to SOTA methods. Our extensive experiments, conducted across diverse types of medical images, validate the generalization of the proposed framework. The stable defense improvements for FedAvg are consistently over 1.5% in LPIPS and SSIM. It also offers a universal defense against various GIA types, especially for the sensitive areas in images.
AU  - Jiang, L
AU  - Ma, L
AU  - Yang, G
DO  - 10.1016/j.media.2025.103673
PY  - 2025///
TI  - Shadow defense against gradient inversion attack in federated learning.
T2  - Med Image Anal
UR  - http://dx.doi.org/10.1016/j.media.2025.103673
UR  - https://www.ncbi.nlm.nih.gov/pubmed/40570807
VL  - 105
ER  -

Contact

For enquiries about the MRI Physics Collective, please contact:

Mary Finnegan
Senior MR Physicist at the Imperial College Healthcare NHS Trust

Pete Lally
Assistant Professor in Magnetic Resonance (MR) Physics at Imperial College

Jan Sedlacik
MR Physicist at the Robert Steiner MR Unit, Hammersmith Hospital Campus