Imperial College London


Faculty of Engineering, Dyson School of Design Engineering

Senior Lecturer



+44 (0)20 7594 2584
h.haddadi




Dyson Building, South Kensington Campus






BibTeX format

@article{osia_privacy_preserving_deep_inference,
  author = {Osia, SA and Shamsabadi, AS and Taheri, A and Katevas, K and Rabiee, HR and Lane, ND and Haddadi, H},
  title = {Privacy-Preserving Deep Inference for Rich User Data on The Cloud},
  url = {}
}

RIS format (EndNote, RefMan)

AB - Deep neural networks are increasingly being used in a variety of machine learning applications applied to rich user data on the cloud. However, this approach introduces a number of privacy and efficiency challenges, as the cloud operator can perform secondary inferences on the available data. Recently, advances in edge processing have paved the way for more efficient and private data processing at the source for simple tasks and lighter models, though these remain a challenge for larger and more complicated models. In this paper, we present a hybrid approach for breaking down large, complex deep models for cooperative, privacy-preserving analytics. We do this by breaking down the popular deep architectures and fine-tuning them in a particular way. We then evaluate the privacy benefits of this approach based on the information exposed to the cloud service. We also assess the local inference cost of different layers on a modern handset for mobile applications. Our evaluations show that by using certain kinds of fine-tuning and embedding techniques, and at a small processing cost, we can greatly reduce the level of information available to unintended tasks applied to the data features on the cloud, and hence achieve the desired tradeoff between privacy and performance.
AU - Osia, SA
AU - Shamsabadi, AS
AU - Taheri, A
AU - Katevas, K
AU - Rabiee, HR
AU - Lane, ND
AU - Haddadi, H
TI - Privacy-Preserving Deep Inference for Rich User Data on The Cloud
UR -
ER -
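The hybrid approach described in the abstract can be illustrated with a minimal sketch: the early layers of a model run on the device, and only the resulting intermediate feature vector is sent to the cloud, which runs the remaining layers. This is not the authors' code; the layer weights and the split point below are arbitrary placeholders chosen for illustration.

```python
def relu(v):
    """Elementwise ReLU over a plain Python list."""
    return [x if x > 0 else 0.0 for x in v]

def dense(weights, bias):
    """Build a fully connected layer (ReLU activation) from weight rows and biases."""
    def layer(v):
        return relu([sum(w * x for w, x in zip(row, v)) + b
                     for row, b in zip(weights, bias)])
    return layer

# A toy 3-layer model; weights and split point are hypothetical.
model = [
    dense([[0.5, -0.2], [0.1, 0.3]], [0.0, 0.1]),  # runs on device
    dense([[0.7, 0.4], [-0.3, 0.6]], [0.1, 0.0]),  # runs on device
    dense([[1.0, -1.0]], [0.0]),                   # runs in cloud
]

SPLIT = 2  # layers [0, SPLIT) stay on the device

def device_forward(x):
    """Run the local part of the model; only its output leaves the device."""
    for layer in model[:SPLIT]:
        x = layer(x)
    return x

def cloud_forward(features):
    """Run the remaining layers on the received feature vector."""
    for layer in model[SPLIT:]:
        features = layer(features)
    return features

features = device_forward([1.0, 2.0])   # feature vector sent to the cloud
prediction = cloud_forward(features)    # final inference result
```

The privacy argument in the paper rests on the feature vector crossing the network boundary instead of the raw input, with fine-tuning and embedding techniques limiting what unintended secondary inferences can recover from it.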