Imperial College London

Professor Kin Leung

Faculty of Engineering, Department of Electrical and Electronic Engineering

Tanaka Chair in Internet Technology
 
 
 

Contact

 

+44 (0)20 7594 6238
kin.leung

 
 

Assistant

 

Miss Vanessa Rodriguez-Gonzalez +44 (0)20 7594 6267

 

Location

 

810a, Electrical Engineering, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Tuor:2018:10.1117/12.2306000,
author = {Tuor, T and Wang, S and Leung, KK and Ko, BJ},
doi = {10.1117/12.2306000},
publisher = {Proceedings of SPIE},
title = {Understanding information leakage of distributed inference with deep neural networks: Overview of information theoretic approach and initial results},
url = {http://dx.doi.org/10.1117/12.2306000},
year = {2018}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - With the emergence of Internet of Things (IoT) and edge computing applications, data is often generated by sensors and end users at the network edge, and decisions are made using these collected data. Edge devices often require cloud services in order to perform intensive inference tasks. Consequently, the inference of a deep neural network (DNN) model is often partitioned between the edge and the cloud. In this case, the edge device performs inference up to an intermediate layer of the DNN, and offloads the output features to the cloud for the inference of the remainder of the network. Partitioning a DNN can help to improve energy efficiency but also raises privacy concerns: the cloud platform can recover part of the raw data using intermediate results of the inference task. Recently, studies have also quantified an information-theoretic trade-off between compression and prediction in DNNs. In this paper, we conduct a simple experiment to understand to what extent it is possible to reconstruct the raw data given the output of an intermediate layer, in other words, to what extent we leak private information when sending the output of an intermediate layer to the cloud. We also present an overview of mutual-information-based studies of DNNs, to help understand information leakage and some potential ways to make distributed inference more secure.
AU - Tuor,T
AU - Wang,S
AU - Leung,KK
AU - Ko,BJ
DO - 10.1117/12.2306000
PB - Proceedings of SPIE
PY - 2018///
SN - 0277-786X
TI - Understanding information leakage of distributed inference with deep neural networks: Overview of information theoretic approach and initial results
UR - http://dx.doi.org/10.1117/12.2306000
UR - http://gateway.webofknowledge.com/gateway/Gateway.cgi?GWVersion=2&SrcApp=PARTNER_APP&SrcAuth=LinksAMR&KeyUT=WOS:000453766700012&DestLinkType=FullRecord&DestApp=ALL_WOS&UsrCustomerID=1ba7043ffcc86c417c072aa74d649202
UR - http://hdl.handle.net/10044/1/69238
ER -
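The abstract above describes partitioned inference: the edge device computes up to an intermediate layer and offloads only that layer's output features to the cloud, which completes the prediction. A minimal sketch of that split, using a hypothetical toy network with random weights (not the model from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

# Toy 3-layer feedforward network with random weights (illustrative only;
# layer sizes and the split point are arbitrary assumptions).
W1 = rng.standard_normal((8, 16))
W2 = rng.standard_normal((16, 16))
W3 = rng.standard_normal((16, 4))

def edge_part(x):
    """Runs on the edge device: inference up to an intermediate layer."""
    return relu(relu(x @ W1) @ W2)

def cloud_part(features):
    """Runs in the cloud: finishes inference from the offloaded features."""
    return features @ W3

x = rng.standard_normal((1, 8))   # raw sensor data stays on the edge device
features = edge_part(x)           # only these intermediate features leave the edge
output = cloud_part(features)     # cloud completes the prediction

# Partitioned inference matches the unpartitioned forward pass.
full = relu(relu(x @ W1) @ W2) @ W3
assert np.allclose(output, full)
```

The privacy question the paper studies is exactly about `features`: the cloud never sees `x`, but the intermediate features may still allow partial reconstruction of the raw input.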