Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems

Contact

e.c.lupu

Location

564 Huxley Building, South Kensington Campus

Publications

Citation

BibTeX format

@inproceedings{Muñoz-González:2017:10.1145/3128572.3140451,
author = {Muñoz-González, L and Biggio, B and Demontis, A and Paudice, A and Wongrassamee, V and Lupu, EC and Roli, F},
booktitle = {Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security (AISec '17)},
doi = {10.1145/3128572.3140451},
pages = {27--38},
title = {Towards poisoning of deep learning algorithms with back-gradient optimization},
url = {http://dx.doi.org/10.1145/3128572.3140451},
year = {2017}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - © 2017 Association for Computing Machinery. A number of online services nowadays rely upon machine learning to extract valuable information from data collected in the wild. This exposes learning algorithms to the threat of data poisoning, i.e., a coordinated attack in which a fraction of the training data is controlled by the attacker and manipulated to subvert the learning process. To date, these attacks have been devised only against a limited class of binary learning algorithms, due to the inherent complexity of the gradient-based procedure used to optimize the poisoning points (a.k.a. adversarial training examples). In this work, we first extend the definition of poisoning attacks to multiclass problems. We then propose a novel poisoning algorithm based on the idea of back-gradient optimization, i.e., to compute the gradient of interest through automatic differentiation, while also reversing the learning procedure to drastically reduce the attack complexity. Compared to current poisoning strategies, our approach is able to target a wider class of learning algorithms, trained with gradient-based procedures, including neural networks and deep learning architectures. We empirically evaluate its effectiveness on several application examples, including spam filtering, malware detection, and handwritten digit recognition. We finally show that, similarly to adversarial test examples, adversarial training examples can also be transferred across different learning algorithms.
AU - Muñoz-González,L
AU - Biggio,B
AU - Demontis,A
AU - Paudice,A
AU - Wongrassamee,V
AU - Lupu,EC
AU - Roli,F
DO - 10.1145/3128572.3140451
EP - 38
PY - 2017///
SP - 27
T2  - Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security
TI  - Towards poisoning of deep learning algorithms with back-gradient optimization
UR - http://dx.doi.org/10.1145/3128572.3140451
UR - http://hdl.handle.net/10044/1/54926
ER -
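The abstract above frames poisoning as a bilevel problem: the attacker chooses poisoning points to maximize the learner's validation loss, where the model weights are themselves the result of training on the poisoned set. The following is a minimal illustrative sketch of that idea only, not the paper's method: the paper computes the outer gradient efficiently via back-gradient (reversed-learning) optimization, whereas this toy uses central finite differences on a synthetic 2-D logistic-regression problem, and all data and names here are invented.

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): a single poisoning
# point is optimised to increase validation loss of a retrained model.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, steps=200, lr=0.5):
    """Inner problem: fit logistic-regression weights by gradient descent."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        w -= lr * X.T @ (sigmoid(X @ w) - y) / len(y)
    return w

def val_loss(w, Xv, yv):
    """Outer objective the attacker wants to maximise (log loss)."""
    p = np.clip(sigmoid(Xv @ w), 1e-9, 1 - 1e-9)
    return -np.mean(yv * np.log(p) + (1 - yv) * np.log(1 - p))

# Two well-separated Gaussian blobs for training and validation.
X = np.vstack([rng.normal(-2, 1, (30, 2)), rng.normal(2, 1, (30, 2))])
y = np.array([0] * 30 + [1] * 30)
Xv = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
yv = np.array([0] * 20 + [1] * 20)

def attack_loss(xp, yp=0):
    """Validation loss after retraining with one poisoning point (xp, yp)."""
    w = train(np.vstack([X, xp]), np.append(y, yp))
    return val_loss(w, Xv, yv)

# Outer loop: gradient *ascent* on the poisoning point's features, with a
# finite-difference estimate standing in for the back-gradient computation.
xp, eps, step = np.zeros(2), 1e-4, 0.3
for _ in range(10):
    grad = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = eps
        grad[i] = (attack_loss(xp + e) - attack_loss(xp - e)) / (2 * eps)
    xp += step * grad / (np.linalg.norm(grad) + 1e-12)  # normalised step

clean = val_loss(train(X, y), Xv, yv)
poisoned = attack_loss(xp)
print(f"clean val loss:    {clean:.4f}")
print(f"poisoned val loss: {poisoned:.4f}")
```

Each outer step retrains the model with the candidate poisoning point and nudges the point's features in the direction that most increases validation loss; the paper's contribution is computing that outer gradient cheaply (by reversing the learning procedure with automatic differentiation) so the attack scales to multiclass problems and deep networks.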