Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems
 
 
 

Contact

 

e.c.lupu

 
 

Location

 

564 Huxley Building, South Kensington Campus


Summary

 

Publications

Citation

BibTeX format

@unpublished{Paudice,
author = {Paudice, A and Muñoz-González, L and Gyorgy, A and Lupu, EC},
title = {Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection},
url = {http://arxiv.org/abs/1802.03041v1},
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB - Machine learning has become an important component for many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples in the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect of these optimal poisoning attacks based on outlier detection. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
AU - Paudice,A
AU - Muñoz-González,L
AU - Gyorgy,A
AU - Lupu,EC
TI - Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
UR - http://arxiv.org/abs/1802.03041v1
UR - http://hdl.handle.net/10044/1/57130
ER -
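The abstract describes pre-filtering the training set with outlier detection so that poisoning points, which lie far from genuine data, are removed before learning. The following is a minimal illustrative sketch of that general idea, not the authors' actual algorithm: it flags points whose distance to the class centroid exceeds a simple mean-plus-k-standard-deviations threshold. All names, the 2-D data, and the threshold rule `k=1.0` are assumptions made for illustration.

```python
# Illustrative sketch only: distance-to-centroid outlier pre-filtering,
# a simplified stand-in for the anomaly detection defence the paper proposes.
from statistics import mean, stdev

def distances_to_centroid(points):
    """Euclidean distance of each 2-D point to the centroid of the set."""
    cx = mean(p[0] for p in points)
    cy = mean(p[1] for p in points)
    return [((p[0] - cx) ** 2 + (p[1] - cy) ** 2) ** 0.5 for p in points]

def filter_outliers(points, k=1.0):
    """Keep only points whose centroid distance is within mean + k * stdev.

    The threshold rule and k=1.0 are arbitrary illustrative choices.
    """
    d = distances_to_centroid(points)
    threshold = mean(d) + k * stdev(d)
    return [p for p, dist in zip(points, d) if dist <= threshold]

# Genuine cluster near the origin plus one injected "poisoning" point far away.
clean = [(0.0, 0.1), (0.1, -0.1), (-0.1, 0.0), (0.05, 0.05)]
poisoned = clean + [(10.0, 10.0)]
kept = filter_outliers(poisoned)  # the distant point is filtered out
```

In practice a real defence would use a more robust detector (the poisoning point itself shifts the centroid and inflates the threshold), but the sketch captures why unconstrained optimal attack points are detectable: they sit far from the genuine data distribution.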