Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems

BibTeX format

author = {Paudice, A and Muñoz-González, L and Gyorgy, A and Lupu, EC},
title = {Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection},
url = {},

RIS format (EndNote, RefMan)

AB - Machine learning has become an important component for many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples into the training data. Recent work in adversarial machine learning has shown that so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism, based on outlier detection, to mitigate the effect of these optimal poisoning attacks. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.
AU - Paudice,A
AU - Muñoz-González,L
AU - Gyorgy,A
AU - Lupu,EC
TI - Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection
UR -
ER -
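The abstract above describes a defence that pre-filters the training set with an anomaly detector before learning, on the observation that optimally crafted poisoning points sit far from the genuine data. The snippet below is a minimal NumPy sketch of that general idea only, not the paper's actual detector: it flags, per class, the training points farthest from the class centroid. The synthetic data, the `prefilter` helper, and the 0.95 distance quantile are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-class training set: genuine clusters around -2 and +2.
X0 = rng.normal(loc=-2.0, scale=1.0, size=(100, 2))
X1 = rng.normal(loc=+2.0, scale=1.0, size=(100, 2))

# Poisoning points: a few samples far from the genuine class-0 cluster,
# injected into the training data with label 0.
poison = rng.normal(loc=+6.0, scale=0.5, size=(5, 2))

X = np.vstack([X0, poison, X1])
y = np.array([0] * 105 + [1] * 100)

def prefilter(X, y, quantile=0.95):
    """Per class, keep only the points whose distance to the class
    centroid falls at or below the given quantile of in-class distances."""
    keep = np.zeros(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centroid = X[idx].mean(axis=0)
        dist = np.linalg.norm(X[idx] - centroid, axis=1)
        keep[idx] = dist <= np.quantile(dist, quantile)
    return keep

# Boolean mask selecting the training points that survive pre-filtering;
# the injected poisoning points lie far from the class-0 centroid and
# fall above the distance threshold.
mask = prefilter(X, y)
```

A classifier would then be trained on `X[mask], y[mask]` instead of the raw poisoned set. The threshold is a trade-off: a tighter quantile removes more poisoning points but also discards more genuine outliers.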