Imperial College London


Faculty of Engineering, Department of Computing

Chair in Machine Learning and Pattern Recognition







569 Huxley Building, South Kensington Campus






BibTeX format

@inproceedings{Svoboda2019PeerNets,
author = {Svoboda, J and Masci, J and Monti, F and Bronstein, MM and Guibas, L},
title = {PeerNets: Exploiting peer wisdom against adversarial attacks},
booktitle = {7th International Conference on Learning Representations (ICLR)},
year = {2019}
}

RIS format (EndNote, RefMan)

TY - CONF
AB - © 7th International Conference on Learning Representations, ICLR 2019. All Rights Reserved. Deep learning systems have become ubiquitous in many aspects of our lives. Unfortunately, it has been shown that such systems are vulnerable to adversarial attacks, making them prone to potential unlawful uses. Designing deep neural networks that are robust to adversarial attacks is a fundamental step in making such systems safer and deployable in a broader variety of applications (e.g. autonomous driving), but more importantly is a necessary step to design novel and more advanced architectures built on new computational paradigms rather than marginally building on the existing ones. In this paper we introduce PeerNets, a novel family of convolutional networks alternating classical Euclidean convolutions with graph convolutions to harness information from a graph of peer samples. This results in a form of non-local forward propagation in the model, where latent features are conditioned on the global structure induced by the graph, that is up to 3× more robust to a variety of white- and black-box adversarial attacks compared to conventional architectures with almost no drop in accuracy.
AU - Svoboda,J
AU - Masci,J
AU - Monti,F
AU - Bronstein,MM
AU - Guibas,L
PY - 2019///
TI - PeerNets: Exploiting peer wisdom against adversarial attacks
ER -
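The abstract describes latent features being conditioned on a graph of peer samples between ordinary convolutions. As a rough intuition for that non-local step, here is a minimal toy sketch (not the authors' implementation; `peer_aggregate` and its k-nearest-mean rule are hypothetical stand-ins for the paper's graph-based peer regularization):

```python
import numpy as np

def peer_aggregate(features, k=3):
    """Toy sketch of a peer-regularization step (hypothetical, not the
    paper's code): each latent feature vector is replaced by the mean of
    its k nearest neighbours drawn from the pooled features of all peer
    samples -- a crude stand-in for graph-based non-local propagation.

    features: (n_samples, n_pixels, dim) array of latent features.
    Returns an array of the same shape.
    """
    n, p, d = features.shape
    pool = features.reshape(n * p, d)           # features from all peers
    out = np.empty_like(features)
    for i in range(n):
        for j in range(p):
            v = features[i, j]
            # squared Euclidean distance to every pooled peer feature
            dist = ((pool - v) ** 2).sum(axis=1)
            idx = np.argsort(dist)[:k]          # k nearest peer features
            out[i, j] = pool[idx].mean(axis=0)  # non-local smoothing
    return out

rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 5, 8))              # 4 peers, 5 pixels, 8 dims
smoothed = peer_aggregate(feats, k=3)
print(smoothed.shape)                           # (4, 5, 8)
```

Interleaving such a peer-conditioned step with standard convolutions is what makes each sample's features depend on the whole peer graph rather than on that sample alone, which the paper credits for the improved robustness to adversarial perturbations.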