Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems

Contact

e.c.lupu

Location

564 Huxley Building, South Kensington Campus

Publications

249 results found

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2021, Regularization can help mitigate poisoning attacks... with the right hyperparameters, Publisher: arXiv

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using L2 regularization, empirically show the limitations of previous strategies and evidence the benefits of using L2 regularization to dampen the effect of poisoning attacks.

Working paper
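
The entry above argues that the strength of L2 regularization changes how much damage poisoning can do. The sketch below is a minimal illustration of that interplay, not the paper's minimax bilevel attack: it flips a fraction of training labels as a crude stand-in for poisoning and compares the test accuracy of scikit-learn logistic regression at several regularization strengths (in scikit-learn, smaller C means stronger L2 regularization). The dataset and all parameter choices are assumptions for demonstration only.

```python
# Minimal sketch (assumed dataset and parameters, not the paper's minimax
# bilevel attack): flip a fraction of training labels as a crude stand-in for
# poisoning and compare test accuracy of L2-regularized logistic regression
# at several regularization strengths.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.5, random_state=0)

# Poison 10% of the training set by flipping labels.
n_poison = int(0.1 * len(y_tr))
flip_idx = rng.choice(len(y_tr), n_poison, replace=False)
y_pois = y_tr.copy()
y_pois[flip_idx] = 1 - y_pois[flip_idx]

for C in (100.0, 1.0, 0.01):   # smaller C = stronger L2 regularization
    clean = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    pois = LogisticRegression(C=C, max_iter=1000).fit(X_tr, y_pois).score(X_te, y_te)
    print(f"C={C:>6}: clean acc={clean:.3f}, poisoned acc={pois:.3f}, drop={clean - pois:.3f}")
```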

Co KT, Muñoz-González L, Kanthan L, Glocker B, Lupu EC et al., 2021, Universal Adversarial Robustness of Texture and Shape-Biased Models, IEEE International Conference on Image Processing (ICIP)

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.

Conference paper

Chizari H, Lupu EC, 2021, Extracting randomness from the trend of IPI for cryptographic operators in implantable medical devices, IEEE Transactions on Dependable and Secure Computing, Vol: 18, Pages: 875-888, ISSN: 1545-5971

Achieving secure communication between an Implantable Medical Device (IMD) inside the body and a gateway outside the body has shown its criticality with recent reports of hackings such as in St. Jude Medical's Implantable Cardiac Devices, Johnson and Johnson insulin pumps and vulnerabilities in brain Neuro-implants. The use of asymmetric cryptography in particular is not a practical solution for IMDs due to their scarce computational and power resources, so symmetric key cryptography is preferred. One of the factors in the security of a symmetric cryptographic system is the use of a strong key for encryption. A solution to develop such a strong key without using extensive resources in an IMD is to extract it from the body's physiological signals. In order to have a strong enough key, the physiological signal must be a strong source of randomness, and the InterPulse Interval (IPI) has been proposed as such a source. A strong randomness source should satisfy five conditions: Universality (available on all people), Liveness (available at any time), Robustness (strong random numbers), Permanence (independent from its history) and Uniqueness (independent from other sources). Nevertheless, current methods for extracting randomness from IPI have not been examined against these conditions (mainly the last three). In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Second, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that IPI does not satisfy the Robustness and Permanence conditions of a randomness source; thus, extraction of a strong uniform random number from IPI values is, mathematically, impossible. Third, rather than using the value of IPI, we propose the trend of IPI as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluat…

Journal article
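
The Robustness and Permanence conditions discussed above can be probed empirically. The sketch below is a toy illustration, not the paper's methodology or its MRE-IPI extractor: it estimates a Santha-Vazirani style delta for a bit stream by measuring how far the conditional probability of the next bit, given a short history, deviates from 1/2. The history length and count threshold are arbitrary assumptions.

```python
# Toy sketch (assumptions only): empirically estimate a Santha-Vazirani style
# delta for a bit stream, i.e. how far the conditional probability of the next
# bit given a short history deviates from 1/2. A strong randomness source
# keeps this deviation small; the paper applies far more rigorous analyses to
# real IPI data.
from collections import defaultdict
import random

def sv_delta(bits, history=3):
    counts = defaultdict(lambda: [0, 0])   # context -> [count of 0s, count of 1s]
    for i in range(history, len(bits)):
        ctx = tuple(bits[i - history:i])
        counts[ctx][bits[i]] += 1
    deltas = []
    for zeros, ones in counts.values():
        total = zeros + ones
        if total >= 50:                     # ignore sparsely observed contexts
            deltas.append(abs(ones / total - 0.5))
    return max(deltas) if deltas else None

fair = [random.getrandbits(1) for _ in range(100_000)]
biased = [1 if random.random() < 0.6 else 0 for _ in range(100_000)]
print("fair coin   delta ~", sv_delta(fair))
print("biased coin delta ~", sv_delta(biased))
```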

Hau Z, Co KT, Demetriou S, Lupu E et al., 2021, Object removal attacks on LiDAR-based 3D object detectors, NDSS 2021 Workshop, Publisher: Internet Society

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects' RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.

Conference paper
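
To make the object removal idea above concrete, here is a toy geometric sketch, not the paper's attack pipeline: points falling inside a target's region of interest are relocated further along their ray from the sensor, emulating returns that land behind the object. The bounding box, the extra range and the uniform point cloud are all assumptions for illustration.

```python
# Toy sketch (all shapes and parameters are assumptions, not the paper's setup):
# emulate an object-removal style perturbation by pushing points inside a
# region of interest further away along their ray from the sensor, so a
# detector sees returns "behind" the object instead of on it.
import numpy as np

def remove_object_points(points, box_min, box_max, extra_range=5.0):
    """points: (N, 3) array in the sensor frame; box_min/box_max: RoI corners."""
    in_roi = np.all((points >= box_min) & (points <= box_max), axis=1)
    perturbed = points.copy()
    dists = np.linalg.norm(points[in_roi], axis=1, keepdims=True)
    directions = points[in_roi] / np.maximum(dists, 1e-6)
    # Relocate RoI returns a few metres further along the same ray.
    perturbed[in_roi] = directions * (dists + extra_range)
    return perturbed

cloud = np.random.uniform(-20, 20, size=(10_000, 3))
shifted = remove_object_points(cloud,
                               box_min=np.array([2.0, 2.0, -1.0]),
                               box_max=np.array([5.0, 5.0, 2.0]))
print("points moved:", int(np.sum(np.any(cloud != shifted, axis=1))))
```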

Matachana A, Co KT, Munoz Gonzalez L, Martinez D, Lupu E et al., 2021, Robustness and transferability of universal attacks on compressed models, AAAI 2021 Workshop, Publisher: AAAI

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient-masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, observing different phenomena in the two datasets used in our experiments.

Conference paper

Co KT, Muñoz-González L, Kanthan L, Lupu EC et al., 2021, Real-time Detection of Practical Universal Adversarial Perturbations

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many different inputs; this leads to realistic and effective attacks that can be applied at scale. In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs by identifying suspicious neuron hyper-activations. Our results show the effectiveness of HyperNeuron on multiple tasks (image classification, object detection), against a wide variety of universal attacks, and in realistic scenarios, like perceptual ad-blocking and adversarial patches. HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real-time, which shows promise for the robust deployment of machine learning systems.

Journal article
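
The abstract above describes detecting UAPs from suspicious neuron hyper-activations. The sketch below illustrates the general idea only, not the HyperNeuron algorithm: per-neuron activation statistics are calibrated on clean inputs, and a test input is flagged when its aggregate deviation exceeds a quantile threshold. The activation matrices here are simulated stand-ins for a hidden layer of a real DNN, and the threshold choice is an assumption.

```python
# Minimal sketch of the general idea (not the HyperNeuron algorithm itself):
# calibrate per-neuron activation statistics on clean inputs, then flag inputs
# whose aggregated activations deviate too strongly.
import numpy as np

rng = np.random.default_rng(1)
clean_acts = rng.normal(0.0, 1.0, size=(1000, 512))          # (inputs, neurons), simulated
mu, sigma = clean_acts.mean(axis=0), clean_acts.std(axis=0) + 1e-8

def suspicion_score(activations):
    # Mean absolute z-score across neurons: UAP-style masks and patches tend to
    # hyper-activate a consistent subset of neurons, raising this score.
    return np.mean(np.abs((activations - mu) / sigma), axis=-1)

threshold = np.quantile(suspicion_score(clean_acts), 0.99)    # ~1% false positive budget
test_clean = rng.normal(0.0, 1.0, size=(5, 512))
test_uap = test_clean + rng.normal(3.0, 0.5, size=(1, 512))   # simulated hyper-activation
print("clean flagged:   ", suspicion_score(test_clean) > threshold)
print("UAP-like flagged:", suspicion_score(test_uap) > threshold)
```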

Hau Z, Co KT, Demetriou S, Lupu EC et al., 2021, Object Removal Attacks on LiDAR-based 3D Object Detectors

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects' RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.

Journal article

Co KT, Rego DM, Lupu EC, 2021, Jacobian Regularization for Mitigating Universal Adversarial Perturbations, Pages: 202-213, ISSN: 0302-9743

Journal article

Matachana AG, Co KT, Muñoz-González L, Martinez D, Lupu EC et al., 2020, Robustness and transferability of universal attacks on compressed models, Publisher: arXiv

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient-masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, observing different phenomena in the two datasets used in our experiments.

Working paper

Castiglione LM, Lupu EC, 2020, Hazard driven threat modelling for cyber physical systems, 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20), Publisher: ACM

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. When inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design time and real time defensive mechanisms.

Conference paper

Karafili E, Valenza F, Chen Y, Lupu EC et al., 2020, Towards a Framework for Automatic Firewalls Configuration via Argumentation Reasoning

Firewalls have been widely used to protect not only small and local networks but also large enterprise networks. The configuration of firewalls is mainly done by network administrators, thus, it suffers from human errors. This paper aims to solve the network administrators' problem by introducing a formal approach that helps to configure centralized and distributed firewalls and automatically generate conflict-free firewall rules. We propose a novel framework, called ArgoFiCo, which is based on argumentation reasoning. Our framework automatically populates the firewalls of a network, given the network topology and the high-level requirements that represent how the network should behave. ArgoFiCo provides two strategies for firewall rules distribution.

Conference paper

Karafili E, Wang L, Lupu E, 2020, An argumentation-based reasoner to assist digital investigation and attribution of cyber-attacks, DFRWS EU, Publisher: Elsevier, Pages: 1-9, ISSN: 2666-2817

We expect an increase in the frequency and severity of cyber-attacks that comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process, by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests to the analyst where to focus further analyses by giving hints of the missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or reach attribution conclusions.

Conference paper

Hau Z, Demetriou S, Muñoz-González L, Lupu EC et al., 2020, Shadow-Catcher: Looking Into Shadows to Detect Ghost Objects in Autonomous Vehicle 3D Sensing

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors to erroneously detect "ghost" objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, it is easier to spoof smaller objects such as pedestrians and cyclists, but harder to defend against and can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong "invalidation" attacks targeting the defense system. Shadow-Catcher can achieve real-time detection, requiring only between 0.003s-0.021s on average to process an object in a 3D point cloud on commodity hardware and achieves a 2.17x speedup compared to prior work.

Journal article
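
The 3D shadow invariant mentioned above can be illustrated with a simplified 2D geometry check. This is an assumption-laden toy, not Shadow-Catcher's actual pipeline: a genuine object occludes the laser, so the angular sector behind it should contain few returns at greater range, whereas a spoofed "ghost" object casts no such shadow.

```python
# Simplified 2D sketch of the shadow intuition (geometry and thresholds are
# assumptions): measure the fraction of returns that still appear behind a
# candidate object. A high occupancy suggests no real shadow was cast, i.e.
# the detection may be a ghost.
import numpy as np

def shadow_occupancy(points_xy, obj_center, obj_radius):
    """Fraction of returns in the angular sector behind the object (sensor at origin)."""
    obj_dist = np.linalg.norm(obj_center)
    obj_angle = np.arctan2(obj_center[1], obj_center[0])
    half_width = np.arctan2(obj_radius, obj_dist)
    angles = np.arctan2(points_xy[:, 1], points_xy[:, 0])
    ranges = np.linalg.norm(points_xy, axis=1)
    # Wrap-aware angular difference to the object's bearing.
    in_sector = np.abs((angles - obj_angle + np.pi) % (2 * np.pi) - np.pi) < half_width
    behind = ranges > obj_dist + obj_radius
    return np.sum(in_sector & behind) / max(np.sum(in_sector), 1)

scene = np.random.uniform(-30, 30, size=(20_000, 2))   # synthetic returns, no occlusion
print("occupancy behind candidate object:",
      round(shadow_occupancy(scene, np.array([10.0, 0.0]), 1.5), 3))
```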

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Other

Co KT, Munoz Gonzalez L, de Maupeou S, Lupu E et al., 2019, Procedural noise adversarial examples for black-box attacks on deep neural networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM, Pages: 275-289

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.

Conference paper
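
The abstract above builds UAPs from procedural noise functions. The sketch below produces a rough Gabor-like noise pattern by sparse placement of a Gabor kernel and applies it to an image under an L-infinity budget. It is a simplified illustration only (the paper uses proper Gabor and Perlin noise and Bayesian optimisation over the noise parameters); the kernel parameters, image and epsilon value are all assumptions.

```python
# Rough sketch of a procedural (Gabor-like) noise perturbation; parameters are
# arbitrary assumptions, and this is not the paper's generator.
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(shape, kernel, n_impulses=200, seed=0):
    rng = np.random.default_rng(seed)
    canvas = np.zeros(shape)
    k = kernel.shape[0]
    for _ in range(n_impulses):   # sparse convolution: drop kernels at random positions
        x, y = rng.integers(0, shape[0] - k), rng.integers(0, shape[1] - k)
        canvas[x:x + k, y:y + k] += rng.choice([-1, 1]) * kernel
    return canvas / (np.abs(canvas).max() + 1e-8)

noise = gabor_noise((224, 224), gabor_kernel(23, sigma=4.0, freq=0.1, theta=np.pi / 4))
eps = 10 / 255                      # L-infinity budget (assumption)
image = np.random.rand(224, 224)    # stand-in for a real input image
adv = np.clip(image + eps * np.sign(noise), 0.0, 1.0)
print("max perturbation:", np.abs(adv - image).max())
```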

Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Publisher: Springer, Pages: 47-79

Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality, improved quality of life for users, e.g., through personalization, optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms’ weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.

Book chapter

Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press

The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.

Conference paper

Hau Z, Lupu EC, 2019, Exploiting correlations to detect false data injections in low-density wireless sensor networks, Cyber-Physical System Security Workshop, Publisher: ACM Press

We propose a novel framework to detect false data injections in a low-density sensor environment with heterogeneous sensor data. The proposed detection algorithm learns how each sensor's data correlates within the sensor network, and false data is identified by exploiting the anomalies in these correlations. When a large number of sensors measuring homogeneous data are deployed, data correlations in space at a fixed snapshot in time could be used as a basis to detect anomalies. Exploiting disruptions in correlations when false data is injected has been used in a high-density sensor setting and proven to be effective. With increasing adoption of sensor deployments in low-density settings, there is a need to develop detection techniques for these applications. However, with constraints on the number of sensors and different data types, we propose the use of temporal correlations across the heterogeneous data to determine the authenticity of the reported data. We also provide an adversarial model that utilizes a graphical method to devise complex attack strategies where an attacker injects coherent false data in multiple sensors to provide a false representation of the physical state of the system with the aim of subverting detection. This allows us to test the detection algorithm and assess its performance in improving the resilience of the sensor network against data integrity attacks.

Conference paper
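
As a minimal illustration of exploiting correlations between heterogeneous sensor streams (a generic sketch, not the paper's framework or adversarial model): fit a linear relation between two streams on historical data, then flag readings whose residual from the prediction is anomalously large. The synthetic streams, the linear model and the threshold are all assumptions.

```python
# Minimal sketch of correlation-based false data detection (generic
# illustration): learn a linear relation between two correlated sensor
# streams, then flag readings with anomalously large residuals.
import numpy as np

rng = np.random.default_rng(2)
temp = 20 + 5 * np.sin(np.linspace(0, 20, 2000))            # e.g. a temperature sensor
power = 3.0 * temp + rng.normal(0, 1.0, size=temp.shape)    # a correlated second sensor

train = slice(0, 1500)
slope, intercept = np.polyfit(temp[train], power[train], 1)
residuals = power[train] - (slope * temp[train] + intercept)
threshold = 4 * residuals.std()                              # 4-sigma alarm threshold (assumption)

# Inject false data into the second stream at test time.
test_power = power[1500:].copy()
test_power[100:110] += 25.0
alarms = np.abs(test_power - (slope * temp[1500:] + intercept)) > threshold
print("flagged indices:", np.where(alarms)[0])
```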

Spanaki K, Gürgüç Z, Mulligan C, Lupu EC et al., 2019, Organizational cloud security and control: a proactive approach, Information Technology and People, Vol: 32, Pages: 516-537, ISSN: 0959-3845

Purpose: The purpose of this paper is to unfold the perceptions around additional security in cloud environments by highlighting the importance of controlling mechanisms as an approach to the ethical use of the systems. The study focuses on the effects of the controlling mechanisms in maintaining an overall secure position for the cloud and the mediating role of the ethical behavior in this relationship. Design/methodology/approach: A case study was conducted, examining the adoption of managed cloud security services as a means of control, as well as a large-scale survey with the views of IT decision makers about the effects of such adoption to the overall cloud security. Findings: The findings indicate that there is indeed a positive relationship between the adoption of controlling mechanisms and the maintenance of overall cloud security, which increases when the users follow an ethical behavior in the use of the cloud. A framework based on the findings is built suggesting a research agenda for the future and a conceptualization of the field. Research limitations/implications: One of the major limitations of the study is the fact that the data collection was based on the perceptions of IT decision makers from a cross-section of industries; however the proposed framework should also be examined in industry-specific context. Although the firm size was indicated as a high influencing factor, it was not considered for this study, as the data collection targeted a range of organizations from various sizes. Originality/value: This study extends the research of IS security behavior based on the notion that individuals (clients and providers of cloud infrastructure) are protecting something separate from themselves, in a cloud-based environment, sharing responsibility and trust with their peers. The organization in this context is focusing on managed security solutions as a proactive measurement to preserve cloud security in cloud environments.

Journal article

Co KT, Munoz Gonzalez L, Lupu E, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Conference paper

Collinge G, Lupu E, Munoz Gonzalez L, 2019, Defending against Poisoning Attacks in Online Learning Settings, European Symposium on Artificial Neural Networks, Publisher: ESANN

Machine learning systems are vulnerable to data poisoning, a coordinated attack where a fraction of the training dataset is manipulated by an attacker to subvert learning. In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios. We also propose two defence mechanisms to mitigate the effect of online poisoning attacks by analysing the impact of the data points in the classifier and by means of an adaptive combination of machine learning classifiers with different learning rates. Our experimental evaluation supports the usefulness of our proposed defences to mitigate the effect of poisoning attacks in online learning settings.

Conference paper
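
One of the defences described above adaptively combines classifiers with different learning rates. The sketch below is a loose illustration under assumed settings, not the paper's exact mechanism: several online SGD classifiers are trained on a partially poisoned stream, and each learner's weight in the ensemble is multiplicatively reduced when it mispredicts, so a learner destabilised by poisoned points loses influence over time.

```python
# Loose illustration (assumed setup, not the paper's exact defence): an
# adaptive combination of online classifiers trained with different learning
# rates. Note that the feedback labels may themselves be poisoned; the paper
# analyses this setting far more carefully.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(3)
etas = (0.5, 0.05, 0.005)
learners = [SGDClassifier(learning_rate="constant", eta0=eta) for eta in etas]
weights = np.ones(len(learners))
classes = np.array([0, 1])

def stream(n):
    """Yield (x, y) pairs; roughly 15% of labels are flipped to simulate poisoning."""
    for _ in range(n):
        x = rng.normal(size=(1, 5))
        y = np.array([int(x[0, 0] + x[0, 1] > 0)])
        if rng.random() < 0.15:
            y = 1 - y
        yield x, y

for i, (x, y) in enumerate(stream(3000)):
    if i > 0:   # learners can only predict after seeing at least one sample
        preds = np.array([learner.predict(x)[0] for learner in learners])
        weights *= np.exp(-(preds != y[0]).astype(float))   # down-weight mistakes
        weights /= weights.sum()
    for learner in learners:
        learner.partial_fit(x, y, classes=classes)

print("final ensemble weights per learning rate:",
      dict(zip(etas, np.round(weights, 3))))
```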

Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, Lupu EC et al., 2019, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, Vol: 16, Pages: 231-244, ISSN: 1941-0018

Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling the static and dynamic network risk assessments. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.

Journal article
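
To ground the idea of inference on a Bayesian attack graph, here is a three-node toy example solved by exhaustive enumeration. The paper uses efficient exact inference techniques rather than brute force, and the graph structure and probabilities below are made up purely for illustration.

```python
# Toy sketch of exact inference on a 3-node Bayesian attack graph by full
# enumeration (probabilities are illustrative assumptions, not from the paper).
import itertools

# Nodes: A = internet-facing host compromised, B = lateral movement to server,
# C = database exfiltration. Each CPT gives P(node = True | parent value).
p_A = 0.3
p_B_given_A = {True: 0.7, False: 0.05}
p_C_given_B = {True: 0.6, False: 0.01}

def joint(a, b, c):
    pa = p_A if a else 1 - p_A
    pb = p_B_given_A[a] if b else 1 - p_B_given_A[a]
    pc = p_C_given_B[b] if c else 1 - p_C_given_B[b]
    return pa * pb * pc

# Posterior probability the server was compromised given observed exfiltration.
num = sum(joint(a, True, True) for a in (True, False))
den = sum(joint(a, b, True) for a, b in itertools.product((True, False), repeat=2))
print("P(B compromised | C observed) =", round(num / den, 4))
```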

Paudice A, Muñoz-González L, Lupu EC, 2019, Label sanitization against label flipping poisoning attacks, Nemesis'18 Workshop in Recent Advances in Adversarial Machine Learning, Publisher: Springer Verlag, Pages: 5-15, ISSN: 0302-9743

Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data in the training dataset to subvert the learning process, compromising the performance of the algorithm producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective to significantly degrade the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.

Conference paper
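
In the spirit of the detection-and-relabelling mechanism described above, the sketch below implements a simple kNN-based sanitisation step: a point whose label disagrees with a strong majority of its nearest neighbours is relabelled before training. The neighbourhood size, agreement threshold and synthetic data are assumptions, and this is not presented as the paper's exact algorithm.

```python
# Minimal kNN-based label sanitisation sketch (parameters are assumptions):
# relabel points whose label conflicts with the overwhelming majority of
# their nearest neighbours.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def sanitize_labels(X, y, k=10, agreement=0.9):
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
    _, idx = nn.kneighbors(X)                # idx[:, 0] is the point itself
    y_clean = y.copy()
    for i in range(len(y)):
        neighbour_labels = y[idx[i, 1:]]
        majority = int(neighbour_labels.mean() >= 0.5)
        frac = np.mean(neighbour_labels == majority)
        if frac >= agreement and y[i] != majority:
            y_clean[i] = majority            # relabel the suspicious point
    return y_clean

X = np.random.randn(500, 2)
y = (X[:, 0] > 0).astype(int)
flip = np.random.choice(len(y), 50, replace=False)
y[flip] = 1 - y[flip]                        # simulated label flipping attack
y_fixed = sanitize_labels(X, y)
print("labels changed by sanitisation:", int(np.sum(y_fixed != y)))
```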

Steiner RV, Lupu E, 2019, Towards more practical software-based attestation, Computer Networks, Vol: 149, Pages: 43-55, ISSN: 1389-1286

Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments and especially so in a scenario like the Internet of Things where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach that in comparison with the current state-of-the-art solution reduces the total attestation time and energy consumption around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).

Journal article

Karafili E, Spanaki K, Lupu E, 2019, Access Control and Quality Attributes of Open Data: Applications and Techniques, Workshop on Quality of Open Data, Publisher: Springer Verlag (Germany), Pages: 603-614, ISSN: 1865-1348

Open Datasets provide one of the most popular ways to acquire insight and information about individuals, organizations and multiple streams of knowledge. Exploring Open Datasets by applying comprehensive and rigorous techniques for data processing can provide the ground for innovation and value for everyone if the data are handled in a legal and controlled way. In our study, we propose an argumentation and abductive reasoning approach for data processing which is based on the data quality background. Explicitly, we draw on the literature of data management and quality for the attributes of the data, and we extend this background through the development of our techniques. Our aim is to provide herein a brief overview of the data quality aspects, as well as indicative applications and examples of our approach. Our overall objective is to bring serious intent and propose a structured way for access control and processing of open data with a focus on the data quality aspects.

Conference paper

Co KT, Muñoz-González L, Lupu EC, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Journal article

Muñoz-González L, Co KT, Lupu EC, 2019, Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median.

Journal article
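
The abstract above describes a robust aggregation rule that discards bad or malicious client updates. The sketch below shows a generic robust aggregation step, not the paper's HMM-based Adaptive Federated Averaging: client updates far from the coordinate-wise median, judged by a MAD-based cutoff, are dropped before averaging. The update shapes, attack magnitude and cutoff are assumptions for illustration.

```python
# Illustrative sketch of robust update aggregation in federated learning (a
# generic scheme, not the paper's Adaptive Federated Averaging): score each
# client update by its distance to the coordinate-wise median and drop clear
# outliers before averaging.
import numpy as np

rng = np.random.default_rng(4)
honest = rng.normal(0.0, 0.1, size=(8, 100))       # 8 honest client updates
malicious = rng.normal(5.0, 0.1, size=(2, 100))    # 2 poisoned/Byzantine updates
updates = np.vstack([honest, malicious])

median = np.median(updates, axis=0)
dists = np.linalg.norm(updates - median, axis=1)
mad = np.median(np.abs(dists - np.median(dists)))
cutoff = np.median(dists) + 3 * mad                 # MAD-based outlier cutoff (assumption)
keep = dists <= cutoff

aggregated = updates[keep].mean(axis=0)
print("clients kept:", np.where(keep)[0])
print("aggregated update mean ~", round(float(aggregated.mean()), 3))
```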

Karafili E, Wang L, Lupu EC, 2019, An Argumentation-Based Approach to Assist in the Investigation and Attribution of Cyber-Attacks.

Working paper

Munoz Gonzalez L, Lupu E, 2019, The Security of Machine Learning Systems, AI in Cybersecurity, Editors: Sikos

Book chapter

Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient Attack Countermeasure Selection Accounting for Recovery and Action Costs, Pages: 3:1-3:1

Conference paper
