Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems

Contact

e.c.lupu

Location

564 Huxley Building, South Kensington Campus


Publications


263 results found

Carnerero Cano J, Munoz Gonzalez L, Spencer P, Lupu EC et al., 2023, Hyperparameter learning under data poisoning: analysis of the influence of regularization via multiobjective bilevel optimization, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 1045-9227

Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance. Optimal attacks can be formulated as bilevel optimization problems and help to assess their robustness in worst-case scenarios. We show that current approaches, which typically assume that hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem. This allows us to formulate optimal attacks, learn hyperparameters and evaluate robustness under worst-case conditions. We apply this attack formulation to several ML classifiers using L₂ and L₁ regularization. Our evaluation on multiple datasets shows that choosing an "a priori" constant value for the regularization hyperparameter can be detrimental to the performance of the algorithms. This confirms the limitations of previous strategies and evidences the benefits of using L₂ and L₁ regularization to dampen the effect of poisoning attacks, when hyperparameters are learned using a small trusted dataset. Additionally, our results show that the use of regularization plays an important robustness and stability role in complex models, such as Deep Neural Networks, where the attacker can have more flexibility to manipulate the decision boundary.

Journal article
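
The interplay between poisoning and hyperparameter learning described above can be illustrated with a minimal sketch: a logistic regression model is trained on partially label-flipped data, once with a fixed L₂ hyperparameter and once with the hyperparameter selected on a small trusted set. The synthetic data, the label-flip attack and the grid search are illustrative stand-ins, not the paper's gradient-based multiobjective bilevel attack.

```python
# Minimal sketch: effect of re-learning the L2 hyperparameter on a small
# trusted set when a fraction of the training labels is poisoned.
# The label-flip attack is a simple stand-in for the paper's bilevel attack.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, train_size=0.5, random_state=0)
X_trust, X_test, y_trust, y_test = train_test_split(X_rest, y_rest, train_size=0.1, random_state=0)

# Poison 20% of the training labels (illustrative attack only).
y_pois = y_tr.copy()
idx = rng.choice(len(y_pois), size=int(0.2 * len(y_pois)), replace=False)
y_pois[idx] = 1 - y_pois[idx]

def test_acc(C):
    # In scikit-learn, C is the inverse of the L2 regularization strength.
    clf = LogisticRegression(C=C, max_iter=2000).fit(X_tr, y_pois)
    return clf.score(X_test, y_test)

# (a) hyperparameter fixed a priori
acc_fixed = test_acc(C=1.0)

# (b) hyperparameter learned on the small trusted set (outer objective)
Cs = np.logspace(-3, 2, 12)
C_best = max(Cs, key=lambda C: LogisticRegression(C=C, max_iter=2000)
             .fit(X_tr, y_pois).score(X_trust, y_trust))
acc_learned = test_acc(C_best)

print(f"fixed C=1.0: {acc_fixed:.3f}   learned C={C_best:.3g}: {acc_learned:.3f}")
```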

Soikkeli J, Casale G, Munoz Gonzalez L, Lupu EC et al., 2023, Redundancy planning for cost efficient resilience to cyber attacks, IEEE Transactions on Dependable and Secure Computing, Vol: 20, Pages: 1154-1168, ISSN: 1545-5971

We investigate the extent to which redundancy (including with diversity) can help mitigate the impact of cyber attacks that aim to reduce system performance. Using analytical techniques, we estimate impacts, in terms of monetary costs, of penalties from breaching Service Level Agreements (SLAs), and find optimal resource allocations to minimize the overall costs arising from attacks. Our approach combines attack impact analysis, based on performance modeling using queueing networks, with an attack model based on attack graphs. We evaluate our approach using a case study of a website, and show how resource redundancy and diversity can improve the resilience of a system by reducing the likelihood of a fully disruptive attack. We find that the cost-effectiveness of redundancy depends on the SLA terms, the probability of attack detection, the time to recover, and the cost of maintenance. In our case study, redundancy with diversity achieved a saving of up to around 50 percent in expected attack costs relative to no redundancy. The overall benefit over time depends on how the saving during attacks compares to the added maintenance costs due to redundancy.

Journal article
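
A back-of-envelope sketch of the cost comparison discussed above, with purely hypothetical figures (the paper derives these quantities from queueing-network performance models and attack graphs):

```python
# Sketch of the redundancy trade-off: expected SLA penalty under attack
# versus added maintenance cost. All figures are hypothetical placeholders.

def expected_cost(p_attack, p_full_outage, outage_penalty, degraded_penalty,
                  maintenance):
    # Expected cost over the planning period: full outage penalty if the
    # attack fully disrupts service, a smaller penalty if redundancy keeps
    # the service degraded but up, plus the cost of maintaining the setup.
    attack_cost = p_attack * (p_full_outage * outage_penalty
                              + (1 - p_full_outage) * degraded_penalty)
    return attack_cost + maintenance

no_redundancy   = expected_cost(p_attack=0.3, p_full_outage=0.9,
                                outage_penalty=100_000, degraded_penalty=20_000,
                                maintenance=0)
with_redundancy = expected_cost(p_attack=0.3, p_full_outage=0.3,
                                outage_penalty=100_000, degraded_penalty=20_000,
                                maintenance=8_000)

print(no_redundancy, with_redundancy)
```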

Castiglione L, Hau Z, Ge P, Co K, Munoz Gonzalez L, Teng F, Lupu E et al., 2022, HA-grid: security aware hazard analysis for smart grids, IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, Publisher: IEEE, Pages: 446-452

Attacks targeting smart grid infrastructures can result in the disruption of power supply as well as damage to costly equipment, with significant impact on safety as well as on end-consumers. It is therefore essential to identify attack paths in the infrastructure that lead to safety violations and to determine critical components that must be protected. In this paper, we introduce a methodology (HA-Grid) that incorporates both safety and security modelling of smart grid infrastructure to analyse the impact of cyber threats on the safety of smart grid infrastructures. HA-Grid is applied on a smart grid test-bed to identify attack paths that lead to safety hazards, and to determine the common nodes in these attack paths as critical components that must be protected.

Conference paper
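
The path analysis described above can be illustrated with a toy attack graph: enumerate paths from an entry point to hazard-inducing nodes and rank components by how many such paths they appear on. The graph, node names and hazards below are invented for illustration and are not the paper's smart-grid test-bed.

```python
# Toy illustration of ranking components by how many attack paths to a
# safety hazard they lie on; shared nodes are candidate critical components.
import networkx as nx
from collections import Counter

g = nx.DiGraph()
g.add_edges_from([
    ("internet", "hmi"), ("internet", "historian"),
    ("hmi", "plc1"), ("historian", "plc1"), ("hmi", "plc2"),
    ("plc1", "breaker_trip"),           # hazard: loss of power supply
    ("plc2", "transformer_overload"),   # hazard: equipment damage
])
hazards = ["breaker_trip", "transformer_overload"]

counts = Counter()
for h in hazards:
    for path in nx.all_simple_paths(g, "internet", h):
        counts.update(n for n in path if n not in ("internet", h))

# Components appearing on the most attack paths come first.
print(counts.most_common())
```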

Valenza F, Karafili E, Steiner RV, Lupu EC et al., 2022, A hybrid threat model for smart systems, IEEE Transactions on Dependable and Secure Computing, Pages: 1-14, ISSN: 1545-5971

Cyber-physical systems and their smart components have a pervasive presence in all our daily activities. Unfortunately, identifying the potential threats and issues in these systems and selecting enough protection is challenging given that such environments combine human, physical and cyber aspects in the system design and implementation. Current threat models and analyses do not take into consideration all three aspects of the analyzed system, nor how they can introduce new vulnerabilities or protection measures to each other. In this work, we introduce a novel threat model for cyber-physical systems that combines the cyber, physical, and human aspects. Our model represents the system's components, their relations and security properties by taking into consideration these three aspects. Together with the threat model we also propose a threat analysis method that allows understanding the security state of the system's components. The threat model and the threat analysis have been implemented in an automatic tool, called TAMELESS, that automatically analyzes threats to the system, verifies its security properties, and generates a graphical representation, useful for security architects to identify the proper prevention/mitigation solutions. We show and prove the use of our threat model and analysis with three case studies from different sectors.

Journal article

Hau Z, Demetriou S, Muñoz-González L, Lupu EC et al., 2022, Shadow-catcher: looking into shadows to detect ghost objects in autonomous vehicle 3D sensing, ESORICS, Publisher: Springer International Publishing, Pages: 691-711

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors to erroneously detect “ghost" objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, it is easier to spoof smaller objects such as pedestrians and cyclists, but harder to defend against and can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher can achieve real-time detection, requiring only between 0.003 s–0.021 s on average to process an object in a 3D point cloud on commodity hardware and achieves a 2.17x speedup compared to prior work.

Conference paper
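
A much simplified, bird's-eye-view illustration of the shadow intuition: a genuine object should leave an almost empty region directly behind it as seen from the LiDAR, while a spoofed ghost usually does not. The sector geometry, parameter names and threshold choice are assumptions for illustration, not Shadow-Catcher's actual algorithm.

```python
# Count LiDAR returns in the region directly behind a detected object
# (sensor assumed at the origin). Many returns there mean the object casts
# no shadow, which makes a spoofed (ghost) detection more likely.
import numpy as np

def returns_in_shadow(scene_xy, obj_xy, shadow_len=10.0):
    """scene_xy: (N, 2) bird's-eye points; obj_xy: (M, 2) points of the object."""
    az = np.arctan2(scene_xy[:, 1], scene_xy[:, 0])
    rng = np.linalg.norm(scene_xy, axis=1)
    obj_az = np.arctan2(obj_xy[:, 1], obj_xy[:, 0])
    obj_far = np.linalg.norm(obj_xy, axis=1).max()

    in_sector = (az >= obj_az.min()) & (az <= obj_az.max())  # ignores azimuth wrap-around
    behind = (rng > obj_far) & (rng <= obj_far + shadow_len)
    return int(np.count_nonzero(in_sector & behind))
```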

Co KT, Martinez-Rego D, Hau Z, Lupu EC et al., 2022, Jacobian ensembles improve robustness trade-offs to adversarial attacks, Artificial Neural Networks and Machine Learning - ICANN 2022, Publisher: Springer, Pages: 680-691, ISSN: 0302-9743

Deep neural networks have become an integral part of our software infrastructure and are being deployed in many widely-used and safety-critical applications. However, their integration into many systems also brings with it the vulnerability to test time attacks in the form of Universal Adversarial Perturbations (UAPs). UAPs are a class of perturbations that, when applied to any input, cause model misclassification. Although there is an ongoing effort to defend models against these adversarial attacks, it is often difficult to reconcile the trade-offs in model accuracy and robustness to adversarial attacks. Jacobian regularization has been shown to improve the robustness of models against UAPs, whilst model ensembles have been widely adopted to improve both predictive performance and model robustness. In this work, we propose a novel approach, Jacobian Ensembles – a combination of Jacobian regularization and model ensembles to significantly increase the robustness against UAPs whilst maintaining or improving model accuracy. Our results show that Jacobian Ensembles achieves previously unseen levels of accuracy and robustness, greatly improving over previous methods that tend to skew towards only either accuracy or robustness.

Conference paper
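
A minimal PyTorch sketch of the two ingredients named above: each ensemble member is trained with a Jacobian penalty (estimated here with a single random projection) and predictions are averaged at inference. The tiny MLPs, the single-projection estimator and the penalty weight are simplifying assumptions.

```python
# Sketch of Jacobian regularization combined with a small model ensemble.
import torch
import torch.nn as nn
import torch.nn.functional as F

def jacobian_penalty(model, x):
    # One-sample estimator of ||J||_F^2: project the logits onto a random
    # unit vector and take the gradient of that scalar w.r.t. the input.
    x = x.clone().requires_grad_(True)
    out = model(x)
    v = F.normalize(torch.randn_like(out), dim=-1)
    grad, = torch.autograd.grad((out * v).sum(), x, create_graph=True)
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()

models = [nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
          for _ in range(3)]

def train_step(model, opt, x, y, lam=0.01):
    opt.zero_grad()
    loss = F.cross_entropy(model(x), y) + lam * jacobian_penalty(model, x)
    loss.backward()
    opt.step()

def ensemble_predict(x):
    # Average the softmax outputs of the individually regularized models.
    with torch.no_grad():
        return torch.stack([F.softmax(m(x), dim=-1) for m in models]).mean(0)
```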

Hau Z, Demetriou S, Lupu EC, 2022, Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception, 43rd IEEE Symposium on Security and Privacy (SP), Publisher: IEEE COMPUTER SOC, Pages: 229-235, ISSN: 2639-7862

Conference paper

Soikkeli J, Perner C, Lupu E, 2021, Analyzing the viability of UAV missions facing cyber attacks, 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Publisher: IEEE

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.

Conference paper

Co KT, Rego DM, Lupu EC, 2021, Jacobian regularization for mitigating universal adversarial perturbations, Lecture Notes in Computer Science, Vol: 12894, Pages: 202-213, ISSN: 0302-9743

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.

Journal article

Co KT, Muñoz-González L, Kanthan L, Glocker B, Lupu EC et al., 2021, Universal adversarial robustness of texture and shape-biased models, IEEE International Conference on Image Processing (ICIP)

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.

Conference paper

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2021, Regularization can help mitigate poisoning attacks... with the right hyperparameters, Publisher: arXiv

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters and evaluate robustness under worst case conditions. We apply this formulation to logistic regression using L₂ regularization, empirically show the limitations of previous strategies and evidence the benefits of using L₂ regularization to dampen the effect of poisoning attacks.

Working paper

Co KT, Muñoz-González L, Kanthan L, Lupu EC et al., 2021, Real-time Detection of Practical Universal Adversarial Perturbations

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many different inputs; this leads to realistic and effective attacks that can be applied at scale. In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs by identifying suspicious neuron hyper-activations. Our results show the effectiveness of HyperNeuron on multiple tasks (image classification, object detection), against a wide variety of universal attacks, and in realistic scenarios, like perceptual ad-blocking and adversarial patches. HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real-time, which shows promise for the robust deployment of machine learning systems.

Journal article
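
A simplified sketch of detection via unusual activations: record a baseline of hidden-layer activation magnitudes on clean data and flag inputs whose activations deviate strongly from it. The per-input mean-activation statistic and z-score threshold are stand-ins for HyperNeuron's actual detection statistic.

```python
# Flag inputs whose hidden-layer activations are unusually large compared
# to a clean-data baseline (simplified illustration, not HyperNeuron itself).
import torch
import torch.nn as nn

class ActivationMonitor:
    def __init__(self, model, layer):
        self.model = model
        self.acts = None
        layer.register_forward_hook(self._hook)

    def _hook(self, module, inputs, output):
        # One scalar per input: mean absolute activation of the hooked layer.
        self.acts = output.detach().flatten(1).abs().mean(dim=1)

    def calibrate(self, clean_loader):
        vals = []
        with torch.no_grad():
            for x, _ in clean_loader:
                self.model(x)
                vals.append(self.acts)
        vals = torch.cat(vals)
        self.mu, self.sigma = vals.mean(), vals.std()

    def suspicious(self, x, z_thresh=4.0):
        with torch.no_grad():
            self.model(x)
        return (self.acts - self.mu) / self.sigma > z_thresh
```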

Chizari H, Lupu EC, 2021, Extracting randomness from the trend of IPI for cryptographic operators in implantable medical devices, IEEE Transactions on Dependable and Secure Computing, Vol: 18, Pages: 875-888, ISSN: 1545-5971

Achieving secure communication between an Implantable Medical Device (IMD) inside the body and a gateway outside the body has shown its criticality with recent reports of hackings such as in St. Jude Medical's Implantable Cardiac Devices, Johnson and Johnson insulin pumps and vulnerabilities in brain Neuro-implants. The use of asymmetric cryptography in particular is not a practical solution for IMDs due to the scarce computational and power resources; symmetric key cryptography is preferred. One of the factors in the security of a symmetric cryptographic system is to use a strong key for encryption. A solution to develop such a strong key without using extensive resources in an IMD is to extract it from the body's physiological signals. In order to have a strong enough key, the physiological signal must be a strong source of randomness, and the InterPulse Interval (IPI) has been suggested to be such a source. A strong randomness source should have five conditions: Universality (available on all people), Liveness (available at any time), Robustness (strong random number), Permanence (independent from its history) and Uniqueness (independent from other sources). Nevertheless, for currently proposed random extraction methods from IPI these conditions (mainly the last three) were not examined. In this study, firstly, we proposed a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence and random source dependency analysis for Uniqueness. Then, using a huge dataset of IPI values (almost 900,000,000 IPIs), we showed that IPI does not satisfy the Robustness and Permanence conditions as a randomness source. Thus, extraction of a strong uniform random number from IPI values is, mathematically, impossible. Thirdly, rather than using the value of IPI, we proposed the trend of IPI as a source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluat

Journal article
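
One of the measures mentioned above, the Santha-Vazirani delta, can be estimated empirically as the largest deviation of the next bit's conditional probability from 1/2. The sketch below conditions on a fixed-length history rather than the full past, which is a simplifying assumption, and the synthetic bits are placeholders for bits extracted from IPIs.

```python
# Rough empirical estimate of the Santha-Vazirani delta for a bit sequence:
# how far P(next bit = 1 | recent history) strays from 1/2.
import numpy as np
from collections import defaultdict

def sv_delta(bits, history=4, min_count=50):
    counts = defaultdict(lambda: [0, 0])   # history pattern -> [#zeros, #ones]
    for i in range(history, len(bits)):
        h = tuple(bits[i - history:i])
        counts[h][bits[i]] += 1
    deltas = [abs(ones / (zeros + ones) - 0.5)
              for zeros, ones in counts.values() if zeros + ones >= min_count]
    return max(deltas) if deltas else None

# Example with synthetic bits (the study uses bits derived from ~9e8 IPIs).
bits = np.random.randint(0, 2, 100_000).tolist()
print(sv_delta(bits))
```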

Hau Z, Co KT, Demetriou S, Lupu E et al., 2021, Object removal attacks on LiDAR-based 3D object detectors, NDSS 2021 Workshop, Publisher: Internet Society

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects' RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.

Conference paper
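
A toy version of the point-shifting step described above: returns that fall inside a target object's region of interest are pushed further out along their rays from the sensor, emulating a spoofed later return for those directions. The axis-aligned RoI and parameter names are assumptions for illustration.

```python
# Shift LiDAR returns inside a target RoI further along their rays,
# mimicking the single-return-per-direction behaviour exploited by ORAs.
import numpy as np

def shift_roi_points(points, roi_min, roi_max, extra_range=5.0):
    """points: (N, 3) LiDAR returns; roi_min/roi_max: axis-aligned RoI bounds."""
    in_roi = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    shifted = points.copy()
    rays = points[in_roi]
    dist = np.linalg.norm(rays, axis=1, keepdims=True)
    # Move each affected point further out along its ray direction.
    shifted[in_roi] = rays / dist * (dist + extra_range)
    return shifted
```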

Matachana A, Co KT, Munoz Gonzalez L, Martinez D, Lupu E et al., 2021, Robustness and transferability of universal attacks on compressed models, AAAI 2021 Workshop, Publisher: AAAI

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.

Conference paper

Matachana AG, Co KT, Muñoz-González L, Martinez D, Lupu EC et al., 2020, Robustness and transferability of universal attacks on compressed models, Publisher: arXiv

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.

Working paper

Castiglione LM, Lupu EC, 2020, Hazard driven threat modelling for cyber physical systems, 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20), Publisher: ACM, Pages: 13-24

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. When inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design time and real time defensive mechanisms.

Conference paper

Karafili E, Wang L, Lupu E, 2020, An argumentation-based reasoner to assist digital investigation and attribution of cyber-attacks, DFRWS EU, Publisher: Elsevier, Pages: 1-9, ISSN: 2666-2817

We expect an increase in the frequency and severity of cyber-attacks that comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process, by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests to the analyst where to focus further analyses by giving hints of the missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or reach attribution conclusions.

Conference paper

Karafili E, Valenza F, Chen Y, Lupu EC et al., 2020, Towards a Framework for Automatic Firewalls Configuration via Argumentation Reasoning, IEEE/IFIP Network Operations and Management Symposium (NOMS), Publisher: IEEE, ISSN: 1542-1201

Conference paper

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Other

Co KT, Munoz Gonzalez L, de Maupeou S, Lupu E et al., 2019, Procedural noise adversarial examples for black-box attacks on deep neural networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM, Pages: 275-289

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.

Conference paper
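
A rough sketch of building a procedural-noise perturbation: sum randomly placed Gabor kernels into an image-sized pattern and scale it to an L∞ budget. The kernel parameters and budget below are arbitrary assumptions; the paper additionally tunes such parameters with Bayesian optimization to maximize evasion.

```python
# Build a Gabor-noise pattern and scale it to a fixed L-infinity budget,
# yielding an input-agnostic perturbation that can be added to any image.
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(h, w, n_kernels=200, ksize=23, sigma=4.0, freq=0.1, seed=0):
    rng = np.random.default_rng(seed)
    canvas = np.zeros((h + ksize, w + ksize))
    for _ in range(n_kernels):
        y, x = rng.integers(0, h), rng.integers(0, w)
        canvas[y:y + ksize, x:x + ksize] += gabor_kernel(ksize, sigma, freq,
                                                         rng.uniform(0, np.pi))
    noise = canvas[ksize // 2: ksize // 2 + h, ksize // 2: ksize // 2 + w]
    return noise / (np.abs(noise).max() + 1e-12)

eps = 8 / 255                      # L-infinity budget (assumed)
uap = eps * gabor_noise(224, 224)  # add to an input image, then clip to [0, 1]
```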

Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Publisher: Springer, Pages: 47-79

Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality, improved quality of life for users, e.g., through personalization, optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms’ weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.

Book chapter

Muñoz-González L, Co KT, Lupu EC, 2019, Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median.

Journal article
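
A simplified aggregation rule in the spirit of the one described above: score each client update by its distance to the coordinate-wise median and discard the most distant updates before averaging. This distance filter is a stand-in for the paper's Hidden Markov Model estimate of update quality.

```python
# Robust aggregation sketch: drop the client updates farthest from the
# coordinate-wise median, then average the rest.
import numpy as np

def robust_aggregate(updates, discard_frac=0.2):
    """updates: list of 1-D parameter-update vectors, one per client."""
    U = np.stack(updates)                      # shape (clients, params)
    median = np.median(U, axis=0)
    dist = np.linalg.norm(U - median, axis=1)  # distance of each client update
    keep = dist.argsort()[: max(1, int(len(updates) * (1 - discard_frac)))]
    return U[keep].mean(axis=0), keep          # aggregated update, kept clients
```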

Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press

The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.

Conference paper
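
The selection criterion described above reduces, in its simplest form, to picking the action with the lowest expected cost over a horizon. The candidate actions and cost figures below are hypothetical placeholders:

```python
# Pick the countermeasure with the lowest expected total cost over a horizon.
def expected_total_cost(action_cost, loss_rate_during_recovery, recovery_time,
                        residual_loss_rate, horizon):
    # cost of acting + losses while recovering + residual losses afterwards
    return (action_cost
            + loss_rate_during_recovery * recovery_time
            + residual_loss_rate * max(0.0, horizon - recovery_time))

actions = {
    "contain_and_isolate":    expected_total_cost(500, 300, 4, 0, 24),
    "recover_services_first": expected_total_cost(200, 150, 8, 20, 24),
    "do_nothing":             expected_total_cost(0, 0, 0, 400, 24),
}
print(min(actions, key=actions.get), actions)
```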

Hau Z, Lupu EC, 2019, Exploiting correlations to detect false data injections in low-density wireless sensor networks, Cyber-Physical System Security Workshop, Publisher: ACM Press

We propose a novel framework to detect false data injections in a low-density sensor environment with heterogeneous sensor data. The proposed detection algorithm learns how each sensor's data correlates within the sensor network, and false data is identified by exploiting the anomalies in these correlations. When a large number of sensors measuring homogeneous data are deployed, data correlations in space at a fixed snapshot in time could be used as a basis to detect anomalies. Exploiting disruptions in correlations when false data is injected has been used in a high-density sensor setting and proven to be effective. With the increasing adoption of sensor deployments in low-density settings, there is a need to develop detection techniques for these applications. However, with constraints on the number of sensors and different data types, we propose the use of temporal correlations across the heterogeneous data to determine the authenticity of the reported data. We also provide an adversarial model that utilizes a graphical method to devise complex attack strategies where an attacker injects coherent false data in multiple sensors to provide a false representation of the physical state of the system with the aim of subverting detection. This allows us to test the detection algorithm and assess its performance in improving the resilience of the sensor network against data integrity attacks.

Conference paper
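
A minimal sketch of the temporal-correlation check: predict one sensor's reading from lagged readings of the other sensors and flag reports with abnormally large residuals. The linear lag model and the z-score threshold are simplifying assumptions.

```python
# Fit a linear model predicting a target sensor from lagged readings of the
# other sensors; flag reports whose prediction residuals are abnormally large.
import numpy as np

def fit_lagged_model(target, others, lag=3):
    """target: (T,) series; others: (T, k) series from the other sensors."""
    X = np.hstack([others[lag - d - 1: len(others) - d - 1] for d in range(lag)])
    y = target[lag:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    return coef, resid.std()

def flag_injections(target, others, coef, sigma, lag=3, z=4.0):
    X = np.hstack([others[lag - d - 1: len(others) - d - 1] for d in range(lag)])
    resid = target[lag:] - X @ coef
    return np.where(np.abs(resid) > z * sigma)[0] + lag   # indices of suspect reports
```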

Spanaki K, Gürgüç Z, Mulligan C, Lupu EC et al., 2019, Organizational cloud security and control: a proactive approach, Information Technology and People, Vol: 32, Pages: 516-537, ISSN: 0959-3845

Purpose: The purpose of this paper is to unfold the perceptions around additional security in cloud environments by highlighting the importance of controlling mechanisms as an approach to the ethical use of the systems. The study focuses on the effects of the controlling mechanisms in maintaining an overall secure position for the cloud and the mediating role of the ethical behavior in this relationship.

Design/methodology/approach: A case study was conducted, examining the adoption of managed cloud security services as a means of control, as well as a large-scale survey with the views of IT decision makers about the effects of such adoption to the overall cloud security.

Findings: The findings indicate that there is indeed a positive relationship between the adoption of controlling mechanisms and the maintenance of overall cloud security, which increases when the users follow an ethical behavior in the use of the cloud. A framework based on the findings is built suggesting a research agenda for the future and a conceptualization of the field.

Research limitations/implications: One of the major limitations of the study is the fact that the data collection was based on the perceptions of IT decision makers from a cross-section of industries; however the proposed framework should also be examined in industry-specific context. Although the firm size was indicated as a high influencing factor, it was not considered for this study, as the data collection targeted a range of organizations from various sizes.

Originality/value: This study extends the research of IS security behavior based on the notion that individuals (clients and providers of cloud infrastructure) are protecting something separate from themselves, in a cloud-based environment, sharing responsibility and trust with their peers. The organization in this context is focusing on managed security solutions as a proactive measurement to preserve cloud security in cloud environments.

Journal article

Co KT, Munoz Gonzalez L, Lupu E, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
