Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems

Contact

 

e.c.lupu

Location

 

564 Huxley Building, South Kensington Campus

Publications

272 results found

Valenza F, Karafili E, Steiner RV, Lupu EC, et al., 2023, A hybrid threat model for smart systems, IEEE Transactions on Dependable and Secure Computing, Vol: 20, Pages: 4403-4417, ISSN: 1545-5971

Cyber-physical systems and their smart components have a pervasive presence in all our daily activities. Unfortunately, identifying the potential threats and issues in these systems and selecting adequate protection is challenging, given that such environments combine human, physical and cyber aspects in their design and implementation. Current threat models and analyses do not take all three aspects of the analyzed system into consideration, nor how they can introduce new vulnerabilities or protection measures to each other. In this work, we introduce a novel threat model for cyber-physical systems that combines the cyber, physical, and human aspects. Our model represents the relations among the system's components and their security properties by taking these three aspects into consideration. Together with the threat model we also propose a threat analysis method that allows understanding the security state of the system's components. The threat model and the threat analysis have been implemented in an automatic tool, called TAMELESS, that automatically analyzes threats to the system, verifies its security properties, and generates a graphical representation, useful for security architects to identify the proper prevention/mitigation solutions. We show the use of our threat model and analysis with three case studies from different sectors.

Journal article
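
The component-graph view that TAMELESS operates on can be pictured with a short sketch. The following is a minimal illustration, not the TAMELESS tool itself: the component names, the edge relations and the propagation rule are all invented for the example.

```python
# Minimal sketch (not the TAMELESS implementation): propagate compromise
# over a component graph whose edges mix cyber, physical and human
# relations. Component names and the propagation rule are illustrative.
from collections import deque

# (source, target, aspect): an attacker controlling `source` can reach
# `target` through a cyber, physical or human relation.
edges = [
    ("phishing_email", "operator", "human"),
    ("operator", "engineering_ws", "cyber"),
    ("engineering_ws", "plc", "cyber"),
    ("door_lock", "plc", "physical"),
]

def reachable_compromises(initial):
    """Breadth-first propagation of compromise from initially threatened nodes."""
    compromised, frontier = set(initial), deque(initial)
    while frontier:
        node = frontier.popleft()
        for src, dst, _aspect in edges:
            if src == node and dst not in compromised:
                compromised.add(dst)
                frontier.append(dst)
    return compromised

print(sorted(reachable_compromises({"phishing_email"})))
# ['engineering_ws', 'operator', 'phishing_email', 'plc']
```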

Carnerero Cano J, Munoz Gonzalez L, Spencer P, Lupu EC, et al., 2023, Hyperparameter learning under data poisoning: analysis of the influence of regularization via multiobjective bilevel optimization, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 1045-9227

Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance. Optimal attacks can be formulated as bilevel optimization problems and help to assess robustness in worst-case scenarios. We show that current approaches, which typically assume that hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem. This allows us to formulate optimal attacks, learn hyperparameters and evaluate robustness under worst-case conditions. We apply this attack formulation to several ML classifiers using L₂ and L₁ regularization. Our evaluation on multiple datasets shows that choosing an "a priori" constant value for the regularization hyperparameter can be detrimental to the performance of the algorithms. This confirms the limitations of previous strategies and demonstrates the benefits of using L₂ and L₁ regularization to dampen the effect of poisoning attacks when hyperparameters are learned using a small trusted dataset. Additionally, our results show that regularization plays an important role in the robustness and stability of complex models, such as Deep Neural Networks, where the attacker can have more flexibility to manipulate the decision boundary.

Journal article
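
The multiobjective bilevel structure described in the abstract can be summarised schematically. The notation below is an illustrative reconstruction, not copied from the paper: D_p denotes the poisoning points, w the model parameters fitted on the poisoned training set, and λ the regularization hyperparameter fitted on a small trusted set.

```latex
% Sketch of the structure described above (notation illustrative): the
% attacker picks poisoning points D_p to degrade validation performance,
% while the learner jointly fits w on the poisoned data and lambda on a
% small trusted set (the two lower-level objectives).
\begin{align*}
  \max_{D_p}\ & \mathcal{L}_{\mathrm{val}}\bigl(w^{*}(D_p)\bigr) \\
  \text{s.t.}\ & \bigl(w^{*},\lambda^{*}\bigr) \in
     \arg\min_{w,\,\lambda}\
     \Bigl( \underbrace{\mathcal{L}_{\mathrm{tr}}(w;\, \mathcal{D}\cup D_p)
        + \lambda\,\lVert w\rVert_{2}^{2}}_{\text{fits } w}
     \,,\ \underbrace{\mathcal{L}_{\mathrm{trust}}(w)}_{\text{fits } \lambda} \Bigr)
\end{align*}
```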

Castiglione L, Lupu EC, 2023, Which attacks lead to hazards? Combining safety and security analysis for cyber-physical systems, IEEE Transactions on Dependable and Secure Computing, ISSN: 1545-5971

Cyber-Physical Systems (CPS) are exposed to a plethora of attacks and their attack surface is only increasing. However, whilst many attack paths are possible, only some can threaten the system's safety and potentially lead to loss of life. Identifying them is of the essence. We propose a methodology and develop a tool-chain to systematically analyse and enumerate the attacks leading to safety violations. This is achieved by lazily combining threat modelling and safety analysis with formal verification and attack graph analysis. We also identify the minimum sets of privileges that must be protected to preserve safety. We demonstrate the effectiveness of our methodology in discovering threat scenarios by applying it to a Communication Based Train Control system. Our design choices emphasise compatibility with existing safety and security frameworks, whilst remaining agnostic to specific tools or attack graph representations.

Journal article
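
The "minimum sets of privileges that must be protected" step has a natural reading as a minimal hitting-set problem over the safety-violating attack paths. Below is a brute-force sketch under that reading; the privilege names and paths are invented, and the paper's actual computation may differ.

```python
# Brute-force sketch: smallest set of privileges intersecting every
# safety-violating attack path; protecting them cuts all such paths.
# Privilege names and paths are invented for the example.
from itertools import combinations

attack_paths = [
    {"eng_ws_admin", "plc_write"},
    {"vpn_access", "plc_write"},
    {"vpn_access", "hmi_session", "safety_override"},
]

def minimum_protection_set(paths):
    privileges = sorted(set().union(*paths))
    for size in range(1, len(privileges) + 1):
        for candidate in combinations(privileges, size):
            if all(path & set(candidate) for path in paths):
                return set(candidate)
    return set(privileges)

print(sorted(minimum_protection_set(attack_paths)))
# ['eng_ws_admin', 'vpn_access']: protecting these blocks every path
```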


Castiglione L, Lupu E, Stassen P, Perner CL, Pereira DP, De Carvalho Bertoli G, et al., 2023, Don’t panic! Analysing the impact of attacks on the safety of flight management systems, 42nd Digital Avionics Systems Conference (DASC)

Increased connectivity in modern aircraft also significantly increases the attack surface available to adversaries and the number of possible attack paths. It is therefore of the essence to characterise the attacks that can impact safety. We present Cassandra, a novel methodology combining System-Theoretic Process Analysis for Security (STPA-Sec) with formal verification to automatically identify safety-critical threat scenarios. Unlike previous methodologies for safety and security analysis, Cassandra leverages the integration with the aircraft architecture, together with the set of threats and the privileges required to execute them, to also identify safety-critical attack paths. We employ Bayesian inference to compute the probability of success for the safety-critical attacks found. We describe how Cassandra can be used in the early design phase of a system to reason about attack paths leading to safety-critical threat scenarios, and discuss how it can be further used to evaluate mitigation and assurance cases by reducing threat vectors and increasing safety. In particular, we apply Cassandra to analyse the safe operation of a Flight Management System (FMS) when the adversary tries to access safety-critical information by compromising the device used as the Electronic Flight Bag (EFB). We evaluate the probability of successful attacks in three different scenarios: EFB available on a pilot-owned device, EFB available on an airline-controlled device with limited connectivity, and EFB available on the aircraft only. While the outcome of Cassandra may be intuitive in this case, the example allows us to show how Cassandra improves the automation and integration of safety and security analysis for modern avionic architectures, where complexity hinders intuition and manual analysis is laborious and error-prone.

Conference paper
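
As a toy illustration of the probabilistic step, an attack path's success can be scored by chaining per-step success probabilities, a simplification of the Bayesian inference Cassandra performs. The scenario names mirror the abstract; all probabilities below are invented.

```python
# Toy scoring of an attack path as the product of per-step success
# probabilities, a chain-rule simplification of the Bayesian inference
# described above. Scenario names mirror the abstract; numbers invented.
def path_success_probability(step_probs):
    p = 1.0
    for prob in step_probs:
        p *= prob
    return p

scenarios = {
    "EFB on pilot-owned device":        [0.6, 0.5, 0.4],   # easy initial access
    "EFB on airline-controlled device": [0.3, 0.5, 0.4],   # hardened device
    "EFB available on aircraft only":   [0.05, 0.5, 0.4],  # physical access needed
}

for name, steps in scenarios.items():
    print(f"{name}: {path_success_probability(steps):.3f}")
```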

Soikkeli J, Casale G, Munoz Gonzalez L, Lupu EC, et al., 2023, Redundancy planning for cost efficient resilience to cyber attacks, IEEE Transactions on Dependable and Secure Computing, Vol: 20, Pages: 1154-1168, ISSN: 1545-5971

We investigate the extent to which redundancy (including with diversity) can help mitigate the impact of cyber attacks that aim to reduce system performance. Using analytical techniques, we estimate impacts, in terms of monetary costs, of penalties from breaching Service Level Agreements (SLAs), and find optimal resource allocations to minimize the overall costs arising from attacks. Our approach combines attack impact analysis, based on performance modeling using queueing networks, with an attack model based on attack graphs. We evaluate our approach using a case study of a website, and show how resource redundancy and diversity can improve the resilience of a system by reducing the likelihood of a fully disruptive attack. We find that the cost-effectiveness of redundancy depends on the SLA terms, the probability of attack detection, the time to recover, and the cost of maintenance. In our case study, redundancy with diversity achieved a saving of up to around 50 percent in expected attack costs relative to no redundancy. The overall benefit over time depends on how the saving during attacks compares to the added maintenance costs due to redundancy.

Journal article
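
The cost trade-off at the heart of the paper can be illustrated with a back-of-the-envelope calculation: expected SLA penalties from successful attacks versus the added maintenance cost of a diverse replica. This is only a sketch with invented numbers, not the queueing-network analysis used in the paper.

```python
# Expected annual cost = expected SLA penalty from outages + maintenance.
# Redundancy with diversity lowers the chance an attack takes the whole
# service down, at the price of extra maintenance. Numbers invented.
def expected_annual_cost(attack_rate, p_full_outage, outage_hours,
                         sla_penalty_per_hour, maintenance_cost):
    expected_penalty = attack_rate * p_full_outage * outage_hours * sla_penalty_per_hour
    return expected_penalty + maintenance_cost

no_redundancy = expected_annual_cost(
    attack_rate=4, p_full_outage=0.9, outage_hours=6,
    sla_penalty_per_hour=1000, maintenance_cost=0)
with_diversity = expected_annual_cost(
    attack_rate=4, p_full_outage=0.2,   # replica rarely shares the vulnerability
    outage_hours=6, sla_penalty_per_hour=1000,
    maintenance_cost=8000)

print(no_redundancy, with_diversity)  # 21600.0 vs 12800.0 under these numbers
```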

Maiti RR, Adepu S, Lupu E, 2023, ICCPS: Impact discovery using causal inference for cyber attacks in CPSs, CoRR, Vol: abs/2307.14161

Journal article

Castiglione L, Hau Z, Ge P, Co K, Munoz Gonzalez L, Teng F, Lupu E, et al., 2022, HA-grid: security aware hazard analysis for smart grids, IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, Publisher: IEEE, Pages: 446-452

Attacks targeting smart grid infrastructures can result in the disruption of power supply as well as damage to costly equipment, with significant impact on safety as well as on end-consumers. It is therefore of the essence to identify attack paths in the infrastructure that lead to safety violations and to determine critical components that must be protected. In this paper, we introduce a methodology (HA-Grid) that incorporates both safety and security modelling of smart grid infrastructure to analyse the impact of cyber threats on the safety of smart grid infrastructures. HA-Grid is applied on a smart grid test-bed to identify attack paths that lead to safety hazards, and to determine the common nodes in these attack paths as critical components that must be protected.

Conference paper
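
The final step described above, finding common nodes across hazard-inducing attack paths, reduces to counting node occurrences over the enumerated paths. A minimal sketch with invented node names:

```python
# Count how many hazard-inducing attack paths each node appears on;
# nodes shared by all paths are the critical components. Data invented.
from collections import Counter

hazard_paths = [
    ["internet_gw", "scada_server", "rtu_3", "breaker_ctrl"],
    ["vendor_vpn", "scada_server", "rtu_7", "breaker_ctrl"],
    ["internet_gw", "historian", "scada_server", "breaker_ctrl"],
]

counts = Counter(node for path in hazard_paths for node in path)
critical = [node for node, c in counts.items() if c == len(hazard_paths)]
print("protect first:", critical)  # ['scada_server', 'breaker_ctrl']
```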

Hau Z, Demetriou S, Muñoz-González L, Lupu EC, et al., 2022, Shadow-catcher: looking into shadows to detect ghost objects in autonomous vehicle 3D sensing, ESORICS, Publisher: Springer International Publishing, Pages: 691-711

LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof yet harder to defend, and attacks on them can have worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while it remains robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher achieves real-time detection, requiring only 0.003 s–0.021 s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.

Conference paper
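
The 3D-shadow invariant can be sketched geometrically: a genuine object occludes the LiDAR, so the region directly behind it should contain few returns, whereas a spoofed ghost leaves that region populated. The 2D toy check below assumes the sensor at the origin; thresholds and data are invented and this is not the Shadow-Catcher implementation.

```python
# 2D toy check of the shadow invariant: count LiDAR returns in the region
# directly behind a detected object (sensor at the origin). Few returns
# means a plausible shadow; a populated region suggests a spoofed ghost.
import numpy as np

def casts_shadow(points_xy, obj_center, obj_radius, depth=5.0, max_hits=2):
    d = np.linalg.norm(obj_center)
    direction = obj_center / d
    proj = points_xy @ direction                     # distance along object ray
    lateral = np.linalg.norm(points_xy - np.outer(proj, direction), axis=1)
    behind = (proj > d) & (proj < d + depth) & (lateral < obj_radius)
    return behind.sum() <= max_hits                  # sparse region => shadow

rng = np.random.default_rng(0)
scene = rng.uniform(-20, 20, size=(2000, 2))         # uniform clutter, no occluder
ghost = np.array([10.0, 0.0])                        # spoofed object casts no shadow
print(casts_shadow(scene, ghost, obj_radius=1.0))    # expected: False (populated)
```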

Co KT, Martinez-Rego D, Hau Z, Lupu EC, et al., 2022, Jacobian ensembles improve robustness trade-offs to adversarial attacks, Artificial Neural Networks and Machine Learning - ICANN 2022, Publisher: Springer, Pages: 680-691, ISSN: 0302-9743

Deep neural networks have become an integral part of our software infrastructure and are being deployed in many widely-used and safety-critical applications. However, their integration into many systems also brings with it the vulnerability to test-time attacks in the form of Universal Adversarial Perturbations (UAPs). UAPs are a class of perturbations that, when applied to any input, cause model misclassification. Although there is an ongoing effort to defend models against these adversarial attacks, it is often difficult to reconcile the trade-offs between model accuracy and robustness to adversarial attacks. Jacobian regularization has been shown to improve the robustness of models against UAPs, whilst model ensembles have been widely adopted to improve both predictive performance and model robustness. In this work, we propose a novel approach, Jacobian Ensembles, a combination of Jacobian regularization and model ensembles that significantly increases robustness against UAPs whilst maintaining or improving model accuracy. Our results show that Jacobian Ensembles achieve previously unseen levels of accuracy and robustness, greatly improving over previous methods that tend to skew towards either accuracy or robustness alone.

Conference paper
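
The two ingredients combined by Jacobian Ensembles can be sketched in a few lines of PyTorch: a Jacobian regularisation term, here estimated with a single random projection, added to each ensemble member's cross-entropy loss. This is an assumed reconstruction for illustration, not the authors' code.

```python
# Sketch (assuming PyTorch): Jacobian regularisation via a random
# projection, combined with an ensemble of models. Not the authors' code.
import torch
import torch.nn.functional as F

def jacobian_penalty(model, x):
    """Estimate ||df/dx||_F^2 with one random projection (Hutchinson-style)."""
    x = x.clone().requires_grad_(True)
    out = model(x)
    r = torch.randn_like(out)                       # random output direction
    (grad,) = torch.autograd.grad((out * r).sum(), x, create_graph=True)
    return grad.pow(2).sum(dim=tuple(range(1, grad.dim()))).mean()

def ensemble_loss(models, x, y, lam=0.01):
    """Cross-entropy plus Jacobian penalty for each member; members are
    trained jointly here and their predictions averaged at inference."""
    losses = [F.cross_entropy(m(x), y) + lam * jacobian_penalty(m, x)
              for m in models]
    return torch.stack(losses).mean()
```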

Hau Z, Demetriou S, Lupu EC, 2022, Using 3D shadows to detect object hiding attacks on autonomous vehicle perception, 43rd IEEE Symposium on Security and Privacy (SP), Publisher: IEEE, Pages: 229-235, ISSN: 2639-7862

Autonomous Vehicles (AVs) rely mostly on LiDAR sensors, which enable spatial perception of their surroundings and help make driving decisions. Recent works demonstrated attacks that aim to hide objects from AV perception, which can result in severe consequences. 3D shadows are regions void of measurements in 3D point clouds, which arise from occlusions of objects in a scene. 3D shadows were proposed as a physical invariant valuable for detecting spoofed or fake objects. In this work, we leverage 3D shadows to locate obstacles that are hidden from object detectors. We achieve this by searching for void regions and locating the obstacles that cause these shadows. Our proposed methodology can be used to detect objects that have been hidden by an adversary, as these objects, while hidden from 3D object detectors, still induce shadow artifacts in 3D point clouds, which we use for obstacle detection. We show that using 3D shadows for obstacle detection achieves high accuracy in matching shadows to their objects and provides precise prediction of an obstacle's distance from the ego-vehicle.

Conference paper


Soikkeli J, Perner C, Lupu E, 2021, Analyzing the viability of UAV missions facing cyber attacks, 2021 IEEE European Symposium on Security and Privacy Workshops (EuroS&PW), Publisher: IEEE

With advanced video and sensing capabilities, unoccupied aerial vehicles (UAVs) are increasingly being used for numerous applications that involve the collaboration and autonomous operation of teams of UAVs. Yet such vehicles can be affected by cyber attacks, impacting the viability of their missions. We propose a method to conduct mission viability analysis under cyber attacks for missions that employ a team of several UAVs that share a communication network. We apply our method to a case study of a survey mission in a wildfire firefighting scenario. Within this context, we show how our method can help quantify the expected mission performance impact from an attack and determine if the mission can remain viable under various attack situations. Our method can be used both in the planning of the mission and for decision making during mission operation. Our approach to modeling attack progression and impact analysis with Petri nets is also more broadly applicable to other settings involving multiple resources that can be used interchangeably towards the same objective.

Conference paper
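
A crude Monte-Carlo stand-in conveys the kind of question the Petri-net analysis answers: with what probability do enough UAVs remain uncompromised for the mission to stay viable? All parameters below are invented.

```python
# Monte-Carlo stand-in for the Petri-net analysis: the mission stays
# viable while at least `min_uavs` of the team remain uncompromised
# over the mission duration. All parameters are invented.
import random

def mission_viability(n_uavs=5, min_uavs=3, steps=100,
                      p_compromise_per_step=0.002, trials=10_000):
    viable = 0
    for _ in range(trials):
        healthy = n_uavs
        for _ in range(steps):
            healthy -= sum(random.random() < p_compromise_per_step
                           for _ in range(healthy))
        viable += healthy >= min_uavs
    return viable / trials

print(f"P(mission viable) ~ {mission_viability():.3f}")
```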

Co KT, Rego DM, Lupu EC, 2021, Jacobian regularization for mitigating universal adversarial perturbations, Lecture Notes in Computer Science, Vol: 12894, Pages: 202-213, ISSN: 0302-9743

Universal Adversarial Perturbations (UAPs) are input perturbations that can fool a neural network on large sets of data. They are a class of attacks that represents a significant threat as they facilitate realistic, practical, and low-cost attacks on neural networks. In this work, we derive upper bounds for the effectiveness of UAPs based on norms of data-dependent Jacobians. We empirically verify that Jacobian regularization greatly increases model robustness to UAPs by up to four times whilst maintaining clean performance. Our theoretical analysis also allows us to formulate a metric for the strength of shared adversarial perturbations between pairs of inputs. We apply this metric to benchmark datasets and show that it is highly correlated with the actual observed robustness. This suggests that realistic and practical universal attacks can be reliably mitigated without sacrificing clean accuracy, which shows promise for the robustness of machine learning systems.

Journal article

Co KT, Muñoz-González L, Kanthan L, Glocker B, Lupu EC, et al., 2021, Universal adversarial robustness of texture and shape-biased models, IEEE International Conference on Image Processing (ICIP)

Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.

Conference paper

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC, et al., 2021, Regularization can help mitigate poisoning attacks... with the right hyperparameters, Publisher: arXiv

Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using L₂ regularization, empirically show the limitations of previous strategies and evidence the benefits of using L₂ regularization to dampen the effect of poisoning attacks.

Working paper

Co KT, Muñoz-González L, Kanthan L, Lupu EC, et al., 2021, Real-time Detection of Practical Universal Adversarial Perturbations

Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many different inputs; this leads to realistic and effective attacks that can be applied at scale. In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs by identifying suspicious neuron hyper-activations. Our results show the effectiveness of HyperNeuron on multiple tasks (image classification, object detection), against a wide variety of universal attacks, and in realistic scenarios, like perceptual ad-blocking and adversarial patches. HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real-time, which shows promise for the robust deployment of machine learning systems.

Journal article
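
The hyper-activation idea lends itself to a simple sketch: calibrate per-unit activation statistics on clean data, then flag inputs that drive unusually many units past a z-score threshold. The mechanics below are assumed for illustration and are not the HyperNeuron release.

```python
# Sketch of hyper-activation detection (assumed mechanics): flag an input
# if too many units in a monitored layer exceed a z-score threshold
# calibrated on clean data. Thresholds and data are invented.
import numpy as np

def calibrate(clean_activations):                 # shape: (n_samples, n_units)
    return clean_activations.mean(axis=0), clean_activations.std(axis=0) + 1e-8

def is_suspicious(activation, mean, std, z_thresh=4.0, max_outliers=10):
    z = np.abs((activation - mean) / std)
    return (z > z_thresh).sum() > max_outliers    # many hyper-active units => UAP?

rng = np.random.default_rng(1)
clean = rng.normal(0.0, 1.0, size=(1000, 512))
mean, std = calibrate(clean)
perturbed = clean[0] + (rng.random(512) < 0.05) * 8.0   # ~25 units pushed high
print(is_suspicious(clean[1], mean, std), is_suspicious(perturbed, mean, std))
# False True
```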

Chizari H, Lupu EC, 2021, Extracting randomness from the trend of IPI for cryptographic operators in implantable medical devices, IEEE Transactions on Dependable and Secure Computing, Vol: 18, Pages: 875-888, ISSN: 1545-5971

Achieving secure communication between an Implantable Medical Device (IMD) inside the body and a gateway outside the body has shown its criticality with recent reports of hackings such as in St. Jude Medical's Implantable Cardiac Devices, Johnson and Johnson insulin pumps and vulnerabilities in brain Neuro-implants. Since asymmetric cryptography is not a practical solution for IMDs due to their scarce computational and power resources, symmetric key cryptography is preferred. One of the factors in the security of a symmetric cryptographic system is the use of a strong encryption key. A solution to develop such a strong key without using extensive resources in an IMD is to extract it from the body's physiological signals. In order to have a strong enough key, the physiological signal must be a strong source of randomness, and the Inter-Pulse Interval (IPI) has been suggested as such a source. A strong randomness source should satisfy five conditions: Universality (available in all people), Liveness (available at any time), Robustness (producing strong random numbers), Permanence (independent from its history) and Uniqueness (independent from other sources). Nevertheless, these conditions (mainly the last three) have not been examined for the random extraction methods currently proposed for the IPI. In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Then, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that the IPI does not satisfy the Robustness and Permanence conditions of a randomness source; thus, extracting a strong uniform random number from the IPI value is mathematically impossible. Third, rather than using the value of the IPI, we propose its trend as the source for a new randomness extraction method, named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluat

Journal article
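
The Permanence test built on the Santha-Vazirani (SV) source can be sketched as follows: estimate the worst-case bias of the next bit conditioned on the previous k bits; a near-zero delta indicates history-independence, while a large delta means the next bit is largely determined by its past. Parameters and data below are invented and this is not the paper's measurement code.

```python
# Estimate the Santha-Vazirani delta of a bit stream: the worst-case bias
# of the next bit conditioned on the previous k bits. A perfect source
# gives delta near 0; delta near 0.5 means the next bit is nearly fixed.
import random
from collections import defaultdict

def sv_delta(bits, k=3):
    counts = defaultdict(lambda: [0, 0])
    for i in range(k, len(bits)):
        counts[tuple(bits[i - k:i])][bits[i]] += 1
    delta = 0.0
    for zeros, ones in counts.values():
        total = zeros + ones
        if total >= 30:                     # skip poorly sampled histories
            delta = max(delta, abs(ones / total - 0.5))
    return delta

random.seed(0)
fair = [random.getrandbits(1) for _ in range(20_000)]
trending = [0]
for _ in range(19_999):                     # stand-in for a slowly varying trend
    trending.append(trending[-1] if random.random() < 0.9 else 1 - trending[-1])
print(f"fair: {sv_delta(fair):.2f}  trending: {sv_delta(trending):.2f}")
# roughly: fair ~ 0.02, trending ~ 0.40
```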

Hau Z, Co KT, Demetriou S, Lupu E, et al., 2021, Object removal attacks on LiDAR-based 3D object detectors, NDSS 2021 Workshop, Publisher: Internet Society

LiDARs play a critical role in Autonomous Vehicles' (AVs) perception and their safe operations. Recent works have demonstrated that it is possible to spoof LiDAR return signals to elicit fake objects. In this work we demonstrate how the same physical capabilities can be used to mount a new, even more dangerous class of attacks, namely Object Removal Attacks (ORAs). ORAs aim to force 3D object detectors to fail. We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest (RoI) of 3D objects. By injecting illegitimate points behind the target object, we effectively shift points away from the target objects' RoIs. Our initial results using a simple random point selection strategy show that the attack is effective in degrading the performance of commonly used 3D object detection models.

Conference paper
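
The mechanism exploited by ORAs can be sketched geometrically: for a LiDAR that keeps a single return per direction, a spoofed return placed behind the target replaces the genuine one, emptying the object's region of interest. The 2D toy below (sensor at the origin, invented distances and RoI) is an illustration, not the attack implementation.

```python
# 2D toy of the single-return exploit: returns inside the target's region
# of interest are replaced by returns further along the same ray, so the
# object's RoI empties out. Sensor at origin; parameters invented.
import numpy as np

def object_removal_attack(points, roi_center, roi_radius, push=8.0):
    dist = np.maximum(np.linalg.norm(points, axis=1), 1e-6)
    in_roi = np.linalg.norm(points - roi_center, axis=1) < roi_radius
    scale = (dist + push) / dist            # move each hit further along its ray
    attacked = points.copy()
    attacked[in_roi] *= scale[in_roi, None]
    return attacked, int(in_roi.sum())

rng = np.random.default_rng(2)
cloud = rng.uniform(-15, 15, size=(500, 2))
attacked, n = object_removal_attack(cloud, np.array([8.0, 0.0]), roi_radius=1.5)
print(f"{n} returns shifted out of the target RoI")
```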

Matachana A, Co KT, Munoz Gonzalez L, Martinez D, Lupu E, et al., 2021, Robustness and transferability of universal attacks on compressed models, AAAI 2021 Workshop, Publisher: AAAI

Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques, including different forms of pruning and quantization, on robustness to UAP attacks. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.

Conference paper



Castiglione LM, Lupu EC, 2020, Hazard driven threat modelling for cyber physical systems, 2020 Joint Workshop on CPS&IoT Security and Privacy (CPSIOTSEC’20), Publisher: ACM, Pages: 13-24

Adversarial actors have shown their ability to infiltrate enterprise networks deployed around Cyber Physical Systems (CPSs) through social engineering, credential stealing and file-less infections. Once inside, they can gain enough privileges to maliciously call legitimate APIs and apply unsafe control actions to degrade the system performance and undermine its safety. Our work lies at the intersection of security and safety, and aims to understand the dependencies among security, reliability and safety in CPS/IoT. We present a methodology to perform hazard-driven threat modelling and impact assessment in the context of CPSs. The process starts from the analysis of behavioural, functional and architectural models of the CPS. We then apply System Theoretic Process Analysis (STPA) on the functional model to highlight high-level abuse cases. We leverage a mapping between the architectural and the system theoretic (ST) models to enumerate those components whose impairment provides the attacker with enough privileges to tamper with or disrupt the data-flows. This enables us to find a causal connection between the attack surface (in the architectural model) and system-level losses. We then link the behavioural and system theoretic representations of the CPS to quantify the impact of the attack. Using our methodology it is possible to compute a comprehensive attack graph of the known attack paths and to perform both a qualitative and quantitative impact assessment of the exploitation of vulnerabilities affecting target nodes. The framework and methodology are illustrated using a small-scale example featuring a Communication Based Train Control (CBTC) system. Aspects regarding the scalability of our methodology and its application in real-world scenarios are also considered. Finally, we discuss the possibility of using the results obtained to engineer both design-time and real-time defensive mechanisms.

Conference paper

Karafili E, Wang L, Lupu E, 2020, An argumentation-based reasoner to assist digital investigation and attribution of cyber-attacks, DFRWS EU, Publisher: Elsevier, Pages: 1-9, ISSN: 2666-2817

We expect an increase in the frequency and severity of cyber-attacks that comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process, by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests to the analyst where to focus further analyses by giving hints of the missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or reach attribution conclusions.

Conference paper
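
A toy flavour of preference-based argumentation in the spirit of ABR: rules derive candidate conclusions from technical and social evidence, and conflicting conclusions are resolved by rule priority. All rules, evidence and priorities below are invented, and ABR's actual reasoning is substantially richer.

```python
# Toy preference-based argumentation: rules fire when their premises are
# in the evidence; conflicting conclusions ("X" vs "not X") are resolved
# by rule priority. Everything here is invented for illustration.
evidence = {"malware_language_ru", "targets_gov", "office_hours_utc+3"}

rules = [
    # (priority, premises, conclusion): higher priority wins conflicts
    (1, {"malware_language_ru"}, "origin_ru"),
    (2, {"malware_language_ru", "strings_easily_planted"}, "not origin_ru"),
    (1, {"targets_gov", "office_hours_utc+3"}, "state_sponsored"),
]

def conclusions(evidence, rules):
    fired = [(p, c) for p, prem, c in rules if prem <= evidence]
    accepted = {}
    for priority, concl in fired:
        key = concl.removeprefix("not ")
        if key not in accepted or priority > accepted[key][0]:
            accepted[key] = (priority, concl)
    return {c for _, c in accepted.values()}

print(sorted(conclusions(evidence, rules)))
# ['origin_ru', 'state_sponsored']
```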


Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC, et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Other

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
