Publications
Wang H, Munoz-Gonzalez L, Hameed MZ, et al., 2023, SparSFA: Towards robust and communication-efficient peer-to-peer federated learning, Computers & Security, Vol: 129, ISSN: 0167-4048
Carnerero Cano J, Munoz Gonzalez L, Spencer P, et al., 2023, Hyperparameter learning under data poisoning: analysis of the influence of regularization via multiobjective bilevel optimization, IEEE Transactions on Neural Networks and Learning Systems, ISSN: 1045-9227
Machine Learning (ML) algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to deliberately degrade the algorithms' performance. Optimal attacks can be formulated as bilevel optimization problems and help to assess the algorithms' robustness in worst-case scenarios. We show that current approaches, which typically assume that hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters and models the attack as a multiobjective bilevel optimization problem. This allows us to formulate optimal attacks, learn hyperparameters and evaluate robustness under worst-case conditions. We apply this attack formulation to several ML classifiers using L₂ and L₁ regularization. Our evaluation on multiple datasets shows that choosing an "a priori" constant value for the regularization hyperparameter can be detrimental to the performance of the algorithms. This confirms the limitations of previous strategies and demonstrates the benefits of using L₂ and L₁ regularization to dampen the effect of poisoning attacks when hyperparameters are learned using a small trusted dataset. Additionally, our results show that regularization plays an important role in the robustness and stability of complex models, such as Deep Neural Networks, where the attacker has more flexibility to manipulate the decision boundary.
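The core of this formulation is a bilevel program in which the attacker optimizes the poisoning points while the defender learns both the model parameters and the regularization hyperparameter on a small trusted set. A schematic version, with notation assumed here rather than taken from the paper, is:

```latex
% Schematic formulation (notation assumed): the attacker chooses poisoning
% points D_p to maximise loss on a target set D_tgt, while the defender
% jointly learns the parameters \theta on the poisoned training set and the
% regularization hyperparameter \lambda on a small trusted set D_val.
\begin{aligned}
\max_{D_p} \;\; & \mathcal{L}\!\left(D_{\mathrm{tgt}};\, \theta^{\star}\right) \\
\text{s.t.}\;\; & \lambda^{\star} \in \arg\min_{\lambda}\; \mathcal{L}\!\left(D_{\mathrm{val}};\, \theta^{\star}(\lambda)\right), \\
                & \theta^{\star}(\lambda) \in \arg\min_{\theta}\; \mathcal{L}\!\left(D_{\mathrm{tr}} \cup D_p;\, \theta\right) + \lambda\, \Omega(\theta).
\end{aligned}
```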
Soikkeli J, Casale G, Munoz Gonzalez L, et al., 2023, Redundancy planning for cost efficient resilience to cyber attacks, IEEE Transactions on Dependable and Secure Computing, Vol: 20, Pages: 1154-1168, ISSN: 1545-5971
We investigate the extent to which redundancy (including with diversity) can help mitigate the impact of cyber attacks that aim to reduce system performance. Using analytical techniques, we estimate impacts, in terms of monetary costs, of penalties from breaching Service Level Agreements (SLAs), and find optimal resource allocations to minimize the overall costs arising from attacks. Our approach combines attack impact analysis, based on performance modeling using queueing networks, with an attack model based on attack graphs. We evaluate our approach using a case study of a website, and show how resource redundancy and diversity can improve the resilience of a system by reducing the likelihood of a fully disruptive attack. We find that the cost-effectiveness of redundancy depends on the SLA terms, the probability of attack detection, the time to recover, and the cost of maintenance. In our case study, redundancy with diversity achieved a saving of up to around 50 percent in expected attack costs relative to no redundancy. The overall benefit over time depends on how the saving during attacks compares to the added maintenance costs due to redundancy.
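A toy illustration of the kind of expected-cost comparison the paper formalizes (all numbers and the simple cost model below are assumptions, not values from the case study):

```python
# Illustrative only: toy expected-cost comparison in the spirit of the paper's
# analysis. The cost model and all numbers below are assumptions.

def expected_cost(p_full_outage, outage_hours, sla_penalty_per_hour,
                  annual_maintenance):
    """Expected annual cost = SLA penalties from full outages + maintenance."""
    return p_full_outage * outage_hours * sla_penalty_per_hour + annual_maintenance

# Without redundancy: a single successful attack takes the service down.
baseline = expected_cost(p_full_outage=0.30, outage_hours=8,
                         sla_penalty_per_hour=5_000, annual_maintenance=20_000)

# With diverse redundancy: both replicas must be compromised for a full outage,
# at the price of higher maintenance.
redundant = expected_cost(p_full_outage=0.30 * 0.30, outage_hours=8,
                          sla_penalty_per_hour=5_000, annual_maintenance=35_000)

print(f"baseline:  {baseline:,.0f}")
print(f"redundant: {redundant:,.0f}")  # worthwhile only if the saving exceeds the added maintenance
```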
Castiglione L, Hau Z, Ge P, et al., 2022, HA-grid: security aware hazard analysis for smart grids, IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids, Publisher: IEEE, Pages: 446-452
Attacks targeting smart grid infrastructures can result in the disruption of power supply as well as damage to costly equipment, with significant impact on safety and on end-consumers. It is therefore essential to identify attack paths in the infrastructure that lead to safety violations and to determine critical components that must be protected. In this paper, we introduce a methodology (HA-Grid) that incorporates both safety and security modelling of smart grid infrastructure to analyse the impact of cyber threats on the safety of smart grid infrastructures. HA-Grid is applied on a smart grid test-bed to identify attack paths that lead to safety hazards, and to determine the common nodes in these attack paths as critical components that must be protected.
Hau Z, Demetriou S, Muñoz-González L, et al., 2022, Shadow-catcher: looking into shadows to detect ghost objects in autonomous vehicle 3D sensing, ESORICS, Publisher: Springer International Publishing, Pages: 691-711
LiDAR-driven 3D sensing allows new generations of vehicles to achieve advanced levels of situation awareness. However, recent works have demonstrated that physical adversaries can spoof LiDAR return signals and deceive 3D object detectors into erroneously detecting “ghost” objects. Existing defenses are either impractical or focus only on vehicles. Unfortunately, smaller objects such as pedestrians and cyclists are easier to spoof, harder to defend, and carry worse safety implications. To address this gap, we introduce Shadow-Catcher, a set of new techniques embodied in an end-to-end prototype to detect both large and small ghost object attacks on 3D detectors. We characterize a new semantically meaningful physical invariant (3D shadows) which Shadow-Catcher leverages for validating objects. Our evaluation on the KITTI dataset shows that Shadow-Catcher consistently achieves more than 94% accuracy in identifying anomalous shadows for vehicles, pedestrians, and cyclists, while remaining robust to a novel class of strong “invalidation” attacks targeting the defense system. Shadow-Catcher achieves real-time detection, requiring only 0.003 s–0.021 s on average to process an object in a 3D point cloud on commodity hardware, and achieves a 2.17x speedup compared to prior work.
Sturluson SP, Trew S, Muñoz-González L, et al., 2021, FedRAD: Federated Robust Adaptive Distillation
The robustness of federated learning (FL) is vital for the distributed training of an accurate global model that is shared among a large number of clients. The collaborative learning framework, which typically aggregates model updates, is vulnerable to model poisoning attacks from adversarial clients. Since the information shared between the global server and participants is limited to model parameters, it is challenging to detect bad model updates. Moreover, real-world datasets are usually heterogeneous and not independent and identically distributed (Non-IID) among participants, which makes the design of such a robust FL pipeline more difficult. In this work, we propose a novel robust aggregation method, Federated Robust Adaptive Distillation (FedRAD), to detect adversaries and robustly aggregate local models based on properties of the median statistic, and then perform an adapted version of ensemble Knowledge Distillation. We run extensive experiments to evaluate the proposed method against recently published works. The results show that FedRAD outperforms all other aggregators in the presence of adversaries, as well as in heterogeneous data distributions.
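A minimal sketch of median-based robust aggregation in the spirit of FedRAD; the client scoring rule below is a simplification, and the paper's adapted ensemble knowledge-distillation step is not shown:

```python
import numpy as np

def robust_aggregate(client_updates: np.ndarray) -> np.ndarray:
    """client_updates: shape (n_clients, n_params) of flattened model updates."""
    median = np.median(client_updates, axis=0)            # coordinate-wise median
    dists = np.linalg.norm(client_updates - median, axis=1)
    scores = 1.0 / (1.0 + dists)                          # closer to the median -> higher weight
    weights = scores / scores.sum()
    return weights @ client_updates                       # weighted average of updates

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(8, 1000))
poisoned = rng.normal(5.0, 0.1, size=(2, 1000))           # adversarial clients
agg = robust_aggregate(np.vstack([honest, poisoned]))
print(np.abs(agg).mean())                                 # stays close to the honest updates
```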
Co KT, Muñoz-González L, Kanthan L, et al., 2021, Universal adversarial robustness of texture and shape-biased models, IEEE International Conference on Image Processing (ICIP)
Increasing shape-bias in deep neural networks has been shown to improve robustness to common corruptions and noise. In this paper we analyze the adversarial robustness of texture and shape-biased models to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate the robustness of DNN models with varying degrees of shape-based training. We find that shape-biased models do not markedly improve adversarial robustness, and we show that ensembles of texture and shape-biased models can improve universal adversarial robustness while maintaining strong performance.
Wang H, Muñoz-González L, Eklund D, et al., 2021, Non-IID data re-balancing at IoT edge with peer-to-peer federated learning for anomaly detection, WiSec '21: 14th ACM Conference on Security and Privacy in Wireless and Mobile Networks, Publisher: ACM, Pages: 153-163
The increase in computational power of edge devices has enabled the adoption of distributed machine learning technologies such as federated learning, which builds collaborative models by performing the training locally on the edge devices, improving both efficiency and privacy, as the data remains on the devices. However, in some IoT networks the connectivity between devices and system components can be limited, which prevents the use of federated learning, as it requires a central node to orchestrate the training of the model. To sidestep this, peer-to-peer learning appears as a promising solution, as it does not require such an orchestrator. On the other hand, the security challenges in IoT deployments have fostered the use of machine learning for attack and anomaly detection. In these problems, under supervised learning approaches, the training datasets are typically imbalanced, i.e. the number of anomalies is very small compared to the number of benign data points, which requires the use of re-balancing techniques to improve the algorithms' performance. In this paper, we propose a novel peer-to-peer algorithm, P2PK-SMOTE, to train supervised anomaly detection machine learning models in non-IID scenarios, including mechanisms to locally re-balance the training datasets via synthetic generation of data points from the minority class. To improve performance in non-IID scenarios, we also include a mechanism for sharing a small fraction of synthetic data from the minority class across devices, aiming to reduce the risk of data de-identification. Our experimental evaluation on real datasets for IoT anomaly detection across a different set of scenarios validates the benefits of our proposed approach.
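A minimal sketch of the local re-balancing step, assuming the standard SMOTE recipe of interpolating between a minority point and one of its nearest minority neighbours; the peer-to-peer sharing protocol of P2PK-SMOTE is not reproduced here:

```python
import numpy as np

def smote_oversample(minority: np.ndarray, n_new: int, k: int = 5, seed: int = 0) -> np.ndarray:
    """Generate n_new synthetic minority points by interpolation (SMOTE-style)."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)
        neighbours = np.argsort(d)[1:k + 1]           # k nearest minority neighbours, excluding x
        x_nn = minority[rng.choice(neighbours)]
        gap = rng.random()
        synthetic.append(x + gap * (x_nn - x))        # interpolate along the segment
    return np.array(synthetic)

anomalies = np.random.default_rng(1).normal(3.0, 0.2, size=(20, 4))   # scarce anomaly class
extra = smote_oversample(anomalies, n_new=200)
print(extra.shape)   # (200, 4) synthetic anomalies to re-balance the local training set
```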
Carnerero-Cano J, Muñoz-González L, Spencer P, et al., 2021, Regularization can help mitigate poisoning attacks... with the right hyperparameters, Publisher: arXiv
Machine learning algorithms are vulnerable to poisoning attacks, where a fraction of the training data is manipulated to degrade the algorithms' performance. We show that current approaches, which typically assume that regularization hyperparameters remain constant, lead to an overly pessimistic view of the algorithms' robustness and of the impact of regularization. We propose a novel optimal attack formulation that considers the effect of the attack on the hyperparameters, modelling the attack as a minimax bilevel optimization problem. This allows us to formulate optimal attacks, select hyperparameters and evaluate robustness under worst-case conditions. We apply this formulation to logistic regression using L₂ regularization, empirically show the limitations of previous strategies, and evidence the benefits of using L₂ regularization to dampen the effect of poisoning attacks.
Co KT, Muñoz-González L, Kanthan L, et al., 2021, Real-time Detection of Practical Universal Adversarial Perturbations
Universal Adversarial Perturbations (UAPs) are a prominent class of adversarial examples that exploit the systemic vulnerabilities and enable physically realizable and robust attacks against Deep Neural Networks (DNNs). UAPs generalize across many different inputs; this leads to realistic and effective attacks that can be applied at scale. In this paper we propose HyperNeuron, an efficient and scalable algorithm that allows for the real-time detection of UAPs by identifying suspicious neuron hyper-activations. Our results show the effectiveness of HyperNeuron on multiple tasks (image classification, object detection), against a wide variety of universal attacks, and in realistic scenarios, like perceptual ad-blocking and adversarial patches. HyperNeuron is able to simultaneously detect both adversarial mask and patch UAPs with comparable or better performance than existing UAP defenses whilst introducing a significantly reduced latency of only 0.86 milliseconds per image. This suggests that many realistic and practical universal attacks can be reliably mitigated in real-time, which shows promise for the robust deployment of machine learning systems.
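A minimal sketch of the underlying idea of flagging suspicious neuron hyper-activations; this is not HyperNeuron's exact algorithm, and the layer choice, scoring rule and threshold are assumptions:

```python
import numpy as np

class ActivationDetector:
    def __init__(self, clean_activations: np.ndarray, percentile: float = 99.0):
        # clean_activations: (n_samples, n_neurons) from a chosen hidden layer
        self.mu = clean_activations.mean(axis=0)
        self.sigma = clean_activations.std(axis=0) + 1e-8
        clean_scores = self._score(clean_activations)
        self.threshold = np.percentile(clean_scores, percentile)   # calibrated on clean data

    def _score(self, acts: np.ndarray) -> np.ndarray:
        z = (acts - self.mu) / self.sigma
        return np.abs(z).mean(axis=1)            # mean absolute standardized activation

    def is_suspicious(self, acts: np.ndarray) -> np.ndarray:
        return self._score(acts) > self.threshold

rng = np.random.default_rng(0)
detector = ActivationDetector(rng.normal(size=(5000, 512)))
perturbed = rng.normal(size=(10, 512)) + 3.0     # stand-in for UAP-induced hyper-activations
print(detector.is_suspicious(perturbed).all())
```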
Labaca-Castro R, Muñoz-González L, Pendlebury F, et al., 2021, Realizable Universal Adversarial Perturbations for Malware
Machine learning classifiers are vulnerable to adversarial examples: input-specific perturbations that manipulate models' output. Universal Adversarial Perturbations (UAPs), which identify noisy patterns that generalize across the input space, allow the attacker to greatly scale up the generation of such examples. Although UAPs have been explored in application domains beyond computer vision, little is known about their properties and implications in the specific context of realizable attacks, such as malware, where attackers must satisfy challenging problem-space constraints. In this paper we explore the challenges and strengths of UAPs in the context of malware classification. We generate sequences of problem-space transformations that induce UAPs in the corresponding feature-space embedding and evaluate their effectiveness across different malware domains. Additionally, we propose adversarial training-based mitigations using knowledge derived from the problem-space transformations, and compare against alternative feature-space defenses. Our experiments limit the effectiveness of a white-box Android evasion attack to ~20% at the cost of ~3% TPR at 1% FPR. We additionally show how our method can be adapted to more restrictive domains such as Windows malware. We observe that while adversarial training in the feature space must deal with large and often unconstrained regions, UAPs in the problem space identify specific vulnerabilities that allow us to harden a classifier more effectively, shifting the challenges and associated cost of identifying new universal adversarial transformations back to the attacker.
Matachana A, Co KT, Munoz Gonzalez L, et al., 2021, Robustness and transferability of universal attacks on compressed models, AAAI 2021 Workshop, Publisher: AAAI
Neural network compression methods like pruning and quantization are very effective at efficiently deploying Deep Neural Networks (DNNs) on edge devices. However, DNNs remain vulnerable to adversarial examples: inconspicuous inputs that are specifically designed to fool these models. In particular, Universal Adversarial Perturbations (UAPs) are a powerful class of adversarial attacks which create adversarial perturbations that can generalize across a large set of inputs. In this work, we analyze the effect of various compression techniques on robustness to UAP attacks, including different forms of pruning and quantization. We test the robustness of compressed models to white-box and transfer attacks, comparing them with their uncompressed counterparts on the CIFAR-10 and SVHN datasets. Our evaluations reveal clear differences between pruning methods, including Soft Filter and Post-training Pruning. We observe that UAP transfer attacks between pruned and full models are limited, suggesting that the systemic vulnerabilities across these models are different. This finding has practical implications, as using different compression techniques can blunt the effectiveness of black-box transfer attacks. We show that, in some scenarios, quantization can produce gradient masking, giving a false sense of security. Finally, our results suggest that conclusions about the robustness of compressed models to UAP attacks are application dependent, as we observe different phenomena in the two datasets used in our experiments.
Grama M, Musat M, Muñoz-González L, et al., 2020, Robust aggregation for adaptive privacy preserving federated learning in healthcare, Publisher: arXiv
Federated learning (FL) has enabled training models collaboratively from multiple data-owning parties without sharing their data. Given the privacy regulations around patients' healthcare data, learning-based systems in healthcare can greatly benefit from privacy-preserving FL approaches. However, typical model aggregation methods in FL are sensitive to local model updates, which may lead to failure in learning a robust and accurate global model. In this work, we implement and evaluate different robust aggregation methods in FL applied to healthcare data. Furthermore, we show that such methods can detect and discard faulty or malicious local clients during training. We run two sets of experiments using two real-world healthcare datasets for training medical diagnosis classification tasks. Each dataset is used to simulate the performance of three different robust FL aggregation strategies when facing different poisoning attacks. The results show that privacy-preserving methods can be successfully applied alongside Byzantine-robust aggregation techniques. In particular, we observed that using differential privacy (DP) did not significantly impact the final learning convergence of the different aggregation strategies.
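A minimal sketch of combining a Byzantine-robust aggregator (here the coordinate-wise median) with Gaussian noise added to clipped client updates; the clipping norm and noise scale are illustrative and not calibrated to a formal (epsilon, delta) budget:

```python
import numpy as np

def clip_and_noise(update: np.ndarray, clip_norm: float, noise_std: float, rng) -> np.ndarray:
    """Clip the update's L2 norm and add Gaussian noise (DP-style, uncalibrated)."""
    norm = np.linalg.norm(update)
    clipped = update * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, noise_std, size=update.shape)

def median_aggregate(updates: np.ndarray) -> np.ndarray:
    return np.median(updates, axis=0)              # Byzantine-robust aggregation

rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.05, size=(9, 500))
malicious = np.full((1, 500), 10.0)                # a poisoned client update
noisy = np.array([clip_and_noise(u, clip_norm=1.0, noise_std=0.01, rng=rng)
                  for u in np.vstack([honest, malicious])])
print(np.abs(median_aggregate(noisy)).max())       # the median discards the outlier
```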
Carnerero-Cano J, Muñoz-González L, Spencer P, et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation
Co KT, Munoz Gonzalez L, de Maupeou S, et al., 2019, Procedural noise adversarial examples for black-box attacks on deep neural networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM, Pages: 275-289
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.
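A minimal sketch of building an input-agnostic perturbation from procedural (Gabor-style) noise, rescaled to an L-infinity budget; the kernel parameters are illustrative, whereas the paper searches over such parameters with Bayesian optimization:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """A single Gabor kernel: Gaussian envelope times an oriented cosine."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(h=224, w=224, n_kernels=64, size=23, sigma=4.0, freq=0.1, theta=0.8, seed=0):
    """Sparse convolution: sum randomly placed, randomly signed Gabor kernels."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((h + size, w + size))
    k = gabor_kernel(size, sigma, freq, theta)
    for _ in range(n_kernels):
        y, x = rng.integers(h), rng.integers(w)
        canvas[y:y + size, x:x + size] += rng.choice([-1.0, 1.0]) * k
    return canvas[size // 2: size // 2 + h, size // 2: size // 2 + w]

eps = 8 / 255
noise = gabor_noise()
uap = eps * noise / (np.abs(noise).max() + 1e-12)   # scale to the L-inf budget
print(uap.shape, np.abs(uap).max())                 # (224, 224) pattern, ready to tile over RGB channels
```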
Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Editors: Sikos, Publisher: Springer, Pages: 47-79
Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality, improved quality of life for users, e.g., through personalization, optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms’ weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.
Muñoz-González L, Co KT, Lupu EC, 2019, Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging
Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median.
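A minimal sketch in the spirit of Adaptive Federated Averaging: a per-client estimate of update quality drives the aggregation weights and blocks persistently bad clients. The paper uses a Hidden Markov Model for this estimate; the multiplicative update below is a simplification:

```python
import numpy as np

class AdaptiveAverager:
    def __init__(self, n_clients, block_threshold=0.1):
        self.p_good = np.full(n_clients, 0.5)        # belief that each client sends good updates
        self.blocked = np.zeros(n_clients, dtype=bool)
        self.block_threshold = block_threshold

    def aggregate(self, updates: np.ndarray) -> np.ndarray:
        ref = np.median(updates, axis=0)             # robust reference update
        dist = np.linalg.norm(updates - ref, axis=1)
        good_evidence = dist < np.median(dist) + 2 * dist.std()   # crude per-round evidence
        self.p_good = 0.8 * self.p_good + 0.2 * good_evidence     # smoothed belief update
        self.blocked |= self.p_good < self.block_threshold        # block persistently bad clients
        w = np.where(self.blocked, 0.0, self.p_good)
        return (w / w.sum()) @ updates

rng = np.random.default_rng(0)
avg = AdaptiveAverager(n_clients=10)
for _ in range(20):                                  # one malicious client (index 9)
    updates = rng.normal(0, 0.1, (10, 100))
    updates[9] += 5.0
    avg.aggregate(updates)
print(avg.p_good.round(2), avg.blocked[9])           # the malicious client ends up blocked
```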
Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press
The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.
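A toy sketch of cost-aware countermeasure selection: estimate expected losses plus action cost for each candidate response and pick the cheapest. The action set, probabilities and costs are illustrative, not taken from the paper:

```python
# Candidate responses: (service_downtime_hours, residual_compromise_prob, action_cost)
candidates = {
    "do_nothing":           (0, 0.9,   0),
    "isolate_host":         (6, 0.2, 500),
    "recover_service":      (2, 0.5, 300),
    "isolate_and_recover":  (8, 0.1, 800),
}

LOSS_PER_DOWNTIME_HOUR = 400
LOSS_IF_COMPROMISE_SPREADS = 20_000

def expected_cost(downtime, residual_prob, action_cost):
    """Expected cost over the horizon: downtime losses + spread risk + action cost."""
    return (downtime * LOSS_PER_DOWNTIME_HOUR
            + residual_prob * LOSS_IF_COMPROMISE_SPREADS
            + action_cost)

best = min(candidates, key=lambda a: expected_cost(*candidates[a]))
for action, params in candidates.items():
    print(f"{action:22s} expected cost: {expected_cost(*params):>8,.0f}")
print("selected:", best)
```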
Co KT, Munoz Gonzalez L, Lupu E, 2019, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
Collinge G, Lupu E, Munoz Gonzalez L, 2019, Defending against Poisoning Attacks in Online Learning Settings, European Symposium on Artificial Neural Networks, Publisher: ESANN
Machine learning systems are vulnerable to data poisoning, a coordinated attack where a fraction of the training dataset is manipulated by an attacker to subvert learning. In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios. We also propose two defence mechanisms to mitigate the effect of online poisoning attacks by analysing the impact of the data points in the classifier and by means of an adaptive combination of machine learning classifiers with different learning rates. Our experimental evaluation supports the usefulness of our proposed defences to mitigate the effect of poisoning attacks in online learning settings.
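A minimal sketch of the adaptive-combination idea: several online learners with different learning rates run in parallel and are combined with weights that decay when a learner errs (a Hedge-style rule); the details differ from the paper's defences:

```python
import numpy as np

class OnlineLogReg:
    """Online logistic regression trained by stochastic gradient ascent on the log-likelihood."""
    def __init__(self, dim, lr):
        self.w = np.zeros(dim)
        self.lr = lr
    def predict_proba(self, x):
        z = np.clip(x @ self.w, -30.0, 30.0)          # clip to avoid overflow in exp
        return 1.0 / (1.0 + np.exp(-z))
    def update(self, x, y):
        self.w += self.lr * (y - self.predict_proba(x)) * x

rng = np.random.default_rng(0)
dim = 5
learners = [OnlineLogReg(dim, lr) for lr in (0.01, 0.1, 1.0)]
weights = np.ones(len(learners))
w_true = rng.normal(size=dim)

for t in range(2000):
    x = rng.normal(size=dim)
    y = float(x @ w_true > 0)
    if t % 10 == 0:                                   # a fraction of poisoned (flipped) labels
        y = 1.0 - y
    preds = np.array([l.predict_proba(x) > 0.5 for l in learners])
    weights *= np.where(preds == y, 1.0, 0.9)         # down-weight learners that err
    weights /= weights.sum()
    for l in learners:
        l.update(x, y)

print(weights.round(3))   # weight concentrates on the learning rate that copes best with poisoning
```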
Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, et al., 2019, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, Vol: 16, Pages: 231-244, ISSN: 1941-0018
Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling static and dynamic network risk assessment. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.
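A minimal sketch of exact inference on a tiny Bayesian attack graph by brute-force enumeration, which is feasible only for very small graphs; the paper relies on efficient exact algorithms for realistic sizes. The three-node graph and probabilities are illustrative:

```python
from itertools import product

# P(A=1): attacker exploits the internet-facing host
p_A = 0.6
# P(B=1 | A): lateral movement to an internal host
p_B = {0: 0.0, 1: 0.7}
# P(C=1 | A, B): compromise of the database, modelled as a noisy-OR of the parents
def p_C(a, b):
    return 1 - (1 - 0.3 * a) * (1 - 0.8 * b)

def joint(a, b, c):
    pa = p_A if a else 1 - p_A
    pb = p_B[a] if b else 1 - p_B[a]
    pc = p_C(a, b) if c else 1 - p_C(a, b)
    return pa * pb * pc

# Static analysis: marginal probability that the database is compromised
p_c1 = sum(joint(a, b, 1) for a, b in product([0, 1], repeat=2))
# Dynamic analysis: the same probability after observing evidence B=1
p_c1_given_b1 = (sum(joint(a, 1, 1) for a in [0, 1])
                 / sum(joint(a, 1, c) for a, c in product([0, 1], repeat=2)))
print(round(p_c1, 3), round(p_c1_given_b1, 3))
```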
Paudice A, Muñoz-González L, Lupu EC, 2019, Label sanitization against label flipping poisoning attacks, Nemesis'18. Workshop in Recent Advances in Adversarial Machine Learning, Publisher: Springer Verlag, Pages: 5-15, ISSN: 0302-9743
Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data in the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective in significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
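A minimal sketch of detecting and relabelling suspicious training points with a k-nearest-neighbour rule, assuming the common recipe of relabelling a point when an overwhelming majority of its neighbours disagree with its label; the parameters (k, the agreement threshold eta) are illustrative:

```python
import numpy as np

def sanitize_labels(X: np.ndarray, y: np.ndarray, k: int = 10, eta: float = 0.9) -> np.ndarray:
    """Relabel points whose k nearest neighbours overwhelmingly carry the other label."""
    y_clean = y.copy()
    for i in range(len(X)):
        d = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]          # k nearest, excluding the point itself
        maj_frac = np.mean(y[neighbours] == 1)
        if maj_frac >= eta and y[i] == 0:
            y_clean[i] = 1                           # neighbours overwhelmingly say 1
        elif maj_frac <= 1 - eta and y[i] == 1:
            y_clean[i] = 0                           # neighbours overwhelmingly say 0
    return y_clean

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (100, 2)), rng.normal(2, 0.5, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
flipped = rng.choice(200, size=20, replace=False)    # simulate a label-flipping attack
y_poisoned = y.copy()
y_poisoned[flipped] ^= 1
recovered = sanitize_labels(X, y_poisoned)
print((recovered == y).mean())                       # most flipped labels are restored
```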
Muñoz-González L, Carnerero-Cano J, Co KT, et al., 2019, Challenges and Advances in Adversarial Machine Learning, Resilience and Hybrid Threats: Security and Integrity for the Digital World, Publisher: IOS Press, Pages: 102-102
Muñoz-González L, Pfitzner B, Russo M, et al., 2019, Poisoning Attacks with Generative Adversarial Nets
Kott A, Blakely B, Henshel D, et al., 2018, Approaches to Enhancing Cyber Resilience: Report of the North Atlantic Treaty Organization (NATO) Workshop IST-153, AD1050894
This report summarizes the discussions and findings of the 2017 North Atlantic Treaty Organization (NATO) Workshop, IST-153, on Cyber Resilience, held in Munich, Germany, on 23-25 October 2017, at the University of Bundeswehr. Despite continual progress in managing risks in the cyber domain, anticipation and prevention of all possible attacks and malfunctions are not feasible for the current or future systems comprising the cyber infrastructure. Therefore, interest in cyber resilience (as opposed to merely risk-based approaches) is increasing rapidly, in literature and in practice. Unlike concepts of risk or robustness - which are often and incorrectly conflated with resilience - resiliency refers to the system's ability to recover or regenerate its performance to a sufficient level after an unexpected impact produces a degradation of its performance. The exact relation among resilience, risk, and robustness has not been well articulated technically. The presentations and discussions at the workshop yielded this report. It focuses on the following topics that the participants of the workshop saw as particularly important: fundamental properties of cyber resilience; approaches to measuring and modeling cyber resilience; mission modeling for cyber resilience; systems engineering for cyber resilience, and dynamic defense as a path toward cyber resilience.
Munoz Gonzalez L, Lupu E, 2018, The secret of machine learning, ITNOW, Vol: 60, Pages: 38-39, ISSN: 1746-5702
Luis Muñoz-González and Emil C. Lupu, from Imperial College London, explore the vulnerabilities of machine learning algorithms.