Barrere Cambrun M, Vieira Steiner R, Mohsen R, et al., Tracking the Bad Guys: An Efficient Forensic Methodology To Trace Multi-step Attacks Using Core Attack Graphs, 13th International Conference on Network and Service Management (CNSM'17)
In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Moreover, core attack graphs allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
Munoz Gonzalez L, Lupu E, Bayesian Attack Graphs for Security Risk Assessment, IST-153 NATO Workshop on Cyber Resilience
Muñoz-González L, Sgandurra D, Paudice A, et al., 2017, Efficient Attack Graph Analysis through Approximate Inference, ACM Transactions on Privacy and Security, Vol: 20, ISSN: 2471-2566
Attack graphs provide compact representations of the attack paths an attacker can follow to compromise network resources from the analysis of network vulnerabilities and topology. These representations are a powerful tool for security risk assessment. Bayesian inference on attack graphs enables the estimation of the risk of compromise to the system's components given their vulnerabilities and interconnections, and accounts for multi-step attacks spreading through the system. Whilst static analysis considers the risk posture at rest, dynamic analysis also accounts for evidence of compromise, e.g. from SIEM software or forensic investigation. However, in this context, exact Bayesian inference techniques do not scale well. In this paper we show how Loopy Belief Propagation - an approximate inference technique - can be applied to attack graphs, and that it scales linearly in the number of nodes for both static and dynamic analysis, making such analyses viable for larger networks. We experiment with different topologies and network clustering on synthetic Bayesian attack graphs with thousands of nodes to show that the algorithm's accuracy is acceptable and that it converges to a stable solution. We compare sequential and parallel versions of Loopy Belief Propagation with exact inference techniques for both static and dynamic analysis, showing the advantages and gains of approximate inference techniques when scaling to larger attack graphs.
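As a toy illustration of the computation the paper discusses (not the authors' code, and with made-up probabilities): static analysis on a three-node Bayesian attack graph with noisy-OR conditionals, computed exactly by enumeration. Exact inference of this kind scales exponentially with graph size, which is precisely why the paper turns to Loopy Belief Propagation to approximate the same marginals in linear time.

```python
from itertools import product

# Hypothetical three-node attack graph: A is the entry point; B is reachable
# from A; C is reachable from both A and B. Conditionals are noisy-OR over
# the exploit success probabilities of compromised parents.
p_A = 0.3                 # prior: entry point exploited
p_AB = 0.8                # P(exploit A -> B succeeds)
p_AC, p_BC = 0.5, 0.9     # P(exploit A -> C), P(exploit B -> C)

def noisy_or(probs):
    """P(child compromised) given success probs of active parent exploits."""
    q = 1.0
    for p in probs:
        q *= 1.0 - p
    return 1.0 - q

def joint(a, b, c):
    """Joint probability of one compromise configuration (a, b, c in {0,1})."""
    pa = p_A if a else 1.0 - p_A
    pb1 = noisy_or([p_AB] if a else [])
    pb = pb1 if b else 1.0 - pb1
    pc1 = noisy_or(([p_AC] if a else []) + ([p_BC] if b else []))
    pc = pc1 if c else 1.0 - pc1
    return pa * pb * pc

# Static analysis: marginal probability of compromise for each node,
# by summing the joint over all 2^3 configurations.
marginals = [0.0, 0.0, 0.0]
for a, b, c in product((0, 1), repeat=3):
    p = joint(a, b, c)
    for i, v in enumerate((a, b, c)):
        marginals[i] += p * v

print(marginals)  # P(A), P(B), P(C) compromised
```

Dynamic analysis would additionally condition these marginals on observed evidence of compromise, e.g. dividing by the probability of the observation, which is where exact methods become costly and approximate inference pays off.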
Illiano V, Steiner RV, Lupu EC, 2017, Unity is strength! Combining attestation and measurements inspection to handle malicious data injections in WSNs, Conference on Security and Privacy in Wireless and Mobile Networks (WiSec) 2017, Publisher: ACM, Pages: 134-144
Attestation and measurements inspection are different but complementary approaches towards the same goal: ascertaining the integrity of sensor nodes in wireless sensor networks. In this paper we compare the benefits and drawbacks of both techniques and seek to determine how to best combine them. However, our study shows that no single solution exists, as each choice introduces changes in the measurements collection process, affects the attestation protocol, and gives a different balance between the high detection rate of attestation and the low power overhead of measurements inspection. Therefore, we propose three strategies that combine measurements inspection and attestation in different ways, and a way to choose between them based on the requirements of different applications. We analyse their performance both analytically and in a simulator. The results show that the combined strategies can achieve a detection rate close to attestation, in the range 96-99%, whilst keeping a power overhead close to measurements inspection, in the range 1-10%.
Cullen A, Williams B, Bertino E, et al., 2017, Mission support for drones: a policy based approach, International Workshop on Micro Aerial Vehicle Networks, Systems, and Applications (DRONET 17), Publisher: ACM, Pages: 7-12
We examine the impact of increasing autonomy on the use of airborne drones in joint operations by collaborative parties. As the degree of automation employed increases towards the level implied by the term ‘autonomous’, it becomes apparent that existing control mechanisms are insufficiently flexible. Using an architecture introduced by Bertino et al. and Verma et al., we consider the use of dynamic policy modification as a means to adjust to rapidly evolving scenarios. We show mechanisms which allow this approach to improve the effectiveness of operations without compromise to security or safety.
Karafili E, Lupu E, 2017, Enabling Data Sharing in Contextual Environments: Policy Representation and Analysis, ACM Symposium on Access Control Models and Technologies (SACMAT), Publisher: ACM, Pages: 231-238
Internet of Things environments enable us to capture more and more data about the physical environment we live in and about ourselves. The data enable us to optimise resources, personalise services and offer unprecedented insights into our lives. However, to achieve these insights data need to be shared (and sometimes sold) between organisations, imposing rights and obligations upon the sharing parties in accordance with multiple layers of sometimes conflicting legislation at international, national and organisational levels. In this work, we show how such rules can be captured in a formal representation called ``Data Sharing Agreements''. We introduce the use of abductive reasoning and argumentation-based techniques to work with context-dependent rules, detect inconsistencies between them, and resolve the inconsistencies by assigning priorities to the rules. We show how, through the use of argumentation-based techniques, use-cases taken from real-life applications are handled flexibly, addressing trade-offs between confidentiality, privacy, availability and safety.
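The priority-based conflict resolution the abstract mentions can be sketched roughly as follows. The rule format, rule names and decision values below are invented for illustration; the paper's formal Data Sharing Agreement language and its abductive/argumentation machinery are far richer than this toy scheme.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class Rule:
    name: str
    applies: Callable[[Dict], bool]   # does the rule fire in this context?
    decision: str                     # "permit" or "deny"
    priority: int                     # higher priority wins on conflict

def decide(rules: List[Rule], context: Dict, default: str = "deny") -> str:
    """Fire all rules that apply in the context; if their decisions
    disagree, resolve the conflict in favour of the highest-priority rule."""
    fired = [r for r in rules if r.applies(context)]
    if not fired:
        return default
    return max(fired, key=lambda r: r.priority).decision

# Hypothetical rules: privacy legislation outranks an organisational
# sharing agreement whenever personal data is involved.
rules = [
    Rule("privacy_law", lambda c: c.get("personal_data", False), "deny", 2),
    Rule("sharing_agreement", lambda c: c.get("trusted_partner", False), "permit", 1),
]
print(decide(rules, {"personal_data": True, "trusted_partner": True}))  # deny
print(decide(rules, {"trusted_partner": True}))                         # permit
```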
Felmlee D, Lupu E, McMillan C, et al., Decision-making in policy governed human-autonomous systems teams, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE
Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems but are defined differently in each case. Generally humans have the ability to interpret the intent behind the policies, to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, more flexible policies for the spectrum of human-autonomous systems collaborations. In coalition environments this spectrum extends across the boundaries of authority in pursuit of a common coalition goal and covers collaborations between human and autonomous systems alike. In social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking the expected rewards, costs, satisfaction and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive within this context as it provides a theory with “measurable” components that can be readily integrated in machine reasoning processes.
Karafili E, Lupu E, Arunkumar S, et al., Argumentation-based policy analysis for drone systems, Dais Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE
The use of drone systems is increasing especially in dangerous environments where manned operations are too risky. Different entities are involved in drone systems’ missions and they come along with their vast varieties of specifications. The behaviour of the system is described by its set of policies that should satisfy the requirements and specifications of the different entities and the system itself. Deciding the policies that describe the actions to be taken is not trivial, as the different requirements and specifications can lead to conflicting actions. We introduce an argumentation-based policy analysis that captures conflicts for which properties have been specified. Our solution allows different rules to take priority in different contexts. We propose a decision making process that solves the detected conflicts by using a dynamic conflict resolution based on the priorities between rules. We apply our solution to two case studies where drone systems are used for military and disaster rescue operations.
Karafili E, Pipes S, Lupu E, Verification techniques for policy based systems, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE
Verification techniques are applied to policy based systems to ensure design correctness and to aid in the discovery of errors at an early stage of the development life cycle. A primary goal of policy verification is to evaluate the policy’s validity. Other analyses on policy based systems include the identification of conflicting policies and policy efficiency evaluation and improvement. In this work, we present a discussion and classification of recent research on verification techniques for policy based systems. We analyse several techniques and identify popular supporting verification tools. An evaluation of the benefits and drawbacks of the existing policy analyses is made. Some of the commonly identified problems were the significant need for computational power, the limitation of the techniques to a particular policy model, which restricts their extension to other policy models, and the lack of efficient conflict resolution methods. We use the evaluation results to discuss the further challenges and future research directions that will be faced by policy verification techniques. In particular, we discuss specific requirements concerning verification techniques for coalition policy systems and autonomous decision making.
Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, et al., 2017, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, ISSN: 1941-0018
Karafili E, Kakas A, Spanoudakis N, et al., Argumentation-based security for social good, AAAI Spring Symposium 2017, AI for the Social Good, Publisher: AAAI
The increase of connectivity and the impact it has on everyday life are raising new and existing security problems that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply should be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, which is a well-suited technique for implementing decision making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and decide the regulation rule that should be used while using and sharing data. We illustrate our solution through concrete examples.
Illiano V, Muñoz-González L, Lupu E, 2016, Don't fool me!: Detection, Characterisation and Diagnosis of Spoofed and Masked Events in Wireless Sensor Networks, IEEE Transactions on Dependable and Secure Computing, Vol: 14, Pages: 279-293, ISSN: 1545-5971
Wireless Sensor Networks carry a high risk of being compromised, as their deployments are often unattended, physically accessible and the wireless medium is difficult to secure. Malicious data injections take place when the sensed measurements are maliciously altered to trigger wrong and potentially dangerous responses. When many sensors are compromised, they can collude with each other to alter the measurements making such changes difficult to detect. Distinguishing between genuine and malicious measurements is even more difficult when significant variations may be introduced because of events, especially if more events occur simultaneously. We propose a novel methodology based on wavelet transform to detect malicious data injections, to characterise the responsible sensors, and to distinguish malicious interference from faulty behaviours. The results, both with simulated and real measurements, show that our approach is able to counteract sophisticated attacks, achieving a significant improvement over state-of-the-art approaches.
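The paper's wavelet-based algorithm is considerably more sophisticated, but the underlying idea of cross-validating each sensor against a robust consensus of its peers can be sketched as follows. The detection rule and threshold below are illustrative choices, not the authors':

```python
import statistics

def flag_suspect_sensors(readings, threshold=3.0):
    """readings: dict sensor_id -> equal-length list of measurements.
    Flags sensors whose average deviation from the per-step median of
    all sensors is far larger than the typical sensor's deviation."""
    steps = len(next(iter(readings.values())))
    # robust consensus value at each time step (median resists a minority
    # of colluding sensors)
    consensus = [statistics.median(vals[t] for vals in readings.values())
                 for t in range(steps)]
    deviation = {sid: statistics.mean(abs(v - m) for v, m in zip(vals, consensus))
                 for sid, vals in readings.items()}
    typical = statistics.median(deviation.values()) or 1e-9  # avoid divide-by-zero
    return {sid for sid, d in deviation.items() if d > threshold * typical}

# Three honest sensors and one injecting a constant offset.
readings = {
    "s1": [10.0, 10.2, 9.9, 10.1],
    "s2": [10.1, 10.0, 10.0, 10.2],
    "s3": [9.9, 10.1, 10.1, 10.0],
    "bad": [20.0, 20.1, 19.8, 20.2],
}
print(flag_suspect_sensors(readings))  # {'bad'}
```

A median-based consensus like this breaks down once a majority of sensors collude, which is one motivation for the richer characterisation techniques the paper develops.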
Vieira Steiner R, Lupu EC, 2016, Attestation in Wireless Sensor Networks: a Survey, ACM Computing Surveys, Vol: 49, ISSN: 1557-7341
Attestation is a mechanism used by a trusted entity to validate the software integrity of an untrusted platform. Over the past few years, several attestation techniques have been proposed. While they all use variants of a challenge-response protocol, they make different assumptions about what an attacker can and cannot do. Thus, they propose intrinsically divergent validation approaches. We survey in this article the different approaches to attestation, focusing in particular on those aimed at Wireless Sensor Networks. We discuss the motivations, challenges, assumptions, and attacks of each approach. We then organise them into a taxonomy and discuss the state of the art, carefully analysing the advantages and disadvantages of each proposal. We also point towards the open research problems and give directions on how to address them.
Spanaki K, Adams R, Mulligan C, et al., 2016, A Research Agenda on Data Supply Chains (DSC), British Academy of Management (BAM) Conference
Competition among organizations supports initiatives and collaborative use of data while creating value based on the strategy and best performance of each data supply chain. Supporting this direction, and building on the theoretical background of the supply chain, we propose the Data Supply Chain (DSC) as a novel concept to aid investigations for data-driven collaboration impacting organizational performance. In this study we initially propose a definition for the DSC paying particular attention to the need for collaboration for the supply chains of data. Furthermore, we develop a conceptual model of DSC collaboration coupling theoretical background of strategy and operations literature including the resource-based view (RBV), supply chain management (SCM) and collaboration (SCC). Finally, we set propositions and a future research agenda including testing and validating the model fit.
Sgandurra D, Karafili E, Lupu EC, 2016, Formalizing Threat Models for Virtualized Systems, Data and Applications Security and Privacy (DBSec 2016), Publisher: Springer International Publishing, Pages: 251-267, ISSN: 0302-9743
We propose a framework, called FATHoM (FormAlizing THreat Models), to define threat models for virtualized systems. For each component of a virtualized system, we specify a set of security properties that defines its control responsibility, its vulnerability and protection states. Relations are used to represent how assumptions made about a component’s security state restrict the assumptions that can be made on the other components. FATHoM includes a set of rules to compute the derived security states from the assumptions and the components’ relations. A further set of relations and rules is used to define how to protect the derived vulnerable components. The resulting system is then analysed, among others, for consistency of the threat model. We have developed a tool that implements FATHoM, and have validated it with use-cases adapted from the literature.
Spanaki K, Adams R, Mulligan C, et al., 2016, Data Supply Chain (DSC): development and validation of a measurement instrument, 23rd EurOMA Conference
The volume and availability of data produced and affordably stored has become an important new resource for building organizational competitive advantage. Reflecting this, and expanding the concept of the supply chain, we propose the Data Supply Chain (DSC) as a novel concept to aid investigations into how the interconnected data characteristics relate to and impact organizational performance. Initially, we define the concept and develop a research agenda on DSC coupling theoretical background of strategy and operations literature. Along with the conceptualization, we develop a set of propositions and make suggestions for future research including testing and validating the model fit.
Sgandurra D, Lupu E, 2016, Evolution of attacks, threat models, and solutions for virtualized systems, ACM Computing Surveys, Vol: 48, ISSN: 1557-7341
Virtualization technology enables Cloud providers to efficiently use their computing services and resources. Even if the benefits in terms of performance, maintenance, and cost are evident, however, virtualization has also been exploited by attackers to devise new ways to compromise a system. To address these problems, research security solutions have evolved considerably over the years to cope with new attacks and threat models. In this work, we review the protection strategies proposed in the literature and show how some of the solutions have been invalidated by new attacks, or threat models, that were previously not considered. The goal is to show the evolution of the threats, and of the related security and trust assumptions, in virtualized systems that have given rise to complex threat models and the corresponding sophistication of protection strategies to deal with such attacks. We also categorize threat models, security and trust assumptions, and attacks against a virtualized system at the different layers—in particular, hardware, virtualization, OS, and application.
Illiano VP, Lupu EC, 2015, Detecting Malicious Data Injections in Wireless Sensor Networks: aSurvey, ACM Computing Surveys, Vol: 48, ISSN: 1557-7341
Wireless Sensor Networks are widely advocated to monitor environmental parameters, structural integrity of the built environment and use of urban spaces, services and utilities. However, embedded sensors are vulnerable to compromise by external actors through malware but also through their wireless and physical interfaces. Compromised sensors can be made to report false measurements with the aim to produce inappropriate and potentially dangerous responses. Such malicious data injections can be particularly difficult to detect if multiple sensors have been compromised as they could emulate plausible sensor behaviour such as failures or detection of events where none occur. This survey reviews the related work on malicious data injection in wireless sensor networks, derives general principles and a classification of approaches within this domain, compares related studies and identifies areas that require further investigation.
Illiano V, Lupu E, 2015, Detecting Malicious Data Injections in Event Detection Wireless Sensor Networks, IEEE Transactions on Network and Service Management, Vol: 12, Pages: 496-510, ISSN: 1932-4537
Wireless sensor networks (WSNs) are vulnerable and can be maliciously compromised, either physically or remotely, with potentially devastating effects. When sensor networks are used to detect the occurrence of events such as fires, intruders, or heart attacks, malicious data can be injected to create fake events, and thus trigger an undesired response, or to mask the occurrence of actual events. We propose a novel algorithm to identify malicious data injections and build measurement estimates that are resistant to several compromised sensors even when they collude in the attack. We also propose a methodology to apply this algorithm in different application contexts and evaluate its results on three different datasets drawn from distinct WSN deployments. This leads us to identify different tradeoffs in the design of such algorithms and how they are influenced by the application context.
Schaeffer-Filho A, Lupu EC, Sloman MS, 2015, Federating Policy-Driven Autonomous Systems: Interaction Specification and Management Patterns, Journal of Network and Systems Management, Vol: 23, Pages: 753-793
Ubiquitous systems and applications involve interactions between multiple autonomous entities—for example, robots in a mobile ad-hoc network collaborating to achieve a goal, communications between teams of emergency workers involved in disaster relief operations or interactions between patients’ and healthcare workers’ mobile devices. We have previously proposed the Self-Managed Cell (SMC) as an architectural pattern for managing autonomous ubiquitous systems that comprise both hardware and software components and that implement policy-based adaptation strategies. We have also shown how basic management interactions between autonomous SMCs can be realised through exchanges of notifications and policies, to effectively program management and context-aware adaptations. We present here how autonomous SMCs can be composed and federated into complex structures through the systematic composition of interaction patterns. By composing simpler abstractions as building blocks of more complex interactions it is possible to leverage commonalities across the structural, control and communication views to manage a broad variety of composite autonomous systems including peer-to-peer collaborations, federations and aggregations with varying degrees of devolution of control. Although the approach is more broadly applicable, we focus on systems where declarative policies are used to specify adaptation and on context-aware ubiquitous systems that present some degree of autonomy in the physical world, such as body sensor networks and autonomous vehicles. Finally, we present a formalisation of our model that allows a rigorous verification of the properties satisfied by the SMC interactions before policies are deployed in physical devices.
Lupu EC, Sgandurra D, Di Cerbo F, et al., 2015, Sharing Data Through Confidential Clouds: An Architectural Perspective, 37th International Conference on Software Engineering (ICSE 2015), Publisher: IEEE
Cloud and mobile are two major computing paradigms that are rapidly converging. However, these models still lack a way to manage the dissemination and control of personal and business-related data. To this end, we propose a framework to control the sharing, dissemination and usage of data based on mutually agreed Data Sharing Agreements (DSAs). These agreements are enforced uniformly, and end-to-end, both on Cloud and mobile platforms, and may reflect legal, contractual or user-defined preferences. We introduce an abstraction layer that makes available the enforcement functionality across different types of nodes whilst hiding the distribution of components and platform specifics. We also discuss a set of different types of nodes that may run such a layer.
Lupu EC, Rodrigues P, Kramer J, 2015, Compositional Reliability Analysis for Probabilistic Component Automata, 37th International Workshop on Modeling in Software Engineering (ICSE 15), Publisher: Association for Computing Machinery/IEEE
In this paper we propose a modelling formalism, Probabilistic Component Automata (PCA), as a probabilistic extension to Interface Automata to represent the probabilistic behaviour of component-based systems. The aim is to support composition of component-based models for both behaviour and non-functional properties such as reliability. We show how additional primitives for modelling failure scenarios, failure handling and failure propagation, as well as other algebraic operators, can be combined with models of the system architecture to automatically construct a system model by composing models of its subcomponents. The approach is supported by the tool LTSA-PCA, an extension of LTSA, which generates a composite DTMC model. The reliability of a particular system configuration can then be automatically analysed based on the corresponding composite model using the PRISM model checker. This approach facilitates configurability and adaptation in which the software configuration of components and the associated composition of component models are changed at run time.
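As a rough, hypothetical illustration of the kind of analysis a composite DTMC enables (the paper uses LTSA-PCA and the PRISM model checker; this hand-rolled sketch only handles acyclic models): the reliability of a configuration is the probability of reaching the success state rather than a failure state.

```python
def reach_success(transitions, state, memo=None):
    """Probability of reaching 'SUCCESS' from `state` in an acyclic DTMC.
    transitions: dict state -> list of (probability, next_state);
    'SUCCESS' and 'FAIL' are absorbing."""
    if state == "SUCCESS":
        return 1.0
    if state == "FAIL":
        return 0.0
    if memo is None:
        memo = {}
    if state not in memo:
        # law of total probability over the outgoing transitions
        memo[state] = sum(p * reach_success(transitions, nxt, memo)
                          for p, nxt in transitions[state])
    return memo[state]

# Hypothetical two-component pipeline: each component either completes or
# fails; the composite reliability is the product along the success path.
model = {
    "start": [(1.0, "compA")],
    "compA": [(0.99, "compB"), (0.01, "FAIL")],
    "compB": [(0.95, "SUCCESS"), (0.05, "FAIL")],
}
print(reach_success(model, "start"))  # ~0.9405 = 0.99 * 0.95
```

Real models with loops (retries, repair) need a linear-equation solve or a model checker such as PRISM rather than this simple recursion.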
Lupu EC, Rodrigues P, Kramer J, 2015, On Re-Assembling Self-Managed Components, 2015 IFIP/IEEE International Symposium on Integrated Network Management (IM 2015), Publisher: IEEE
Self-managed systems need to adapt to changes in requirements and in operational conditions. New components or services may become available, others may become unreliable or fail. Non-functional aspects, such as reliability or other quality-of-service parameters, usually drive the selection of new architectural configurations. However, in existing approaches, the link between non-functional aspects and software models is established through manual annotations that require human intervention on each re-configuration, and adaptation is enacted through fixed rules that require anticipation of all possible changes. We propose here a methodology to automatically re-assemble services and component-based applications to preserve their reliability. To achieve this we define architectural and behavioural models that are composable, account for non-functional aspects and correspond closely to the implementation. Our approach enables autonomous components to locally adapt and control their internal configuration whilst exposing interface models to upstream components.
Garcia-Alfaro J, Herrera-Joancomartí J, Lupu E, et al., 2015, Data privacy management, autonomous spontaneous security, and security assurance, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), Vol: 8872, ISSN: 0302-9743
Lupu E, Posegga J, 2015, Foreword from the SETOP 2014 program chairs, ISBN: 9783319170152
Payton J, Labrador M, Silverston T, et al., 2014, CROWDSENSING'14: The first international workshop on crowdsensing methods, techniques, and applications, 2014 - Welcome and committees, 2014 IEEE International Conference on Pervasive Computing and Communication Workshops, PERCOM WORKSHOPS 2014
With the emergence of ubiquitous computing, innovations in mobile phones are increasingly changing the way users lead their lives. To make mobile devices adaptive and able to autonomously respond to changes in user behaviours, machine learning techniques can be deployed to learn behaviour from empirical data. Learning outcomes should be rule-based enforcement policies that can pervasively manage the devices, and at the same time facilitate user validation when and if required. In this chapter we demonstrate the feasibility of non-monotonic Inductive Logic Programming (ILP) in the automated task of extraction of user behaviour rules through data acquisition in the domain of mobile phones. This is a challenging task as real mobile datasets are highly noisy and unevenly distributed. We present two applications, one based on an existing dataset collected as part of the Reality Mining group, and the other generated by a mobile phone application called ULearn that we have developed to facilitate a realistic evaluation of the accuracy of the learning outcome.
Rivera-Rubio J, Alexiou I, Bharath A, et al., 2014, Associating locations from wearable cameras
In this paper, we address a specific use-case of wearable or hand-held camera technology: indoor navigation. We explore the possibility of crowd-sourcing navigational data in the form of video sequences that are captured from wearable or hand-held cameras. Without using geometric inference techniques (such as SLAM), we test video data for navigational content, and algorithms for extracting that content. We do not include tracking in this evaluation; our purpose is to explore the hypothesis that visual content, on its own, contains cues that can be mined to infer a person's location. We test this hypothesis through estimating positional error distributions inferred during one journey with respect to other journeys along the same approximate path. The contributions of this work are threefold. First, we propose alternative methods for video feature extraction that identify candidate matches between query sequences and a database of sequences from journeys made at different times. Secondly, we suggest an evaluation methodology that estimates the error distributions in inferred position with respect to a ground truth. We assess and compare standard approaches from the field of image retrieval, such as SIFT and HOG3D, to establish associations between frames. The final contribution is a publicly available database comprising over 90,000 frames of video-sequences with positional ground-truth. The data was acquired along more than 3 km worth of indoor journeys with a hand-held device (Nexus 4) and a wearable device (Google Glass).
Dickens L, Lupu EC, 2014, On Efficient Meta-Data Collection for Crowdsensing, First International Workshop on Crowdsensing Methods, Techniques, and Applications, Publisher: IEEE, Pages: 62-67
Participatory sensing applications have an on-going requirement to turn raw data into useful knowledge, and to achieve this, many rely on prompt human generated meta-data to support and/or validate the primary data payload. These human contributions are inherently error prone and subject to bias and inaccuracies, so multiple overlapping labels are needed to cross-validate one another. While probabilistic inference can be used to reduce the required label overlap, there is still a need to minimise the overhead and improve the accuracy of timely label collection. We present three general algorithms for efficient human meta-data collection, which support different constraints on how the central authority collects contributions, and three methods to intelligently pair annotators with tasks based on formal information theoretic principles. We test our methods’ performance on challenging synthetic data-sets, based on real data, and show that our algorithms can significantly lower the cost and improve the accuracy of human meta-data labelling, with little or no impact on time.
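A minimal sketch of the information-theoretic pairing idea (the formulation below, binary tasks with a single accuracy parameter per annotator, is a simplification invented for illustration, not the paper's model): assign each annotator the task whose label is expected to reduce uncertainty the most.

```python
import math

def entropy(p):
    """Binary entropy in bits."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def expected_info_gain(p, acc):
    """Expected entropy reduction of a binary task belief p from one
    label by an annotator who is correct with probability acc."""
    q = p * acc + (1 - p) * (1 - acc)      # P(annotator labels "positive")
    post_pos = p * acc / q if q > 0 else p           # Bayes update, label +
    post_neg = p * (1 - acc) / (1 - q) if q < 1 else p   # Bayes update, label -
    return entropy(p) - (q * entropy(post_pos) + (1 - q) * entropy(post_neg))

def assign(task_beliefs, annotator_accuracy):
    """Greedily pair each annotator with their most informative task."""
    return {ann: max(task_beliefs,
                     key=lambda t: expected_info_gain(task_beliefs[t], acc))
            for ann, acc in annotator_accuracy.items()}

tasks = {"t1": 0.5, "t2": 0.95}            # current belief each task is "positive"
annotators = {"alice": 0.9, "bob": 0.6}    # estimated accuracies
print(assign(tasks, annotators))           # both sent to the uncertain task t1
```

Note that a random guesser (acc = 0.5) yields zero expected gain, and near-resolved tasks attract few labels, which is the intuition behind reducing label overlap without losing accuracy.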
Rodrigues P, Lupu EC, Kramer JK, 2014, LTSA-PCA: Tool Support for Compositional Reliability Analysis, ICSE Companion 2014, Publisher: ACM, Pages: 548-551