Imperial College London

Dr Erisa Karafili

Faculty of Engineering, Department of Computing

Marie Curie Individual Fellow
 
 
 

Contact

 

e.karafili

 
 

Location

 

502 Huxley Building, South Kensington Campus


Summary

 

Publications


24 results found

Karafili E, Spanaki K, Lupu E, 2019, Access Control and Quality Attributes of Open Data: Applications and Techniques, Workshop on Quality of Open Data, Publisher: Springer Verlag (Germany), Pages: 603-614, ISSN: 1865-1348

Open Datasets provide one of the most popular ways to acquire insight and information about individuals, organizations and multiple streams of knowledge. Exploring Open Datasets by applying comprehensive and rigorous techniques for data processing can provide the ground for innovation and value for everyone if the data are handled in a legal and controlled way. In our study, we propose an argumentation and abductive reasoning approach for data processing which is based on the data quality background. Explicitly, we draw on the literature of data management and quality for the attributes of the data, and we extend this background through the development of our techniques. Our aim is to provide herein a brief overview of the data quality aspects, as well as indicative applications and examples of our approach. Our overall objective is to bring serious intent and propose a structured way for access control and processing of open data with a focus on the data quality aspects.
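
As a purely illustrative sketch (not the argumentation or abductive formalism of the paper), the snippet below shows how data-quality attributes such as accuracy or completeness could gate access to an open dataset; the attribute names, thresholds, and purposes are invented for the example.

```python
# Illustrative sketch only: gate access to an open dataset on data-quality
# attributes. Attribute names, thresholds, and values are invented examples,
# not the formalism used in the paper.

QUALITY_REQUIREMENTS = {
    "research":   {"accuracy": 0.7, "completeness": 0.6},
    "commercial": {"accuracy": 0.9, "completeness": 0.8, "timeliness": 0.8},
}

def grant_access(purpose: str, dataset_quality: dict) -> bool:
    """Grant access only if the dataset meets every quality threshold
    required for the declared purpose."""
    required = QUALITY_REQUIREMENTS.get(purpose)
    if required is None:
        return False  # unknown purposes are denied by default
    return all(dataset_quality.get(attr, 0.0) >= threshold
               for attr, threshold in required.items())

open_dataset = {"accuracy": 0.85, "completeness": 0.75, "timeliness": 0.5}
print(grant_access("research", open_dataset))    # True
print(grant_access("commercial", open_dataset))  # False (accuracy, timeliness below threshold)
```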

Conference paper

Karafili E, Sgandurra D, Lupu E, A logic-based reasoner for discovering authentication vulnerabilities between interconnected accounts, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, ISSN: 0302-9743

With users being more reliant on online services for their daily activities, there is an increasing risk for them to be threatened by cyber-attacks harvesting their personal information or banking details. These attacks are often facilitated by the strong interconnectivity that exists between online accounts, in particular due to the presence of shared (e.g., replicated) pieces of user information across different accounts. In addition, a significant proportion of users employ pieces of information, e.g. used to recover access to an account, that are easily obtainable from their social network accounts, and hence are vulnerable to correlation attacks, where a malicious attacker is either able to perform password reset attacks or take full control of user accounts. This paper proposes the use of verification techniques to analyse the possible vulnerabilities that arise from shared pieces of information among interconnected online accounts. Our primary contributions include a logic-based reasoner that is able to discover vulnerable online accounts, and a corresponding tool that provides modelling of user accounts, their interconnections, and vulnerabilities. Finally, the tool allows users to perform security checks of their online accounts and suggests possible countermeasures to reduce the risk of compromise.
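
The following is a minimal sketch of the underlying idea, not the authors' logic-based reasoner or tool: accounts are modelled with the information needed to recover them and the information they reveal once taken over, and a fixpoint computation finds which accounts fall to an attacker starting from publicly obtainable data. All account names and attributes are invented for the example.

```python
# Minimal sketch of the idea behind the reasoner (not the authors' logic-based
# tool): accounts are recoverable through pieces of information, and taking
# over an account reveals the information stored in it. Account names and
# recovery attributes below are invented examples.

ACCOUNTS = {
    "social":  {"recovered_by": {"pet_name"},                    "reveals": {"email_address", "birthday"}},
    "email":   {"recovered_by": {"email_address", "birthday"},   "reveals": {"phone_number"}},
    "banking": {"recovered_by": {"email_address", "phone_number"}, "reveals": set()},
}

def compromised_accounts(attacker_knowledge: set) -> set:
    """Fixpoint: an account falls if the attacker holds all the information
    needed to trigger its recovery; each fallen account leaks what it stores."""
    known, fallen = set(attacker_knowledge), set()
    changed = True
    while changed:
        changed = False
        for name, acc in ACCOUNTS.items():
            if name not in fallen and acc["recovered_by"] <= known:
                fallen.add(name)
                known |= acc["reveals"]
                changed = True
    return fallen

# The pet's name is public on the social network profile:
print(sorted(compromised_accounts({"pet_name"})))  # ['banking', 'email', 'social']
```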

Conference paper

Cullen A, Karafili E, Pilgrim A, Williams C, Lupu E et al., Policy support for autonomous swarms of drones, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, ISSN: 0302-9743

In recent years drones have become more widely used in military and non-military applications. Automation of these drones will become more important as their use increases. Individual drones acting autonomously will be able to achieve some tasks, but swarms of autonomous drones working together will be able to achieve much more complex tasks and be able to better adapt to changing environments. In this paper we describe an example scenario involving a swarm of drones from a military coalition and civil/humanitarian organisations that are working collaboratively to monitor areas at risk of flooding. We provide a definition of a swarm and how they can operate by exchanging messages. We define a flexible set of policies that are applicable to our scenario that can be easily extended to other scenarios or policy paradigms. These policies ensure that the swarms of drones behave as expected (e.g., for safety and security). Finally we discuss the challenges and limitations around policies for autonomous swarms and how new research, such as generative policies, can aid in solving these limitations.

Conference paper

Karafili E, Wang L, Kakas A, Lupu E et al., 2018, Helping forensic analysts to attribute cyber-attacks: an argumentation-based reasoner, International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2018), Publisher: Springer Verlag, Pages: 510-518, ISSN: 0302-9743

Discovering who performed a cyber-attack or from where it originated is essential in order to determine an appropriate response and future risk mitigation measures. In this work, we propose a novel argumentation-based reasoner for analyzing and attributing cyber-attacks that combines both technical and social evidence. Our reasoner helps the digital forensics analyst during the analysis of the forensic evidence by providing the analyst with the possible culprits of the attack, new derived evidence, hints about missing evidence, and insights about other paths of investigation. The proposed reasoner is flexible, deals with conflicting and incomplete evidence, and was tested on real cyber-attack cases.
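
As an illustration of this style of reasoning (not the authors' reasoner or its rule language), the sketch below evaluates a handful of invented attribution arguments under a simplified grounded argumentation semantics: an argument is accepted once all of its attackers have been defeated.

```python
# Illustrative sketch of argumentation-style reasoning over attribution
# evidence (not the authors' reasoner). Arguments and attacks are invented.

ARGUMENTS = {
    "A1": "malware shares code with group X's toolkit => culprit is X",
    "A2": "attack infrastructure registered in country Y => culprit is Y",
    "A3": "the code similarity comes from a leaked toolkit, so A1 is unreliable",
    "A4": "the infrastructure was rented through a proxy, so A2 is unreliable",
    "A5": "the proxy claim is unsupported by the logs",
}
ATTACKS = {("A3", "A1"), ("A4", "A2"), ("A5", "A4")}

def grounded_extension(arguments, attacks):
    """Iteratively accept every argument whose attackers have all been
    defeated by already-accepted arguments (simplified grounded semantics)."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for arg in arguments:
            if arg in accepted or arg in defeated:
                continue
            attackers = {a for a, b in attacks if b == arg}
            if attackers <= defeated:                       # all attackers are out
                accepted.add(arg)
                defeated |= {b for a, b in attacks if a == arg}
                changed = True
    return accepted

print(sorted(grounded_extension(ARGUMENTS, ATTACKS)))  # ['A2', 'A3', 'A5']
```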

Conference paper

Karafili E, Cristani M, Viganò L, A Formal Approach to Analyzing Cyber-Forensics Evidence, European Symposium on Research in Computer Security (ESORICS) 2018, Publisher: Springer Verlag, ISSN: 0302-9743

The frequency and harmfulness of cyber-attacks are increasing every day, and with them also the amount of data that the cyber-forensics analysts need to collect and analyze. In this paper, we propose a formal analysis process that allows an analyst to filter the enormous amount of evidence collected and either identify crucial information about the attack (e.g., when it occurred, its culprit, its target) or, at the very least, perform a pre-analysis to reduce the complexity of the problem in order to then draw conclusions more swiftly and efficiently. We introduce the Evidence Logic EL for representing simple and derived pieces of evidence from different sources. We propose a procedure, based on monotonic reasoning, that rewrites the pieces of evidence with the use of tableau rules, based on relations of trust between sources and the reasoning behind the derived evidence, and yields a consistent set of pieces of evidence. As proof of concept, we apply our analysis process to a concrete cyber-forensics case study.
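
A minimal sketch of the pre-analysis idea, assuming a simple numeric trust ranking between sources (the Evidence Logic EL and its tableau rules are much richer than this): when two sources report conflicting values for the same fact, the value from the more trusted source is kept. Sources, facts, and ranks are invented for the example.

```python
# Illustrative sketch of the pre-analysis idea (not the Evidence Logic EL
# calculus itself): keep, for each statement, the value reported by the most
# trusted source. Sources, statements, and the trust ranking are invented.

TRUST_RANK = {"firewall_logs": 3, "ids_alerts": 2, "anonymous_tip": 1}

evidence = [
    ("firewall_logs", "attack_started", "2018-03-02T11:40"),
    ("anonymous_tip", "attack_started", "2018-03-05T09:00"),
    ("ids_alerts",    "entry_point",    "mail server"),
]

def consistent_evidence(items):
    """For each statement key, keep only the value from the most trusted
    source; this removes conflicts before any deeper analysis."""
    best = {}
    for source, key, value in items:
        rank = TRUST_RANK.get(source, 0)
        if key not in best or rank > best[key][0]:
            best[key] = (rank, source, value)
    return {key: (source, value) for key, (rank, source, value) in best.items()}

print(consistent_evidence(evidence))
# {'attack_started': ('firewall_logs', '2018-03-02T11:40'),
#  'entry_point': ('ids_alerts', 'mail server')}
```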

Conference paper

Arunkumar S, Pipes S, Makaya C, Bertino E, Karafili E, Lupu E, Williams C et al., 2018, Next generation firewalls for dynamic coalitions, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Firewalls represent a critical security building block for networks as they monitor and control incoming and outgoing network traffic based on the enforcement of predetermined security rules, referred to as firewall rules. Firewalls are constantly being improved to enhance network security. From being a simple filtering device, the firewall has evolved to operate in conjunction with intrusion detection and prevention systems. This paper reviews the existing firewall policies and assesses their application in highly dynamic networks such as coalition networks. The paper also describes the need for next-generation firewall policies and how the generative policy model can be leveraged.

Conference paper

Karafili E, Lupu E, Cullen A, Williams B, Arunkumar S, Calo S et al., 2018, Improving data sharing in data rich environments, 1st IEEE Big Data International Workshop on Policy-based Autonomic Data Governance, IEEE BigData, Publisher: IEEE

The increasing use of big data comes along with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access to the data. Depending on the type of user, different qualities and parts of the data are shared. We introduce an alteration mechanism, more precisely a restriction one, based on a policy analysis language. The alteration reflects the level of trust and the relations the users have, and is represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism with a military use case, where different parties are involved during the missions and have different relations of trust and partnership.
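
As an illustrative sketch only (not the policy analysis language used in the paper), the snippet below restricts which fields of a record are released depending on the requester's relation as recorded in a data sharing agreement; field names, relations, and values are invented.

```python
# Illustrative sketch of a restriction-style alteration (not the paper's
# policy analysis language): the fields released depend on the trust relation
# with the requesting party, as recorded in the data sharing agreement.
# Field names, relations, and values are invented examples.

DSA_VISIBLE_FIELDS = {
    "coalition_partner": {"unit_id", "position", "mission_status"},
    "external_ngo":      {"mission_status"},
}

def restrict(record: dict, requester_relation: str) -> dict:
    """Return only the fields the data sharing agreement allows for this
    relation; everything else is withheld."""
    allowed = DSA_VISIBLE_FIELDS.get(requester_relation, set())
    return {field: value for field, value in record.items() if field in allowed}

record = {"unit_id": "U-17", "position": "51.49N 0.17W",
          "mission_status": "en route", "casualties": 0}
print(restrict(record, "external_ngo"))       # {'mission_status': 'en route'}
print(restrict(record, "coalition_partner"))  # unit_id, position, mission_status
```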

Conference paper

Karafili E, Spanaki K, Lupu E, 2017, An Argumentation Reasoning Approach for Data Processing, Computers in Industry, Vol: 94, Pages: 52-61, ISSN: 0166-3615

Data-intensive environments enable us to capture information and knowledge about the physical surroundings, to optimise our resources, enjoy personalised services and gain unprecedented insights into our lives. However, to obtain these benefits, the data must be generated and collected, and the insight extracted from them must be exploited. Following an argumentation reasoning approach for data processing and building on the theoretical background of data management, we highlight the importance of data sharing agreements (DSAs) and quality attributes for the proposed data processing mechanism. The proposed approach takes into account the DSAs and usage policies as well as the quality attributes of the data, which existing methods in the data processing and management field have largely neglected. Previous research has provided techniques in this direction; however, more intensive research on processing techniques is needed to enhance the value created from the data, and new strategies should be formed around the data generated daily from various devices and sources.

Journal article

Cullen A, Williams B, Bertino E, Arunkumar S, Karafili E, Lupu E et al., 2017, Mission support for drones: a policy based approach, International Workshop on Micro Aerial Vehicle Networks, Systems, and Applications (DRONET 17), Publisher: ACM, Pages: 7-12

We examine the impact of increasing autonomy on the use of airborne drones in joint operations by collaborative parties. As the degree of automation employed increases towards the level implied by the term ‘autonomous’, it becomes apparent that existing control mechanisms are insufficiently flexible. Using an architecture introduced by Bertino et al. in [1] and Verma et al. in [2], we consider the use of dynamic policy modification as a means to adjust to rapidly evolving scenarios. We show mechanisms which allow this approach to improve the effectiveness of operations without compromise to security or safety.

Conference paper

Karafili E, Lupu E, 2017, Enabling Data Sharing in Contextual Environments: Policy Representation and Analysis, ACM Symposium on Access Control Models and Technologies (SACMAT), Publisher: ACM, Pages: 231-238

Internet of Things environments enable us to capture more and more data about the physical environment we live in and about ourselves. The data enable us to optimise resources, personalise services and offer unprecedented insights into our lives. However, to achieve these insights, data need to be shared (and sometimes sold) between organisations, imposing rights and obligations upon the sharing parties and in accordance with multiple layers of sometimes conflicting legislation at international, national and organisational levels. In this work, we show how such rules can be captured in a formal representation called "Data Sharing Agreements". We introduce the use of abductive reasoning and argumentation based techniques to work with context dependent rules, detect inconsistencies between them, and resolve the inconsistencies by assigning priorities to the rules. We show how, through the use of argumentation based techniques, use cases taken from real-life applications are handled flexibly, addressing trade-offs between confidentiality, privacy, availability and safety.
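
The sketch below illustrates the flavour of context-dependent conflict detection and priority-based resolution, without the abductive or argumentation machinery of the paper; rule names, contexts, and priorities are invented for the example.

```python
# Illustrative sketch of context-dependent conflict resolution (not the
# paper's abductive/argumentation machinery): rules permit or deny sharing
# in a context, conflicts are detected, and the higher-priority rule wins.
# Rule names, contexts, and priorities are invented examples.

RULES = [
    # (name, effect, applies_in_context, priority)
    ("share_for_research", "permit", lambda c: c["purpose"] == "research", 1),
    ("protect_minors",     "deny",   lambda c: c["subject_age"] < 18,      3),
    ("emergency_override", "permit", lambda c: c["emergency"],             2),
]

def decide(context: dict) -> str:
    applicable = [(name, effect, prio) for name, effect, cond, prio in RULES
                  if cond(context)]
    effects = {effect for _, effect, _ in applicable}
    if len(effects) > 1:
        # Conflict detected: resolve by the highest-priority applicable rule.
        applicable.sort(key=lambda r: r[2], reverse=True)
        name, effect, _ = applicable[0]
        return f"{effect} (conflict resolved by priority of '{name}')"
    return effects.pop() if effects else "deny (no applicable rule)"

print(decide({"purpose": "research", "subject_age": 15, "emergency": False}))
# deny (conflict resolved by priority of 'protect_minors')
```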

Conference paper

Karafili E, Lupu E, Arunkumar S, Bertino E et al., Argumentation-based policy analysis for drone systems, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

The use of drone systems is increasing, especially in dangerous environments where manned operations are too risky. Different entities are involved in drone systems’ missions, each bringing its own wide variety of specifications. The behaviour of the system is described by its set of policies, which should satisfy the requirements and specifications of the different entities and the system itself. Deciding the policies that describe the actions to be taken is not trivial, as the different requirements and specifications can lead to conflicting actions. We introduce an argumentation-based policy analysis that captures conflicts for which properties have been specified. Our solution allows different rules to take priority in different contexts. We propose a decision making process that solves the detected conflicts by using a dynamic conflict resolution based on the priorities between rules. We apply our solution to two case studies where drone systems are used for military and disaster rescue operations.

Conference paper

Karafili E, Pipes S, Lupu E, Verification techniques for policy based systems, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Verification techniques are applied to policy based systems to ensure design correctness and to aid in the discovery of errors at an early stage of the development life cycle. A primary goal of policy verification is to evaluate the policy’s validity. Other analyses on policy based systems include the identification of conflicting policies and policy efficiency evaluation and improvement. In this work, we present a discussion and classification of recent research on verification techniques for policy based systems. We analyse several techniques and identify popular supporting verification tools. An evaluation of the benefits and drawbacks of the existing policy analyses is made. Some of the commonly identified problems were the significant need for computational power, the limitation of the techniques to a particular policy model, which restricts their extension to other policy models, and the lack of efficient conflict resolution methods. We use the evaluation results to discuss the further challenges and future research directions that will be faced by policy verification techniques. In particular, we discuss specific requirements concerning verification techniques for coalition policy systems and autonomous decision making.

Conference paper

Felmlee D, Lupu E, McMillan C, Karafili E, Bertino E et al., Decision-making in policy governed human-autonomous systems teams, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems but are defined differently in each case. Generally, humans have the ability to interpret the intent behind the policies and to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, more flexible policies for the spectrum of human-autonomous systems collaborations. In coalition environments this spectrum extends across the boundaries of authority in pursuit of a common coalition goal and covers collaborations between human and autonomous systems alike. In social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking the expected rewards, costs, satisfaction and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive within this context as it provides a theory with “measurable” components that can be readily integrated in machine reasoning processes.

Conference paper

Tomazzoli C, Cristani M, Karafili E, Olivieri F et al., 2017, Non-monotonic reasoning rules for energy efficiency, Journal of Ambient Intelligence and Smart Environments, Vol: 9, Pages: 345-360, ISSN: 1876-1364

Conflicting rules and rules with exceptions are very common in the natural language specifications employed to describe the behaviour of devices operating in a real-world context. This is common exactly because those specifications are processed by humans, and humans apply common sense and strategic reasoning about those rules to resolve the conflicts. In this paper, we deal with the challenge of providing, step by step, a model of energy saving rule specification and processing methods that are used to reduce the consumption of a system of devices by preventing energy waste. We argue that a very promising non-monotonic approach to such a problem can be built upon Defeasible Logic, following an approach that has shown success in the current literature on the use of this logic for conflicting rule resolution and for human–computer interaction in complex systems. Starting with rules specified at an abstract level, but compatibly with the natural aspects of such a specification (including temporal and power absorption constraints), we provide a formalism that generates the extension of a basic Defeasible Logic, which corresponds to the devices being turned on or off.
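
A minimal sketch of defeasible-style rule evaluation for device switching, assuming invented devices and rules (the paper defines a proper extension of Defeasible Logic with temporal and power-absorption constraints): a general energy-saving default is overridden by a superior exception.

```python
# Illustrative sketch of defeasible-style rules for device switching (not the
# paper's Defeasible Logic extension): a general energy-saving rule can be
# defeated by a more specific, superior exception. Devices and rules are
# invented examples.

def should_be_on(device: str, room_occupied: bool) -> bool:
    """Defeasible-style evaluation: the superior exception is checked first;
    otherwise the general energy-saving default applies."""
    # r2 (superior): devices that must run continuously stay on.
    if device in {"fridge", "freezer", "alarm_system"}:
        return True
    # r1 (defeasible default): switch appliances off in unoccupied rooms.
    if not room_occupied:
        return False
    return True

for device in ("fridge", "lamp", "tv"):
    print(device, "->", "on" if should_be_on(device, room_occupied=False) else "off")
# fridge -> on, lamp -> off, tv -> off
```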

Journal article

Karafili E, Kakas A, Spanoudakis N, Lupu E et al., Argumentation-based security for social good, AAAI Spring Symposium 2017, AI for the Social Good, Publisher: AAAI

The increase in connectivity and the impact it has on everyday life are raising new and existing security problems that are becoming important for social good. We introduce two particular problems: cyber attack attribution and regulatory data sharing. For both problems, decisions about which rules to apply should be taken under incomplete and context-dependent information. The solution we propose is based on argumentation reasoning, which is a well-suited technique for implementing decision-making mechanisms under conflicting and incomplete information. Our proposal permits us to identify the attacker of a cyber attack and to decide the regulation rule that should be used while using and sharing data. We illustrate our solution through concrete examples.

Conference paper

Sgandurra D, Karafili E, Lupu EC, 2016, Formalizing Threat Models for Virtualized Systems, Data and Applications Security and Privacy (DBSec 2016), Publisher: Springer International Publishing, Pages: 251-267, ISSN: 0302-9743

We propose a framework, called FATHoM (FormAlizing THreat Models), to define threat models for virtualized systems. For each component of a virtualized system, we specify a set of security properties that defines its control responsibility, its vulnerability state and its protection state. Relations are used to represent how assumptions made about a component’s security state restrict the assumptions that can be made on the other components. FATHoM includes a set of rules to compute the derived security states from the assumptions and the components’ relations. A further set of relations and rules is used to define how to protect the derived vulnerable components. The resulting system is then analysed, among other things, for consistency of the threat model. We have developed a tool that implements FATHoM, and have validated it with use-cases adapted from the literature.
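
As an illustration of the general idea (not FATHoM's actual rule set or security properties), the sketch below derives vulnerable components by propagating an assumed compromise along a "controls" relation until a fixpoint; component names and relations are invented.

```python
# Illustrative sketch of deriving security states from assumptions and
# component relations (not the FATHoM rules themselves): a compromised
# component makes every component it controls vulnerable. Component names,
# relations, and the initial assumption are invented examples.

CONTROLS = {                      # "controls" relation between components
    "hypervisor": {"vm_1", "vm_2"},
    "vm_1":       {"guest_app"},
    "vm_2":       set(),
    "guest_app":  set(),
}

def derive_vulnerable(assumed_compromised: set) -> set:
    """Fixpoint: anything controlled (directly or transitively) by a
    compromised component is derived to be vulnerable."""
    vulnerable = set(assumed_compromised)
    changed = True
    while changed:
        changed = False
        for comp in list(vulnerable):
            for controlled in CONTROLS.get(comp, set()):
                if controlled not in vulnerable:
                    vulnerable.add(controlled)
                    changed = True
    return vulnerable

print(sorted(derive_vulnerable({"hypervisor"})))
# ['guest_app', 'hypervisor', 'vm_1', 'vm_2']
```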

Conference paper

Cristani M, Karafili E, Olivieri F, Tomazzoli C et al., 2016, Defeasible Reasoning about Electric Consumptions, 30th IEEE International Conference on Advanced Information Networking and Applications (IEEE AINA), Publisher: IEEE, Pages: 885-892, ISSN: 1550-445X

Conference paper

Cristani M, Karafili E, Tomazzoli C, 2015, Improving Energy Saving Techniques by Ambient Intelligence Scheduling, IEEE 29th International Conference on Advanced Information Networking and Applications (IEEE AINA), Publisher: IEEE, Pages: 324-331, ISSN: 1550-445X

Conference paper

Karafili E, Nielson HR, Nielson F, 2015, How to Trust the Re-use of Data, 11th International Workshop on Security and Trust Management (STM), Publisher: Springer International Publishing AG, Pages: 72-88, ISSN: 0302-9743

Conference paper

Cristani M, Karafili E, Viganò L, 2014, Tableau systems for reasoning about risk, Journal of Ambient Intelligence and Humanized Computing, Vol: 5, Pages: 215-247, ISSN: 1868-5137

Journal article

Cristani M, Karafili E, Tomazzoli C, 2014, Energy Saving by Ambient Intelligence Techniques, International Conference on Network-Based Information Systems (NBiS), Publisher: IEEE, Pages: 157-164

Conference paper

Cristani M, Karafili E, Viganò L, 2013, A Complete Tableau Procedure for Risk Analysis, 8th International Conference on Risks and Security of Internet and Systems (CRiSIS), Publisher: IEEE, ISSN: 2151-4763

Conference paper

Cristani M, Karafili E, Viganò L, 2011, Blocking Underhand Attacks by Hidden Coalitions, 3rd International Conference on Agents and Artificial Intelligence, Publisher: INSTICC, Pages: 311-320

Conference paper

Cristani M, Karafili E, Viganò L, 2010, Blocking Underhand Attacks by Hidden Coalitions (Extended Version)

Similar to what happens between humans in the real world, in open multi-agent systems distributed over the Internet, such as online social networks or wiki technologies, agents often form coalitions by agreeing to act as a whole in order to achieve certain common goals. However, agent coalitions are not always a desirable feature of a system, as malicious or corrupt agents may collaborate in order to subvert or attack the system. In this paper, we consider the problem of hidden coalitions, whose existence and the purposes they aim to achieve are not known to the system, and which carry out so-called underhand attacks. We give a first approach to hidden coalitions by introducing a deterministic method that blocks the actions of potentially dangerous agents, i.e. possibly belonging to such coalitions. We also give a non-deterministic version of this method that blocks the smallest set of potentially dangerous agents. We calculate the computational cost of our two blocking methods, and prove their soundness and completeness.
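
A toy sketch of the blocking idea, with invented agents, capabilities, and forbidden goal (the paper's deterministic and non-deterministic methods, and their soundness and completeness proofs, are not reproduced here): an agent is flagged as potentially dangerous if it belongs to some minimal coalition whose combined capabilities cover the forbidden goal.

```python
# Toy sketch of blocking potentially dangerous agents (not the paper's
# methods). Agents, capabilities, and the forbidden goal are invented.

from itertools import combinations

AGENTS = {
    "a1": {"read_db"},
    "a2": {"export_data"},
    "a3": {"read_db", "send_external"},
    "a4": {"post_comments"},
}
FORBIDDEN_GOAL = {"read_db", "export_data", "send_external"}  # joint data exfiltration

def caps(members):
    """Combined capabilities of a coalition of agents."""
    return set().union(*(AGENTS[a] for a in members)) if members else set()

def potentially_dangerous(goal):
    """Flag every agent belonging to some minimal coalition whose combined
    capabilities cover the forbidden goal (minimal: removing any member
    breaks the coverage)."""
    dangerous = set()
    for size in range(1, len(AGENTS) + 1):
        for coalition in combinations(AGENTS, size):
            if goal <= caps(coalition) and all(
                not goal <= caps([a for a in coalition if a != m]) for m in coalition
            ):
                dangerous |= set(coalition)
    return dangerous

print(sorted(potentially_dangerous(FORBIDDEN_GOAL)))  # ['a2', 'a3']
```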

Journal article

