Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems
 
 
 

Contact

 

e.c.lupu

 
 

Location

 

564 Huxley Building, South Kensington Campus



 

Publications


233 results found

Damianou N, Dulay N, Lupu EC, Sloman MS, et al., 2000, Ponder: A Language for Specifying Security and Management Policies for Distributed Systems, The Language Specification - Version 2.2, Publisher: Imperial College, Department of Computing

Report

Dulay N, Lupu EC, Sloman MS, Damianou N, et al., 2000, Towards a Runtime Object Model for the Ponder Policy Language, 7th Workshop of the Open View University Association (OVUA 2000), Santorini, Greece

Conference paper

Lupu EC, Sloman MS, 1999, Conflicts in Policy-based Distributed Systems Management, IEEE Transactions on Software Engineering, Vol: 25, Pages: 852-869, ISSN: 0098-5589

Modern distributed systems contain a large number of objects and must be capable of evolving, without shutting down the complete system, to cater for changing requirements. There is a need for distributed, automated management agents whose behavior also has to dynamically change to reflect the evolution of the system being managed. Policies are a means of specifying and influencing management behavior within a distributed system, without coding the behavior into the manager agents. Our approach is aimed at specifying implementable policies, although policies may be initially specified at the organizational level (cf. goals) and then refined to implementable actions. We are concerned with two types of policies. Authorization policies specify what activities a manager is permitted or forbidden to do to a set of target objects and are similar to security access-control policies. Obligation policies specify what activities a manager must or must not do to a set of target objects and essentially define the duties of a manager. Conflicts can arise in the set of policies. For example, an obligation policy may define an activity which is forbidden by a negative authorization policy; there may be two authorization policies which permit and forbid an activity, or two policies permitting the same manager to sign checks and approve payments may conflict with an external principle of separation of duties. Conflicts may also arise during the refinement process between the high-level goals and the implementable policies. The system may have to cater for conflicts such as exceptions to normal authorization policies. This paper reviews policy conflicts, focusing on the problems of conflict detection and resolution. We discuss the various precedence relationships that can be established between policies in order to allow inconsistent policies to coexist within the system and present a conflict analysis tool which forms part of a role-based management framework.

Journal article

Moffett JD, Lupu EC, 1999, The uses of role hierarchies in access control, 4th ACM Workshop on Role-Based Access Control, Publisher: ASSOC COMPUTING MACHINERY, Pages: 153-160

Conference paper

Lupu EC, Sloman MS, Milosevic Z, 1999, Use of Roles and Policies for Specifying and Managing a Virtual Enterprise, 9th International Workshop on Research Issues on Data Engineering: Information Technology for Virtual Enterprises (RIDE - VE '99)

Conference paper

Sloman M, Mazumdar S, Lupu EC, 1999, Proceedings of the Sixth IFIP/IEEE International Symposium on Integrated Network Management, Publisher: IEEE

Book


Sloman MS, Lupu EC, 1999, Policy Specification for Programmable Networks, Proceedings of First International Working Conference on Active Networks (IWAN'99), Berlin, Publisher: Springer Verlag, Pages: 73-84

Conference paper

Eisenbach S, Meidl K, Rizkallah H, Lupu EC, et al., 1999, Can Corba save a fringe language from becoming obsolete?, DAIS'99 Second IFIP WG 6.1 International Working Conference on Distributed Applications and Interoperable Systems, Helsinki

Conference paper

Lupu E, Sloman M, 1997, Conflict analysis for management policies, 5th IFIP/IEEE International Symposium on Integrated Network Management (IM'97), Publisher: Chapman-Hall, Pages: 430-443

Policies are a means of influencing management behaviour within a distributed system, without coding the behaviour into the managers. Authorisation policies specify what activities a manager is permitted or forbidden to do to a set of target objects and obligation policies specify what activities a manager must or must not do to a set of target objects. Conflicts can arise in the set of policies. For example an obligation policy may define an activity which is forbidden by a negative authorisation policy; there may be two authorisation policies which permit and forbid an activity or two policies permitting the same manager to sign cheques and approve payments may conflict with an external principle of separation of duties. This paper reviews the policy conflicts which may arise in a large-scale distributed system and describes a conflict analysis tool which forms part of a Role Based Management framework. Management policies are specified with regard to domains of objects and conflicts potentially arise when there are overlaps between domains. It is not desirable or possible to prevent overlaps and they do not always result in conflicts. We discuss the various techniques which can be used to determine which conflicts are important and so should be indicated to the user and which potential conflicts should be ignored because of precedence relationships between the policies. This reduces the set of potential conflicts that a user would have to resolve and avoids undesired changes of the policy specification or domain membership.

Conference paper

Lupu EC, Sloman MS, 1997, Reconciling role based management and role based access control, RBAC '97 Second Role Based Access Control Workshop, George Mason University, Virginia, Pages: 135-141

Conference paper

Lupu E, Sloman M, 1997, A policy based role object model, 1st International Enterprise Distributed Object Computing Workshop (EDOC 97), Publisher: IEEE, COMPUTER SOC PRESS, Pages: 36-47

Enterprise roles define the duties and responsibilities of the individuals who are assigned to them. This paper introduces a framework for the management of large distributed systems which makes use of the concepts developed in role theory. Our concept of a role groups the specifications of management policies which define the rights and duties corresponding to that role. Individuals may then be assigned to or withdrawn from a role, to enable rapid and flexible organisational change, without altering the specification of the policies. We extend this role concept to include relationships as means of specifying required interactions, duties and rights between related roles. Organisations may contain large numbers of similar roles with multiple relationships between them, so there is a need for reuse of specifications. Role and relationship classes permit multiple instantiation, and inheritance is used for incremental extension of the organisational structure with minimal specification effort. We also briefly examine consistency and auditing issues related to this role framework.

Conference paper

Lupu EC, Sloman MS, Yialelis N, 1997, Policy based roles for distributed systems security, HP-Openview University Association (HP-OVUA) Plenary Workshop (Madrid)

Conference paper

Sloman M, Lupu E, 1997, Towards a Role-based Framework for Distributed Systems Management, Journal of Network and Systems Management, Vol: 5, Pages: 5-30, ISSN: 1064-7570

Journal article

Yialelis N, Lupu EC, Sloman MS, 1996, Role Based Security for Distributed Object Systems, IEEE Fifth Workshops on Enabling Technologies : Infrastructure for Collaborative Enterprises, Stanford University

Conference paper


Lupu EC, Sloman MS, Yialelis N, 1995, A Policy Based Role Framework for Access Control, First ACM/NIST Role Based Access Control Workshop (USA), Publisher: ACM Press

Conference paper

Paudice A, Muñoz-González L, Gyorgy A, Lupu EC, et al., Detection of Adversarial Training Examples in Poisoning Attacks through Anomaly Detection

Machine learning has become an important component for many systems and applications including computer vision, spam filtering, malware and network intrusion detection, among others. Despite the capabilities of machine learning algorithms to extract valuable information from data and produce accurate predictions, it has been shown that these algorithms are vulnerable to attacks. Data poisoning is one of the most relevant security threats against machine learning systems, where attackers can subvert the learning process by injecting malicious samples in the training data. Recent work in adversarial machine learning has shown that the so-called optimal attack strategies can successfully poison linear classifiers, degrading the performance of the system dramatically after compromising a small fraction of the training dataset. In this paper we propose a defence mechanism to mitigate the effect of these optimal poisoning attacks based on outlier detection. We show empirically that the adversarial examples generated by these attack strategies are quite different from genuine points, as no detectability constraints are considered to craft the attack. Hence, they can be detected with an appropriate pre-filtering of the training dataset.

Working paper

Karafili E, Wang L, Lupu EC, An Argumentation-Based Approach to Assist in the Investigation and Attribution of Cyber-Attacks

We expect an increase in the frequency and severity of cyber-attacks that comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps in constructing efficient and targeted mitigative and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) that helps the analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from the cyber-attack, our reasoner helps the analyst to identify who performed the attack and suggests where to focus further analyses by giving hints of the missing evidence, or further investigation paths to follow. ABR is the first automatic reasoner that analyzes and attributes cyber-attacks by using technical and social evidence, as well as incomplete and conflicting information. ABR was tested on realistic cyber-attack cases.

Working paper

Co KT, Muñoz-González L, Kanthan L, Glocker B, Lupu EC, et al., Universal Adversarial Perturbations to Understand Robustness of Texture vs. Shape-biased Training

Convolutional Neural Networks (CNNs) used on image classification tasks such as ImageNet have been shown to be biased towards recognizing textures rather than shapes. Recent work has attempted to alleviate this by augmenting the training dataset with shape-based examples to create Stylized-ImageNet. However, in this paper we show that models trained on this dataset remain vulnerable to Universal Adversarial Perturbations (UAPs). We use UAPs to evaluate and compare the robustness of CNN models with varying degrees of shape-based training. We also find that a posteriori fine-tuning on ImageNet negates features learned from training on Stylized-ImageNet. This study reveals an important limitation and reiterates the need for further research into understanding the robustness of CNNs for visual recognition.

Journal article

Muñoz-González L, Pfitzner B, Russo M, Carnerero-Cano J, Lupu EC, et al., Poisoning Attacks with Generative Adversarial Nets

Machine learning algorithms are vulnerable to poisoning attacks: an adversary can inject malicious points in the training dataset to influence the learning process and degrade the algorithm's performance. Optimal poisoning attacks have already been proposed to evaluate worst-case scenarios, modelling attacks as a bi-level optimization problem. Solving these problems is computationally demanding and has limited applicability for some models such as deep networks. In this paper we introduce a novel generative model to craft systematic poisoning attacks against machine learning classifiers generating adversarial training examples, i.e. samples that look like genuine data points but that degrade the classifier's accuracy when used for training. We propose a Generative Adversarial Net with three components: generator, discriminator, and the target classifier. This approach allows us to model naturally the detectability constraints that can be expected in realistic attacks and to identify the regions of the underlying data distribution that can be more vulnerable to data poisoning. Our experimental evaluation shows the effectiveness of our attack to compromise machine learning classifiers, including deep networks.

Journal article

Muñoz-González L, Co KT, Lupu EC, Byzantine-Robust Federated Machine Learning through Adaptive Model Averaging

Federated learning enables training collaborative machine learning models at scale with many participants whilst preserving the privacy of their datasets. Standard federated learning techniques are vulnerable to Byzantine failures, biased local datasets, and poisoning attacks. In this paper we introduce Adaptive Federated Averaging, a novel algorithm for robust federated learning that is designed to detect failures, attacks, and bad updates provided by participants in a collaborative model. We propose a Hidden Markov Model to model and learn the quality of model updates provided by each participant during training. In contrast to existing robust federated learning schemes, we propose a robust aggregation rule that detects and discards bad or malicious local model updates at each training iteration. This includes a mechanism that blocks unwanted participants, which also increases the computational and communication efficiency. Our experimental evaluation on 4 real datasets shows that our algorithm is significantly more robust to faulty, noisy and malicious participants, whilst being computationally more efficient than other state-of-the-art robust federated learning methods such as Multi-KRUM and coordinate-wise median.

Journal article

Co KT, Muñoz-González L, Lupu EC, Sensitivity of Deep Convolutional Networks to Gabor Noise

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
