Imperial College London

Professor Emil Lupu

Faculty of Engineering, Department of Computing

Professor of Computer Systems
 
 
 

Contact

 

e.c.lupu

 
 

Location

 

564 Huxley Building, South Kensington Campus


Summary

 

Publications


237 results found

Carnerero-Cano J, Muñoz-González L, Spencer P, Lupu EC, et al., 2020, Regularisation Can Mitigate Poisoning Attacks: A Novel Analysis Based on Multiobjective Bilevel Optimisation

Other

Karafili E, Wang L, Lupu E, An argumentation-based reasoner to assist digital investigation and attribution of cyber-attacks, DFRWS EU, Publisher: Elsevier, ISSN: 1742-2876

We expect an increase in the frequency and severity of cyber-attacks that comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process, by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests to the analyst where to focus further analyses by giving hints of the missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or reach attribution conclusions.

Conference paper

Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Publisher: Springer, Pages: 47-79

Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality, improved quality of life for users, e.g., through personalization, optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms’ weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.

Book chapter

Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press

The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations in synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.
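The selection criterion described above can be illustrated with a minimal sketch: each candidate response is scored by its immediate action cost plus the expected availability losses remaining over the planning horizon, and the cheapest total is chosen. All action names, costs and loss rates below are hypothetical, not taken from the paper.

```python
# Minimal sketch of cost-based countermeasure selection (hypothetical values).

HORIZON_HOURS = 24  # assumed planning horizon

candidate_actions = {
    # action name:        (immediate action cost, expected availability loss per hour)
    "do_nothing":          (0.0,  120.0),  # attack keeps spreading to dependent services
    "isolate_subnet":      (50.0,  80.0),  # containment also disrupts dependent services
    "recover_services":    (200.0, 15.0),  # focus on restoring functionality
    "isolate_and_recover": (260.0, 10.0),
}

def total_expected_cost(action_cost, loss_per_hour, horizon=HORIZON_HOURS):
    """Action cost plus expected availability losses over the planning horizon."""
    return action_cost + loss_per_hour * horizon

best = min(candidate_actions, key=lambda a: total_expected_cost(*candidate_actions[a]))
for name, (cost, loss) in candidate_actions.items():
    print(f"{name:20s} expected total cost: {total_expected_cost(cost, loss):8.1f}")
print("selected countermeasure:", best)
```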

Conference paper

Hau Z, Lupu EC, 2019, Exploiting correlations to detect false data injections in low-density wireless sensor networks, Cyber-Physical System Security Workshop, Publisher: ACM Press

We propose a novel framework to detect false data injections in a low-density sensor environment with heterogeneous sensor data. The proposed detection algorithm learns how each sensor's data correlates within the sensor network, and false data is identified by exploiting the anomalies in these correlations. When a large number of sensors measuring homogeneous data are deployed, data correlations in space at a fixed snapshot in time could be used as a basis to detect anomalies. Exploiting disruptions in correlations when false data is injected has been used in a high-density sensor setting and proven to be effective. With increasing adoption of sensor deployments in low-density settings, there is a need to develop detection techniques for these applications. However, with constraints on the number of sensors and different data types, we propose the use of temporal correlations across the heterogeneous data to determine the authenticity of the reported data. We also provide an adversarial model that utilizes a graphical method to devise complex attack strategies where an attacker injects coherent false data in multiple sensors to provide a false representation of the physical state of the system with the aim of subverting detection. This allows us to test the detection algorithm and assess its performance in improving the resilience of the sensor network against data integrity attacks.
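A minimal sketch of the underlying idea, using two synthetic, normally correlated heterogeneous streams; the signals, window size, threshold and injected data are purely illustrative, not the paper's algorithm or data.

```python
import numpy as np

# Sketch: flag windows where the usual temporal correlation between two
# heterogeneous sensor streams breaks down.
rng = np.random.default_rng(0)
t = np.arange(1000)
temperature = 20 + 5 * np.sin(t / 50) + rng.normal(0, 0.2, t.size)
power_draw = 2.0 + 0.1 * temperature + rng.normal(0, 0.01, t.size)  # normally tracks temperature

# False data injection into one stream only: it no longer tracks the other.
power_draw[600:650] = 2.6 + rng.normal(0, 0.05, 50)

WINDOW, THRESHOLD = 50, 0.5
for start in range(0, t.size - WINDOW + 1, WINDOW):
    a = temperature[start:start + WINDOW]
    b = power_draw[start:start + WINDOW]
    corr = np.corrcoef(a, b)[0, 1]
    if corr < THRESHOLD:
        print(f"possible false data injection in samples {start}-{start + WINDOW}: r = {corr:.2f}")
```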

Conference paper

Co KT, Munoz Gonzalez L, de Maupeou S, Lupu E, et al., Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM

Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples—perturbed inputs specifically designed to produce intentional errors in the learning algorithms at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters to construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight on the nature of some universal adversarial perturbations and how they could be generated in other applications.
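A rough sketch of what a procedural-noise perturbation of this kind looks like, here built from a single Gabor-style kernel convolved with sparse impulses. The kernel parameters, image size and the 8/255 budget are illustrative assumptions, and no specific DCN is attacked here.

```python
import numpy as np

def gabor_kernel(size=23, sigma=4.0, freq=0.25, theta=np.pi / 4):
    """Single Gabor kernel: an oriented sinusoid under a Gaussian envelope."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    rot = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(shape=(224, 224), density=0.01, seed=0, **kernel_params):
    """Sparse impulses convolved with a Gabor kernel give an anisotropic noise pattern."""
    rng = np.random.default_rng(seed)
    impulses = rng.choice([0.0, 1.0, -1.0], size=shape,
                          p=[1 - density, density / 2, density / 2])
    k = gabor_kernel(**kernel_params)
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    # circular FFT convolution (adequate for producing a noise pattern)
    noise = np.real(np.fft.ifft2(np.fft.fft2(impulses) * np.fft.fft2(pad)))
    return noise / np.abs(noise).max()

# Scale to an L-infinity budget and apply the same pattern to any input image.
epsilon = 8 / 255
perturbation = epsilon * gabor_noise()
image = np.random.rand(224, 224)            # stand-in for a real input
adversarial = np.clip(image + perturbation, 0.0, 1.0)
```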

Conference paper

Chizari H, Lupu EC, 2019, Extracting randomness from the trend of IPI for cryptographic operators in implantable medical devices, IEEE Transactions on Dependable and Secure Computing, ISSN: 1545-5971

Achieving secure communication between an Implantable Medical Device (IMD) inside the body and a gateway outside the body has shown its criticality with recent reports of hackings such as in St. Jude Medical's implantable cardiac devices, Johnson and Johnson insulin pumps and vulnerabilities in brain neuro-implants. Since asymmetric cryptography is not a practical solution for IMDs due to their scarce computational and power resources, symmetric key cryptography is preferred. One of the factors in the security of a symmetric cryptographic system is the use of a strong encryption key. A solution to derive such a strong key without using extensive resources in an IMD is to extract it from the body's physiological signals. For the key to be strong enough, the physiological signal must be a strong source of randomness, and the InterPulse Interval (IPI) has been suggested as such a source. A strong randomness source should satisfy five conditions: Universality (available in all people), Liveness (available at any time), Robustness (strong random numbers), Permanence (independent from its history) and Uniqueness (independent from other sources). Nevertheless, for the random extraction methods from IPI proposed to date, these conditions (mainly the last three) were not examined. In this study, firstly, we propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Then, using a large dataset of IPI values (almost 900,000,000 IPIs), we show that IPI does not satisfy the Robustness and Permanence conditions as a randomness source; thus, extraction of a strong uniform random number from IPI values is, mathematically, impossible. Thirdly, rather than using the value of IPI, we propose the trend of IPI as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluat…
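As a rough illustration of the "dependency to history" (Permanence) test mentioned above, the sketch below estimates an empirical Santha-Vazirani-style delta, i.e. how far Pr(next bit = 1 | last h bits) strays from 1/2. The bit streams are synthetic stand-ins, not IPI data, and the history length and count threshold are assumptions.

```python
import numpy as np
from collections import defaultdict

def empirical_sv_delta(bits, history_len=4, min_count=100):
    """Max deviation of Pr(next bit = 1 | history) from 1/2 over observed histories."""
    counts = defaultdict(lambda: [0, 0])          # history -> [#zeros, #ones]
    for i in range(history_len, len(bits)):
        h = tuple(bits[i - history_len:i])
        counts[h][bits[i]] += 1
    deltas = [abs(ones / (zeros + ones) - 0.5)
              for zeros, ones in counts.values() if zeros + ones >= min_count]
    return max(deltas) if deltas else None        # 0 = ideal, 0.5 = fully predictable

rng = np.random.default_rng(1)
ideal = rng.integers(0, 2, 200_000)                          # close to uniform, independent
biased = rng.integers(0, 2, 200_000)
biased[1:] = ((biased[1:] + biased[:-1]) > 0).astype(int)    # strongly history-dependent

print("delta (uniform bits):          ", empirical_sv_delta(list(ideal)))
print("delta (history-dependent bits):", empirical_sv_delta(list(biased)))
```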

Journal article

Spanaki K, Gürgüç Z, Mulligan C, Lupu EC, et al., 2019, Organizational cloud security and control: a proactive approach, Information Technology and People, Vol: 32, Pages: 516-537, ISSN: 0959-3845

Purpose: The purpose of this paper is to unfold the perceptions around additional security in cloud environments by highlighting the importance of controlling mechanisms as an approach to the ethical use of the systems. The study focuses on the effects of the controlling mechanisms in maintaining an overall secure position for the cloud and the mediating role of ethical behavior in this relationship. Design/methodology/approach: A case study was conducted, examining the adoption of managed cloud security services as a means of control, as well as a large-scale survey of the views of IT decision makers about the effects of such adoption on overall cloud security. Findings: The findings indicate that there is indeed a positive relationship between the adoption of controlling mechanisms and the maintenance of overall cloud security, which increases when users follow ethical behavior in the use of the cloud. A framework based on the findings is built, suggesting a research agenda for the future and a conceptualization of the field. Research limitations/implications: One of the major limitations of the study is the fact that the data collection was based on the perceptions of IT decision makers from a cross-section of industries; however, the proposed framework should also be examined in industry-specific contexts. Although firm size was indicated as a highly influencing factor, it was not considered in this study, as the data collection targeted organizations of various sizes. Originality/value: This study extends the research on IS security behavior based on the notion that individuals (clients and providers of cloud infrastructure) are protecting something separate from themselves, in a cloud-based environment, sharing responsibility and trust with their peers. The organization in this context focuses on managed security solutions as a proactive measure to preserve cloud security in cloud environments.

Journal article

Co KT, Munoz Gonzalez L, Lupu E, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop

Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and implications deserve further in-depth study.

Conference paper

Collinge G, Lupu E, Munoz Gonzalez L, 2019, Defending against Poisoning Attacks in Online Learning Settings, European Symposium on Artificial Neural Networks, Publisher: ESANN

Machine learning systems are vulnerable to data poisoning, a coordinated attack where a fraction of the training dataset is manipulated by an attacker to subvert learning. In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios. We also propose two defence mechanisms to mitigate the effect of online poisoning attacks, by analysing the impact of the data points on the classifier and by means of an adaptive combination of machine learning classifiers with different learning rates. Our experimental evaluation supports the usefulness of our proposed defences to mitigate the effect of poisoning attacks in online learning settings.
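As a generic illustration of the first defence idea (filtering incoming points by their impact on the classifier), the sketch below rejects an online update when it noticeably worsens the loss on a small trusted validation set. The data, model, learning rate and threshold are illustrative assumptions; this is not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lr, THRESH = 5, 0.1, 0.05
w_true = rng.normal(size=d)

def loss(w, X, y):
    """Mean logistic loss of weights w on labelled data (X, y)."""
    p = 1.0 / (1.0 + np.exp(-X @ w))
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

X_val = rng.normal(size=(100, d))                   # small trusted validation set
y_val = (X_val @ w_true > 0).astype(float)

w = np.zeros(d)
for t in range(3000):
    x = rng.normal(size=d)
    y = float(x @ w_true > 0)
    if rng.random() < 0.1:                          # 10% of the stream is label-flipped
        y = 1 - y
    p = 1.0 / (1.0 + np.exp(-x @ w))
    w_candidate = w + lr * (y - p) * x              # tentative online update
    # accept the update only if it does not noticeably hurt trusted-data loss
    if loss(w_candidate, X_val, y_val) - loss(w, X_val, y_val) < THRESH:
        w = w_candidate

print("cosine similarity with true weights:",
      float(w @ w_true / (np.linalg.norm(w) * np.linalg.norm(w_true) + 1e-12)))
```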

Conference paper

Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, Lupu EC, et al., 2019, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, Vol: 16, Pages: 231-244, ISSN: 1941-0018

Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling static and dynamic network risk assessment. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.
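The kind of computation involved can be illustrated on a toy three-node attack graph with hypothetical exploit probabilities, here by brute-force enumeration rather than the efficient exact-inference techniques (e.g. variable elimination or junction trees) the paper advocates for realistic graph sizes.

```python
import itertools

# Toy Bayesian attack graph A -> B -> C with hypothetical probabilities.
P_A = 0.3                                  # attacker reaches the entry node A
P_B_GIVEN_A = {True: 0.8, False: 0.0}      # exploit A -> B succeeds with prob. 0.8
P_C_GIVEN_B = {True: 0.6, False: 0.0}      # exploit B -> C succeeds with prob. 0.6

def joint(a, b, c):
    pa = P_A if a else 1 - P_A
    pb = P_B_GIVEN_A[a] if b else 1 - P_B_GIVEN_A[a]
    pc = P_C_GIVEN_B[b] if c else 1 - P_C_GIVEN_B[b]
    return pa * pb * pc

def marginal(query, evidence=None):
    """P(query = True | evidence) by summing the joint over all states."""
    evidence = evidence or {}
    num = den = 0.0
    for a, b, c in itertools.product([False, True], repeat=3):
        state = {"A": a, "B": b, "C": c}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, b, c)
        den += p
        if state[query]:
            num += p
    return num / den

# Static analysis: unconditional risk of the critical asset C being compromised.
print("P(C) =", round(marginal("C"), 4))
# Dynamic analysis: update the risk after observing that B was compromised.
print("P(C | B compromised) =", round(marginal("C", {"B": True}), 4))
print("P(A | B compromised) =", round(marginal("A", {"B": True}), 4))
```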

Journal article

Paudice A, Muñoz-González L, Lupu EC, 2019, Label sanitization against label flipping poisoning attacks, Nemesis'18, Workshop on Recent Advances in Adversarial Machine Learning, Publisher: Springer Verlag, Pages: 5-15, ISSN: 0302-9743

Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data in the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective in significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
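A sketch of the detect-and-relabel step in the spirit described above: a training point whose label disagrees with a strong majority of its nearest neighbours is treated as suspicious and relabelled. The synthetic data, k and agreement threshold are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

flip = rng.choice(len(y), size=50, replace=False)   # label-flipping attack on 10% of points
y_poisoned = y.copy()
y_poisoned[flip] = 1 - y_poisoned[flip]

K, AGREEMENT = 10, 0.8
nn = NearestNeighbors(n_neighbors=K + 1).fit(X)
_, idx = nn.kneighbors(X)                           # idx[:, 0] is the point itself

y_sanitized = y_poisoned.copy()
for i in range(len(X)):
    neighbour_labels = y_poisoned[idx[i, 1:]]
    frac_ones = neighbour_labels.mean()
    if frac_ones >= AGREEMENT or frac_ones <= 1 - AGREEMENT:   # neighbours strongly agree
        majority = int(frac_ones >= 0.5)
        if majority != y_poisoned[i]:
            y_sanitized[i] = majority               # relabel the suspicious point

print("labels wrong before sanitisation:", int((y_poisoned != y).sum()))
print("labels wrong after sanitisation: ", int((y_sanitized != y).sum()))
```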

Conference paper

Steiner RV, Lupu E, 2019, Towards more practical software-based attestation, Computer Networks, Vol: 149, Pages: 43-55, ISSN: 1389-1286

Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments and especially so in a scenario like the Internet of Things where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which, in comparison with the current state-of-the-art solution, reduces the total attestation time and energy consumption around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).

Journal article

Karafili E, Spanaki K, Lupu E, 2019, Access Control and Quality Attributes of Open Data: Applications and Techniques, Workshop on Quality of Open Data, Publisher: Springer Verlag (Germany), Pages: 603-614, ISSN: 1865-1348

Open Datasets provide one of the most popular ways to acquire insight and information about individuals, organizations and multiple streams of knowledge. Exploring Open Datasets by applying comprehensive and rigorous techniques for data processing can provide the ground for innovation and value for everyone if the data are handled in a legal and controlled way. In our study, we propose an argumentation and abductive reasoning approach for data processing which is based on the data quality background. Explicitly, we draw on the literature of data management and quality for the attributes of the data, and we extend this background through the development of our techniques. Our aim is to provide herein a brief overview of the data quality aspects, as well as indicative applications and examples of our approach. Our overall objective is to bring serious intent and propose a structured way for access control and processing of open data with a focus on the data quality aspects.

Conference paper

Munoz Gonzalez L, Lupu E, 2019, The Security of Machine Learning Systems, AI in Cybersecurity, Editors: Sikos

Book chapter

Muñoz-González L, Pfitzner B, Russo M, Carnerero-Cano J, Lupu EC, et al., 2019, Poisoning Attacks with Generative Adversarial Nets

Other

Cullen A, Karafili E, Pilgrim A, Williams C, Lupu E, et al., 2018, Policy support for autonomous swarms of drones, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, Pages: 56-70, ISSN: 0302-9743

In recent years drones have become more widely used in military and non-military applications. Automation of these drones will become more important as their use increases. Individual drones acting autonomously will be able to achieve some tasks, but swarms of autonomous drones working together will be able to achieve much more complex tasks and be able to better adapt to changing environments. In this paper we describe an example scenario involving a swarm of drones from a military coalition and civil/humanitarian organisations that are working collaboratively to monitor areas at risk of flooding. We provide a definition of a swarm and how they can operate by exchanging messages. We define a flexible set of policies that are applicable to our scenario that can be easily extended to other scenarios or policy paradigms. These policies ensure that the swarms of drones behave as expected (e.g., for safety and security). Finally we discuss the challenges and limitations around policies for autonomous swarms and how new research, such as generative policies, can aid in solving these limitations.

Conference paper

Karafili E, Sgandurra D, Lupu E, 2018, A logic-based reasoner for discovering authentication vulnerabilities between interconnected accounts, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, Pages: 73-87, ISSN: 0302-9743

With users being more reliant on online services for their daily activities, there is an increasing risk for them to be threatened by cyber-attacks harvesting their personal information or banking details. These attacks are often facilitated by the strong interconnectivity that exists between online accounts, in particular due to the presence of shared (e.g., replicated) pieces of user information across different accounts. In addition, a significant proportion of users employs pieces of information, e.g. used to recover access to an account, that are easily obtainable from their social network accounts, and hence are vulnerable to correlation attacks, where a malicious attacker is either able to perform password reset attacks or take full control of user accounts. This paper proposes the use of verification techniques to analyse the possible vulnerabilities that arise from shared pieces of information among interconnected online accounts. Our primary contributions include a logic-based reasoner that is able to discover vulnerable online accounts, and a corresponding tool that provides modelling of user accounts, their interconnections, and vulnerabilities. Finally, the tool allows users to perform security checks of their online accounts and suggests possible countermeasures to reduce the risk of compromise.
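A toy fixed-point sketch of the kind of propagation such a reasoner performs (the actual tool uses a richer logic-based model); the account names, recovery requirements and the attacker's initial knowledge below are entirely hypothetical.

```python
# Each account needs a set of items to reset it, and taking it over leaks further items.
accounts = {
    "social":  {"recovery": {"email_addr", "birthday"}, "holds": {"pet_name", "phone"}},
    "webmail": {"recovery": {"pet_name", "birthday"},   "holds": {"email_addr", "bank_user"}},
    "bank":    {"recovery": {"bank_user", "phone"},     "holds": set()},
}
attacker_knows = {"email_addr", "birthday"}          # e.g. scraped from public profiles

compromised, changed = set(), True
while changed:                                       # iterate to a fixed point
    changed = False
    for name, acc in accounts.items():
        if name not in compromised and acc["recovery"] <= attacker_knows:
            compromised.add(name)
            attacker_knows |= acc["holds"]           # a takeover leaks more information
            changed = True

print("vulnerable accounts:", compromised)
```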

Conference paper

Karafili E, Wang L, Kakas A, Lupu E, et al., 2018, Helping forensic analysts to attribute cyber-attacks: an argumentation-based reasoner, International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2018), Publisher: Springer Verlag, Pages: 510-518, ISSN: 0302-9743

Discovering who performed a cyber-attack or from where it originated is essential in order to determine an appropriate response and future risk mitigation measures. In this work, we propose a novel argumentation-based reasoner for analyzing and attributing cyber-attacks that combines both technical and social evidence. Our reasoner helps the digital forensics analyst during the analysis of the forensic evidence by providing to the analyst the possible culprits of the attack, new derived evidence, hints about missing evidence, and insights about other paths of investigation. The proposed reasoner is flexible, deals with conflicting and incomplete evidence, and was tested on real cyber-attack cases.

Conference paper

Arunkumar S, Pipes S, Makaya C, Bertino E, Karafili E, Lupu E, Williams C, et al., 2018, Next generation firewalls for dynamic coalitions, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Firewalls represent a critical security building block for networks as they monitor and control incoming and outgoing network traffic based on the enforcement of predetermined security rules, referred to as firewall rules. Firewalls are constantly being improved to enhance network security. From being simple filtering devices, firewalls have evolved to operate in conjunction with intrusion detection and prevention systems. This paper reviews the existing firewall policies and assesses their application in highly dynamic networks such as coalition networks. The paper also describes the need for next-generation firewall policies and how the generative policy model can be leveraged.

Conference paper

Karafili E, Pipes S, Lupu E, 2018, Verification techniques for policy based systems, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Verification techniques are applied to policy based systems to ensure design correctness and to aid in the discovery of errors at an early stage of the development life cycle. A primary goal of policy verification is to evaluate the policy's validity. Other analyses on policy based systems include the identification of conflicting policies and policy efficiency evaluation and improvement. In this work, we present a discussion and classification of recent research on verification techniques for policy based systems. We analyse several techniques and identify popular supporting verification tools. An evaluation of the benefits and drawbacks of the existing policy analyses is made. Some of the commonly identified problems are the significant need for computational power, the limitation of the techniques to particular policy models, which restricts their extension to other policy models, and the lack of efficient conflict resolution methods. We use the evaluation results for discussing the further challenges and future research directions that will be faced by policy verification techniques. In particular, we discuss specific requirements concerning verification techniques for coalition policy systems and autonomous decision making.

Conference paper

Felmlee D, Lupu E, McMillan C, Karafili E, Bertino E, et al., 2018, Decision-making in policy governed human-autonomous systems teams, DAIS Workshop, 2017 IEEE SmartWorld Congress, Publisher: IEEE

Policies govern choices in the behavior of systems. They are applied to human behavior as well as to the behavior of autonomous systems but are defined differently in each case. Generally, humans have the ability to interpret the intent behind the policies and to bring about their desired effects, even occasionally violating them when the need arises. In contrast, policies for automated systems fully define the prescribed behavior without ambiguity, conflicts or omissions. The increasing use of AI techniques and machine learning in autonomous systems such as drones promises to blur these boundaries and allows us to conceive, in a similar way, more flexible policies for the spectrum of human-autonomous system collaborations. In coalition environments this spectrum extends across the boundaries of authority in pursuit of a common coalition goal and covers collaborations between human and autonomous systems alike. In the social sciences, social exchange theory has been applied successfully to explain human behavior in a variety of contexts. It provides a framework linking the expected rewards, costs, satisfaction and commitment to explain and anticipate the choices that individuals make when confronted with various options. We discuss here how it can be used within coalition environments to explain joint decision making and to help formulate policies, re-framing the concepts where appropriate. Social exchange theory is particularly attractive within this context as it provides a theory with “measurable” components that can be readily integrated in machine reasoning processes.

Conference paper

Spanaki K, Karafili E, Lupu E, 2018, Sharing agreements and quality attributes in data manufacturing, EurOMA 2018

Conference paper

Calo S, Verma D, Chakraborty S, Bertino E, Lupu E, Cirincione G, et al., 2018, Self-Generation of Access Control Policies, 23rd ACM Symposium on Access Control Models and Technologies (SACMAT), Publisher: ACM, Pages: 39-47

Conference paper

Rashid A, Danezis G, Chivers H, Lupu E, Martin A, Lewis M, Peersman C, et al., 2018, Scoping the Cyber Security Body of Knowledge, IEEE Security & Privacy, Vol: 16, Pages: 96-102, ISSN: 1540-7993

Journal article

Steiner RV, Barrère M, Lupu E, 2018, WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures, Living in the Internet of Things: Cybersecurity of the IoT - 2018

We propose a model to represent the health of WSNs that allows us to evaluate a network’s ability to execute its functions. Central to this model is how we quantify the importance of each network node. As we focus on the availability of the network data, we investigate how well different centrality measures identify the significance of each node for the network connectivity. In this process, we propose a new metric named current-flow sink betweenness. Through a number of experiments, we demonstrate that while no metric is invariably better in identifying sensors’ connectivity relevance, the proposed current-flow sink betweenness outperforms existing metrics in the vast majority of cases.
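A rough approximation of the idea using networkx: restrict current-flow betweenness to flows that terminate at the sink, which captures the spirit of the proposed current-flow sink betweenness (not necessarily its exact definition from the paper). The topology below is a toy example.

```python
import networkx as nx

# Toy WSN topology; rank nodes by the current flow they carry towards the sink.
G = nx.random_geometric_graph(30, radius=0.35, seed=42)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()  # keep one component
sink = min(G.nodes)                                              # arbitrary sink choice
sources = [n for n in G.nodes if n != sink]

scores = nx.current_flow_betweenness_centrality_subset(G, sources=sources, targets=[sink])
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print("nodes most critical for delivering data to the sink:", ranked[:5])
```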

Conference paper

Turner HCM, Chizari H, Lupu E, 2018, Step intervals and arterial pressure in PVS schemes, Living in the Internet of Things: Cybersecurity of the IoT - 2018, Publisher: Institution of Engineering and Technology, Pages: 36-45

We build upon the idea of Physiological Value Based Security (PVS) schemes as a means of securing body sensor networks (BSN). Such schemes provide a secure means for sensors in a BSN to communicate with one another, as long as they can measure the same underlying physiological signal. This avoids the use of pre-distributed keys and allows re-keying to be done easily. Such techniques require identifying signals and encoding methods that can be used in the scheme. Hence we first evaluate step interval as our physiological signal, using the existing modular encoding method and our proposed learned partitioning function as the encoding methods. We show that both of these are usable with the scheme and identify a suitable parametrisation. We then go on to evaluate arterial blood pressure using our proposed learned mean FFT coefficients method. We demonstrate that with the correct parameters this could also be used in the scheme. This further improves the usability of PVS schemes, by identifying two more signals that could be used, as well as two encoding methods that may also be useful for other signals.
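A toy illustration of the modular encoding idea under stated assumptions: two body sensors measure the same step intervals with small sensing noise and derive key bits by quantising each interval modulo 2^b, while an off-body attacker guessing plausible intervals obtains different bits. The synthetic gait data, quantisation step and bit width are all assumptions, not the paper's parametrisation.

```python
import numpy as np

rng = np.random.default_rng(0)
true_intervals_ms = rng.normal(550, 40, size=32)          # one step interval ~ 0.5-0.6 s

def key_bits(measured_ms, step_ms=8, bits=2):
    """Quantise each interval, keep it modulo 2**bits, and emit the bits."""
    symbols = np.round(measured_ms / step_ms).astype(int) % (2 ** bits)
    return "".join(format(int(s), f"0{bits}b") for s in symbols)

def agreement(k1, k2):
    return sum(c1 == c2 for c1, c2 in zip(k1, k2)) / len(k1)

sensor_a = true_intervals_ms + rng.normal(0, 1.0, 32)     # wrist sensor with noise
sensor_b = true_intervals_ms + rng.normal(0, 1.0, 32)     # ankle sensor with noise
attacker = rng.normal(550, 40, size=32)                   # off-body attacker guessing

key_a, key_b, key_e = key_bits(sensor_a), key_bits(sensor_b), key_bits(attacker)
print("bit agreement between body sensors:", agreement(key_a, key_b))
print("bit agreement with attacker guess: ", agreement(key_a, key_e))
```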

Conference paper

Chizari H, Lupu E, Thomas P, 2018, Randomness of physiological signals in generation cryptographic key for secure communication between implantable medical devices inside the body and the outside world, Living in the Internet of Things: Cybersecurity of the IoT - 2018, Publisher: Institution of Engineering and Technology

A physiological signal must have a certain level of randomness inside it to be a good source of randomness for generating cryptographic keys. Dependency to history is one of the measures used to examine the strength of a randomness source: the adversary has unlimited access to the history of random bits generated from the source and wants to predict the next random number based on it. Although many physiological signals have been proposed in the literature as good sources of randomness, no dependency-to-history analysis has been carried out to examine this fact. In this paper, using a large dataset of physiological signals collected from PhysioNet, the dependency to history of the Interpulse Interval (IPI), QRS complex, and EEG signals (including Alpha, Beta, Delta, Gamma and Theta waves) was examined. The results showed that despite the general assumption that physiological signals are random, all of them are weak sources of randomness with high dependency to their history. Among them, the Alpha wave of the EEG signal shows much better randomness and is a good candidate for a post-processing and randomness extraction algorithm.

Conference paper

Taylor P, Allpress S, Carr M, Lupu E, Norton J, Smith L, Blackstock J, Boyes H, Hudson-Smith A, Brass I, Chizari H, Cooper R, Coultron P, Craggs B, Davies N, De Roure D, Elsden M, Huth M, Lindley J, Maple C, Mittelstadt B, Nicolescu R, Nurse J, Procter R, Radanliev P, Rashid A, Sgandurra D, Skatova A, Taddeo M, Tanczer L, Vieira-Steiner R, Watson JDM, Wachter S, Wakenshaw S, Carvalho G, Thompson RJ, Westbury PS, et al., 2018, Internet of Things: Realising the Potential of a Trusted Smart World, London, Publisher: Royal Academy of Engineering

This report examines the policy challenges for the Internet of Things (IoT), and raises a broad range of issues that need to be considered if policy is to be effective and the potential economic value of IoT is harnessed. It builds on the Blackett review, The Internet of Things: making the most of the second digital revolution, adding detailed knowledge based on research from the PETRAS Cybersecurity of the Internet of Things Research Hub and input from Fellows of the Royal Academy of Engineering. The report targets government policymakers, regulators, standards bodies and national funding bodies, and will also be of interest to suppliers and adopters of IoT products and services.

Report

Munoz Gonzalez L, Lupu E, 2018, The secret of machine learning, ITNOW, Vol: 60, Pages: 38-39, ISSN: 1746-5702

Luis Muñoz-González and Emil C. Lupu, from Imperial College London, explore the vulnerabilities of machine learning algorithms.

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
