Karafili E, Wang L, Lupu E, An argumentation-based reasoner to assist digital investigation and attribution of cyber-attacks, DFRWS EU, Publisher: Elsevier, ISSN: 1742-2876
We expect an increase in the frequency and severity of cyber-attacks, which comes along with the need for efficient security countermeasures. The process of attributing a cyber-attack helps to construct efficient and targeted mitigating and preventive security measures. In this work, we propose an argumentation-based reasoner (ABR) as a proof-of-concept tool that can help a forensics analyst during the analysis of forensic evidence and the attribution process. Given the evidence collected from a cyber-attack, our reasoner can assist the analyst during the investigation process by helping him/her to analyze the evidence and identify who performed the attack. Furthermore, it suggests to the analyst where to focus further analyses by giving hints about missing evidence or new investigation paths to follow. ABR is the first automatic reasoner that can combine both technical and social evidence in the analysis of a cyber-attack, and that can also cope with incomplete and conflicting information. To illustrate how ABR can assist in the analysis and attribution of cyber-attacks, we have used examples of cyber-attacks and their analyses as reported in publicly available reports and online literature. We do not mean to either agree or disagree with the analyses presented therein or to reach attribution conclusions.
Muñoz-González L, Lupu EC, 2019, The security of machine learning systems, AI in Cybersecurity, Publisher: Springer, Pages: 47-79
Machine learning lies at the core of many modern applications, extracting valuable information from data acquired from numerous sources. It has produced a disruptive change in society, providing new functionality and improved quality of life for users, e.g., through personalization, the optimized use of resources, and the automation of many processes. However, machine learning systems can themselves be the targets of attackers, who might gain a significant advantage by exploiting the vulnerabilities of learning algorithms. Such attacks have already been reported in the wild in different application domains. This chapter describes the mechanisms that allow attackers to compromise machine learning systems by injecting malicious data or exploiting the algorithms' weaknesses and blind spots. Furthermore, mechanisms that can help mitigate the effect of such attacks are also explained, along with the challenges of designing more secure machine learning systems.
Soikkeli J, Muñoz-González L, Lupu E, 2019, Efficient attack countermeasure selection accounting for recovery and action costs, the 14th International Conference, Publisher: ACM Press
The losses arising from a system being hit by cyber attacks can be staggeringly high, but defending against such attacks can also be costly. This work proposes an attack countermeasure selection approach based on cost impact analysis that takes into account the impacts of actions by both the attacker and the defender. We consider a networked system providing services whose functionality depends on other components in the network. We model the costs and losses to service availability from compromises and defensive actions to the components, and show that while containment of the attack can be an effective defense, it may be more cost-efficient to allow parts of the attack to continue further whilst focusing on recovering services to a functional state. Based on this insight, we build a countermeasure selection method that chooses the most cost-effective action based on its impact on expected losses and costs over a given time horizon. Our method is evaluated using simulations on synthetic graphs representing network dependencies and vulnerabilities, and performs well in comparison to alternatives.
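The trade-off described in this abstract, immediate containment versus continued recovery over a time horizon, can be sketched with a toy cost model. All action names, costs, and degradation profiles below are invented for illustration and are not taken from the paper:

```python
def expected_total_cost(action, horizon):
    """Up-front action cost plus availability losses accumulated while
    services remain degraded (fraction `degraded(t)` of the full loss rate)."""
    return action["cost"] + sum(
        action["loss_rate"] * action["degraded"](t) for t in range(horizon)
    )

def select_countermeasure(actions, horizon=10):
    """Pick the action with the lowest expected cost over the horizon."""
    return min(actions, key=lambda a: expected_total_cost(a, horizon))

# Containment stops losses quickly but has a high action cost;
# recovery is cheap but services stay partially degraded for longer.
actions = [
    {"name": "contain", "cost": 60.0, "loss_rate": 10.0,
     "degraded": lambda t: 1.0 if t < 2 else 0.0},
    {"name": "recover", "cost": 10.0, "loss_rate": 10.0,
     "degraded": lambda t: max(0.0, 1.0 - 0.2 * t)},
]
best = select_countermeasure(actions)
```

With these illustrative numbers, allowing the attack to continue while recovering services is cheaper over the horizon than immediate containment, mirroring the insight stated in the abstract.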
Hau Z, Lupu EC, 2019, Exploiting correlations to detect false data injections in low-density wireless sensor networks, Cyber-Physical System Security Workshop, Publisher: ACM Press
We propose a novel framework to detect false data injections in a low-density sensor environment with heterogeneous sensor data. The proposed detection algorithm learns how each sensor's data correlates within the sensor network, and false data is identified by exploiting anomalies in these correlations. When a large number of sensors measuring homogeneous data are deployed, data correlations in space at a fixed snapshot in time can be used as a basis to detect anomalies. Exploiting disruptions in correlations when false data is injected has been used in high-density sensor settings and proven to be effective. With the increasing adoption of sensor deployments in low-density settings, there is a need to develop detection techniques for these applications. Given the constraints on the number of sensors and the different data types, we propose the use of temporal correlations across the heterogeneous data to determine the authenticity of the reported data. We also provide an adversarial model that uses a graphical method to devise complex attack strategies, where an attacker injects coherent false data into multiple sensors to provide a false representation of the physical state of the system with the aim of subverting detection. This allows us to test the detection algorithm and assess its performance in improving the resilience of the sensor network against data integrity attacks.
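As a rough illustration of the idea of detecting injected data via temporal correlations across heterogeneous streams, the sketch below learns a baseline correlation between two synthetic streams and flags windows that deviate from it. This is a simplified stand-in, not the paper's algorithm; the signals, window size, and threshold are all invented:

```python
import numpy as np

def window_corr(x, y, size):
    """Pearson correlation of x and y over consecutive non-overlapping windows."""
    return np.array([np.corrcoef(x[i:i + size], y[i:i + size])[0, 1]
                     for i in range(0, len(x) - size + 1, size)])

def fit_baseline(x, y, size=20):
    """Learn the typical correlation (mean, std) from clean training data."""
    c = window_corr(x, y, size)
    return c.mean(), c.std()

def flag_windows(x, y, mean, std, size=20, k=5.0):
    """Return indices of windows whose correlation deviates by > k std devs."""
    c = window_corr(x, y, size)
    return np.where(np.abs(c - mean) > k * std)[0]

rng = np.random.default_rng(0)
t = np.linspace(0, 40, 400)
temp = np.sin(t) + 0.05 * rng.standard_normal(400)   # e.g. a temperature sensor
hum = -np.sin(t) + 0.05 * rng.standard_normal(400)   # e.g. humidity, anti-correlated
mean, std = fit_baseline(temp, hum)

attacked = hum.copy()
# Inject false data into samples 200-259 (windows 10-12).
attacked[200:260] = 0.5 + 0.01 * rng.standard_normal(60)
suspicious = flag_windows(temp, attacked, mean, std)
```

The injected segment breaks the learned anti-correlation between the two streams, so the windows covering it are flagged while clean windows stay within the baseline spread.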
Co KT, Munoz Gonzalez L, de Maupeou S, et al., Procedural Noise Adversarial Examples for Black-Box Attacks on Deep Neural Networks, 26th ACM Conference on Computer and Communications Security, Publisher: ACM
Deep Convolutional Networks (DCNs) have been shown to be vulnerable to adversarial examples: perturbed inputs specifically designed to produce intentional errors in the learning algorithm at test time. Existing input-agnostic adversarial perturbations exhibit interesting visual patterns that are currently unexplained. In this paper, we introduce a structured approach for generating Universal Adversarial Perturbations (UAPs) with procedural noise functions. Our approach unveils the systemic vulnerability of popular DCN models like Inception v3 and YOLO v3, with single noise patterns able to fool a model on up to 90% of the dataset. Procedural noise allows us to generate a distribution of UAPs with high universal evasion rates using only a few parameters. Additionally, we propose Bayesian optimization to efficiently learn procedural noise parameters and construct inexpensive untargeted black-box attacks. We demonstrate that it can achieve an average of less than 10 queries per successful attack, a 100-fold improvement on existing methods. We further motivate the use of input-agnostic defences to increase the stability of models to adversarial perturbations. The universality of our attacks suggests that DCN models may be sensitive to aggregations of low-level class-agnostic features. These findings give insight into the nature of some universal adversarial perturbations and how they could be generated in other applications.
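A minimal sketch of what a procedural-noise perturbation of this kind can look like: a sum of randomly placed, oriented Gabor kernels, rescaled to an L-infinity budget and added to an input. The kernel parameters and the budget below are illustrative assumptions, not values used in the paper:

```python
import numpy as np

def gabor_kernel(size, sigma, freq, theta):
    """A single Gabor kernel: a Gaussian envelope times an oriented cosine."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    rot = xx * np.cos(theta) + yy * np.sin(theta)
    return np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * rot)

def gabor_noise(size, sigma=8.0, freq=0.1, theta=0.7, n_kernels=40, seed=0):
    """Sum randomly placed copies of one kernel; `size` must exceed kernel size."""
    rng = np.random.default_rng(seed)
    canvas = np.zeros((size, size))
    k = gabor_kernel(4 * int(sigma) + 1, sigma, freq, theta)
    ks = k.shape[0]
    for _ in range(n_kernels):
        x, y = rng.integers(0, size - ks, 2)
        canvas[y:y + ks, x:x + ks] += rng.choice([-1.0, 1.0]) * k
    return canvas / np.abs(canvas).max()       # normalise to [-1, 1]

def perturb(image, eps=8 / 255, seed=0):
    """Apply the noise as an input-agnostic perturbation with L_inf budget eps."""
    noise = gabor_noise(image.shape[0], seed=seed)
    return np.clip(image + eps * noise, 0.0, 1.0)

img = np.full((64, 64), 0.5)
adv = perturb(img)
```

Because the pattern depends only on a handful of parameters (sigma, frequency, orientation, kernel count), a search over this small parameter space is cheap, which is what makes a black-box attack over such noise families inexpensive.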
Chizari H, Lupu EC, 2019, Extracting randomness from the trend of IPI for cryptographic operators in implantable medical devices, IEEE Transactions on Dependable and Secure Computing, ISSN: 1545-5971
Achieving secure communication between an Implantable Medical Device (IMD) inside the body and a gateway outside the body has been shown to be critical by recent reports of hacks, such as those of St. Jude Medical's implantable cardiac devices and Johnson and Johnson insulin pumps, and of vulnerabilities in brain neuro-implants. Asymmetric cryptography in particular is not a practical solution for IMDs due to their scarce computational and power resources, so symmetric key cryptography is preferred. One of the factors in the security of a symmetric cryptographic system is the use of a strong encryption key. A solution for deriving such a strong key without using extensive resources in an IMD is to extract it from the body's physiological signals. For the key to be strong enough, the physiological signal must be a strong source of randomness, and the InterPulse Interval (IPI) has been suggested as such a source. A strong randomness source should satisfy five conditions: Universality (available in all people), Liveness (available at any time), Robustness (producing strong random numbers), Permanence (independence from its own history) and Uniqueness (independence from other sources). Nevertheless, existing methods for randomness extraction from IPI have not been examined against these conditions (mainly the last three). In this study, we first propose a methodology to measure the last three conditions: information secrecy measures for Robustness, the Santha-Vazirani source delta value for Permanence, and random source dependency analysis for Uniqueness. Then, using a very large dataset of IPI values (almost 900,000,000 IPIs), we show that IPI satisfies neither the Robustness nor the Permanence condition as a randomness source. Thus, extracting a strong uniform random number from the value of an IPI is mathematically impossible. Thirdly, rather than using the value of the IPI, we propose the trend of the IPI as the source for a new randomness extraction method named Martingale Randomness Extraction from IPI (MRE-IPI). We evaluat
Spanaki K, Gürgüç Z, Mulligan C, et al., 2019, Organizational cloud security and control: a proactive approach, Information Technology and People, Vol: 32, Pages: 516-537, ISSN: 0959-3845
Purpose: The purpose of this paper is to unfold the perceptions around additional security in cloud environments by highlighting the importance of controlling mechanisms as an approach to the ethical use of the systems. The study focuses on the effects of the controlling mechanisms in maintaining an overall secure position for the cloud, and on the mediating role of ethical behavior in this relationship.
Design/methodology/approach: A case study was conducted, examining the adoption of managed cloud security services as a means of control, as well as a large-scale survey of the views of IT decision makers about the effects of such adoption on overall cloud security.
Findings: The findings indicate that there is indeed a positive relationship between the adoption of controlling mechanisms and the maintenance of overall cloud security, which increases when the users follow an ethical behavior in the use of the cloud. A framework based on the findings is built, suggesting a research agenda for the future and a conceptualization of the field.
Research limitations/implications: One of the major limitations of the study is the fact that the data collection was based on the perceptions of IT decision makers from a cross-section of industries; the proposed framework should, however, also be examined in industry-specific contexts. Although firm size was indicated as a highly influencing factor, it was not considered for this study, as the data collection targeted organizations of various sizes.
Originality/value: This study extends the research on IS security behavior based on the notion that individuals (clients and providers of cloud infrastructure) are protecting something separate from themselves in a cloud-based environment, sharing responsibility and trust with their peers. The organization in this context focuses on managed security solutions as a proactive measure to preserve security in cloud environments.
Co KT, Munoz Gonzalez L, Lupu E, Sensitivity of Deep Convolutional Networks to Gabor Noise, ICML 2019 Workshop
Deep Convolutional Networks (DCNs) have been shown to be sensitive to Universal Adversarial Perturbations (UAPs): input-agnostic perturbations that fool a model on large portions of a dataset. These UAPs exhibit interesting visual patterns, but this phenomenon is, as yet, poorly understood. Our work shows that visually similar procedural noise patterns also act as UAPs. In particular, we demonstrate that different DCN architectures are sensitive to Gabor noise patterns. This behaviour, its causes, and its implications deserve further in-depth study.
Collinge G, Lupu E, Munoz Gonzalez L, 2019, Defending against Poisoning Attacks in Online Learning Settings, European Symposium on Artificial Neural Networks, Publisher: ESANN
Machine learning systems are vulnerable to data poisoning, a coordinated attack where a fraction of the training dataset is manipulated by an attacker to subvert learning. In this paper we first formulate an optimal attack strategy against online learning classifiers to assess worst-case scenarios. We also propose two defence mechanisms to mitigate the effect of online poisoning attacks by analysing the impact of the data points in the classifier and by means of an adaptive combination of machine learning classifiers with different learning rates. Our experimental evaluation supports the usefulness of our proposed defences to mitigate the effect of poisoning attacks in online learning settings.
Munoz Gonzalez L, Sgandurra D, Barrere Cambrun M, et al., 2019, Exact Inference Techniques for the Analysis of Bayesian Attack Graphs, IEEE Transactions on Dependable and Secure Computing, Vol: 16, Pages: 231-244, ISSN: 1941-0018
Attack graphs are a powerful tool for security risk assessment by analysing network vulnerabilities and the paths attackers can use to compromise network resources. The uncertainty about the attacker's behaviour makes Bayesian networks suitable to model attack graphs to perform static and dynamic analysis. Previous approaches have focused on the formalization of attack graphs into a Bayesian model rather than proposing mechanisms for their analysis. In this paper we propose to use efficient algorithms to make exact inference in Bayesian attack graphs, enabling the static and dynamic network risk assessments. To support the validity of our approach we have performed an extensive experimental evaluation on synthetic Bayesian attack graphs with different topologies, showing the computational advantages in terms of time and memory use of the proposed techniques when compared to existing approaches.
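For intuition, exact inference on a tiny three-node Bayesian attack graph can be done by brute-force enumeration. The paper proposes far more efficient exact techniques; the graph structure and probabilities below are invented purely for illustration:

```python
from itertools import product

# Toy attack graph: A = perimeter host compromised, B = internal host
# (exploitable only via A), C = target (exploitable only via B).
p_a = 0.3
p_b_given = {True: 0.8, False: 0.0}   # P(B | A)
p_c_given = {True: 0.6, False: 0.0}   # P(C | B)

def joint(a, b, c):
    """Joint probability of one full assignment of the three nodes."""
    pa = p_a if a else 1 - p_a
    pb = p_b_given[a] if b else 1 - p_b_given[a]
    pc = p_c_given[b] if c else 1 - p_c_given[b]
    return pa * pb * pc

def marginal_c(evidence=None):
    """P(C = True | evidence), with evidence mapping node name -> bool."""
    evidence = evidence or {}
    num = den = 0.0
    for a, b, c in product([False, True], repeat=3):
        state = {"A": a, "B": b, "C": c}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(a, b, c)
        den += p
        if c:
            num += p
    return num / den

prior = marginal_c()                  # static risk assessment
posterior = marginal_c({"B": True})   # dynamic: B observed compromised
```

The static assessment gives P(C) = 0.3 x 0.8 x 0.6 = 0.144; conditioning on the observation that B is compromised raises the target's risk to 0.6. Enumeration is exponential in the number of nodes, which is exactly why the efficient exact inference algorithms the paper studies matter at realistic network sizes.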
Paudice A, Muñoz-González L, Lupu EC, 2019, Label sanitization against label flipping poisoning attacks, Nemesis'18. Workshop in Recent Advances in Adversarial Machine Learning, Publisher: Springer Verlag, Pages: 5-15, ISSN: 0302-9743
Many machine learning systems rely on data collected in the wild from untrusted sources, exposing the learning algorithms to data poisoning. Attackers can inject malicious data into the training dataset to subvert the learning process, compromising the performance of the algorithm and producing errors in a targeted or an indiscriminate way. Label flipping attacks are a special case of data poisoning, where the attacker can control the labels assigned to a fraction of the training points. Even if the capabilities of the attacker are constrained, these attacks have been shown to be effective in significantly degrading the performance of the system. In this paper we propose an efficient algorithm to perform optimal label flipping poisoning attacks and a mechanism to detect and relabel suspicious data points, mitigating the effect of such poisoning attacks.
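A hedged sketch of label sanitization in this spirit: treat a training point as suspicious when its label disagrees with most of its k nearest neighbours, and relabel it to the neighbourhood majority. The paper's exact algorithm and parameters may differ; the data below are synthetic:

```python
import numpy as np

def relabel_suspicious(X, y, k=3, threshold=0.5):
    """Relabel points whose label disagrees with most of their k neighbours."""
    y_clean = y.copy()
    for i in range(len(X)):
        dists = np.linalg.norm(X - X[i], axis=1)
        dists[i] = np.inf                        # exclude the point itself
        nn = np.argsort(dists)[:k]
        agree_frac = np.mean(y[nn] == y[i])
        if agree_frac < threshold:               # neighbours mostly disagree
            vals, counts = np.unique(y[nn], return_counts=True)
            y_clean[i] = vals[np.argmax(counts)]
    return y_clean

# Two well-separated clusters with one flipped label in each.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [0.1, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1], [5.1, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1, 1, 0])           # indices 3 and 7 are flipped
sanitized = relabel_suspicious(X, y)
```

On this toy data the two flipped labels are restored to their cluster majority while the genuine labels are left untouched, which is the behaviour a sanitization defence of this kind aims for.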
Steiner RV, Lupu E, 2019, Towards more practical software-based attestation, Computer Networks, Vol: 149, Pages: 43-55, ISSN: 1389-1286
Software-based attestation promises to enable the integrity verification of untrusted devices without requiring any particular hardware. However, existing proposals rely on strong assumptions that hinder their deployment and might even weaken their security. One such assumption is that using the maximum known network round-trip time to define the attestation timeout allows all honest devices to reply in time. While this is normally true in controlled environments, it is generally false in real deployments, and especially so in a scenario like the Internet of Things, where numerous devices communicate over an intrinsically unreliable wireless medium. Moreover, a larger timeout demands more computations, consuming extra time and energy and restraining the untrusted device from performing its main tasks. In this paper, we review this fundamental and yet overlooked assumption and propose a novel stochastic approach that significantly improves the overall attestation performance. Our experimental evaluation with IoT devices communicating over real-world uncontrolled Wi-Fi networks demonstrates the practicality and superior performance of our approach, which, in comparison with the current state-of-the-art solution, reduces the total attestation time and energy consumption by around seven times for honest devices and two times for malicious ones, while improving the detection rate of honest devices (8% higher TPR) without compromising security (0% FPR).
Karafili E, Spanaki K, Lupu E, 2019, Access Control and Quality Attributes of Open Data: Applications and Techniques, Workshop on Quality of Open Data, Publisher: Springer Verlag (Germany), Pages: 603-614, ISSN: 1865-1348
Open Datasets provide one of the most popular ways to acquire insight and information about individuals, organizations and multiple streams of knowledge. Exploring Open Datasets by applying comprehensive and rigorous techniques for data processing can provide the ground for innovation and value for everyone, if the data are handled in a legal and controlled way. In our study, we propose an argumentation and abductive reasoning approach for data processing that is based on the data quality background. Explicitly, we draw on the data management and quality literature for the attributes of the data, and we extend this background through the development of our techniques. Our aim is to provide a brief overview of the data quality aspects, as well as indicative applications and examples of our approach. Our overall objective is to propose a structured way for access control and processing of open data, with a focus on the data quality aspects.
Karafili E, Sgandurra D, Lupu E, 2018, A logic-based reasoner for discovering authentication vulnerabilities between interconnected accounts, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, Pages: 73-87, ISSN: 0302-9743
With users being more reliant on online services for their daily activities, there is an increasing risk of their being targeted by cyber-attacks harvesting their personal information or banking details. These attacks are often facilitated by the strong interconnectivity that exists between online accounts, in particular due to the presence of shared (e.g., replicated) pieces of user information across different accounts. In addition, a significant proportion of users employ pieces of information, e.g. used to recover access to an account, that are easily obtainable from their social network accounts, and hence are vulnerable to correlation attacks, where a malicious attacker is able either to perform password reset attacks or to take full control of user accounts. This paper proposes the use of verification techniques to analyse the possible vulnerabilities that arise from shared pieces of information among interconnected online accounts. Our primary contributions include a logic-based reasoner that is able to discover vulnerable online accounts, and a corresponding tool that provides modelling of user accounts, their interconnections, and vulnerabilities. Finally, the tool allows users to perform security checks of their online accounts and suggests possible countermeasures to reduce the risk of compromise.
Cullen A, Karafili E, Pilgrim A, et al., Policy support for autonomous swarms of drones, 1st International Workshop on Emerging Technologies for Authorization and Authentication, Publisher: Springer Verlag, ISSN: 0302-9743
In recent years drones have become more widely used in military and non-military applications. Automation of these drones will become more important as their use increases. Individual drones acting autonomously will be able to achieve some tasks, but swarms of autonomous drones working together will be able to achieve much more complex tasks and be able to better adapt to changing environments. In this paper we describe an example scenario involving a swarm of drones from a military coalition and civil/humanitarian organisations that are working collaboratively to monitor areas at risk of flooding. We provide a definition of a swarm and how they can operate by exchanging messages. We define a flexible set of policies that are applicable to our scenario that can be easily extended to other scenarios or policy paradigms. These policies ensure that the swarms of drones behave as expected (e.g., for safety and security). Finally we discuss the challenges and limitations around policies for autonomous swarms and how new research, such as generative policies, can aid in solving these limitations.
Karafili E, Wang L, Kakas A, et al., 2018, Helping forensic analysts to attribute cyber-attacks: an argumentation-based reasoner, International Conference on Principles and Practice of Multi-Agent Systems (PRIMA 2018), Publisher: Springer Verlag, Pages: 510-518, ISSN: 0302-9743
Discovering who performed a cyber-attack or from where it originated is essential in order to determine an appropriate response and future risk mitigation measures. In this work, we propose a novel argumentation-based reasoner for analyzing and attributing cyber-attacks that combines both technical and social evidence. Our reasoner helps the digital forensics analyst during the analysis of the forensic evidence by providing the possible culprits of the attack, newly derived evidence, hints about missing evidence, and insights about other paths of investigation. The proposed reasoner is flexible, deals with conflicting and incomplete evidence, and was tested on real cyber-attack cases.
Firewalls represent a critical security building block for networks, as they monitor and control incoming and outgoing network traffic based on the enforcement of predetermined security rules, referred to as firewall rules. Firewalls are constantly being improved to enhance network security. From being simple filtering devices, firewalls have evolved to operate in conjunction with intrusion detection and prevention systems. This paper reviews existing firewall policies and assesses their application in highly dynamic networks such as coalition networks. The paper also describes the need for next-generation firewall policies and how the generative policy model can be leveraged.
Spanaki K, Karafili E, Lupu E, 2018, Sharing agreements and quality attributes in data manufacturing, EurOMA 2018
Calo S, Verma D, Chakraborty S, et al., 2018, Self-Generation of Access Control Policies, 23rd ACM Symposium on Access Control Models and Technologies (SACMAT), Publisher: ASSOC COMPUTING MACHINERY, Pages: 39-47
Steiner RV, Barrère M, Lupu E, 2018, WSNs Under Attack! How Bad Is It? Evaluating Connectivity Impact Using Centrality Measures, Living in the Internet of Things: Cybersecurity of the IoT - 2018
We propose a model to represent the health of WSNs that allows us to evaluate a network’s ability to execute its functions. Central to this model is how we quantify the importance of each network node. As we focus on the availability of the network data, we investigate how well different centrality measures identify the significance of each node for the network connectivity. In this process, we propose a new metric named current-flow sink betweenness. Through a number of experiments, we demonstrate that while no metric is invariably better in identifying sensors’ connectivity relevance, the proposed current-flow sink betweenness outperforms existing metrics in the vast majority of cases.
Chizari H, Lupu E, Thomas P, 2018, Randomness of physiological signals in generation cryptographic key for secure communication between implantable medical devices inside the body and the outside world, Living in the Internet of Things: Cybersecurity of the IoT - 2018, Publisher: Institution of Engineering and Technology
A physiological signal must have a certain level of randomness to be a good source for generating cryptographic keys. Dependency on history is one of the measures used to examine the strength of a randomness source: the adversary has unbounded access to the history of random bits generated from the source and tries to predict the next random number from it. Although many physiological signals have been proposed in the literature as good sources of randomness, no dependency-on-history analysis has been carried out to examine this. In this paper, using a large dataset of physiological signals collected from PhysioNet, the dependency on history of the Interpulse Interval (IPI), the QRS complex, and EEG signals (including alpha, beta, delta, gamma and theta waves) was examined. The results showed that, despite the general assumption that physiological signals are random, all of them are weak sources of randomness with high dependency on their history. Among them, the alpha wave of the EEG signal shows much better randomness and is a good candidate for post-processing and randomness extraction algorithms.
Turner HCM, Chizari H, Lupu E, 2018, Step intervals and arterial pressure in PVS schemes, Living in the Internet of Things: Cybersecurity of the IoT - 2018, Publisher: Institution of Engineering and Technology, Pages: 36-45
We build upon the idea of Physiological Value based Security (PVS) schemes as a means of securing body sensor networks (BSNs). Such schemes provide a secure means for sensors in a BSN to communicate with one another, as long as they can measure the same underlying physiological signal. This avoids the use of pre-distributed keys and allows re-keying to be done easily. Such techniques require identifying signals and encoding methods that can be used in the scheme. Hence we first evaluate step interval as our physiological signal, using the existing modular encoding method and our proposed learned partitioning function as encoding methods. We show that both of these are usable with the scheme and identify a suitable parametrisation. We then go on to evaluate arterial blood pressure using our proposed learned mean FFT coefficients method. We demonstrate that, with the correct parameters, this could also be used in the scheme. This further improves the usability of PVS schemes by identifying two more signals that can be used, as well as two encoding methods that may also be useful for other signals.
Taylor P, Allpress S, Carr M, et al., 2018, Internet of Things: Realising the Potential of a Trusted Smart World, Internet of Things: Realising the Potential of a Trusted Smart World, London, Publisher: Royal Academy of Engineering: London
This report examines the policy challenges for the Internet of Things (IoT), and raises a broad range of issues that need to be considered if policy is to be effective and the potential economic value of IoT is harnessed. It builds on the Blackett review, The Internet of Things: making the most of the second digital revolution, adding detailed knowledge based on research from the PETRAS Cybersecurity of the Internet of Things Research Hub and input from Fellows of the Royal Academy of Engineering. The report targets government policymakers, regulators, standards bodies and national funding bodies, and will also be of interest to suppliers and adopters of IoT products and services.
Munoz Gonzalez L, Lupu E, 2018, The secret of machine learning, ITNOW, Vol: 60, Pages: 38-39, ISSN: 1746-5702
Luis Muñoz-González and Emil C. Lupu, from Imperial College London, explore the vulnerabilities of machine learning algorithms.
Illiano V, Lupu E, Muñoz-González L, et al., 2018, Determining Resilience Gains from Anomaly Detection for Event Integrity in Wireless Sensor Networks, ACM Transactions on Sensor Networks, Vol: 14, ISSN: 1550-4859
Measurements collected in a wireless sensor network (WSN) can be maliciously compromised through several attacks, but anomaly detection algorithms may provide resilience by detecting inconsistencies in the data. Anomaly detection can identify severe threats to WSN applications, provided that there is a sufficient amount of genuine information. This article presents a novel method to calculate an assurance measure for the network by estimating the maximum number of malicious measurements that can be tolerated. In previous work, the resilience of anomaly detection to malicious measurements has been tested only against arbitrary attacks, which are not necessarily sophisticated. The novel method presented here is based on an optimization algorithm, which maximizes the attack's chance of staying undetected while causing damage to the application, thus seeking the worst-case scenario for the anomaly detection algorithm. The algorithm is tested on a wildfire monitoring WSN to estimate the benefits of anomaly detection for the system's resilience. The algorithm also returns the measurements that the attacker needs to synthesize, which are studied to highlight the weak spots of anomaly detection. Finally, this article presents a novel methodology that takes as input the required degree of resilience and automatically designs a deployment that satisfies it.
Calo S, Lupu E, Bertino E, et al., 2018, Research Challenges in Dynamic Policy-Based Autonomous Security, IEEE International Conference on Big Data (IEEE Big Data), Publisher: IEEE, Pages: 2970-2973
Barrere Cambrun M, Vieira Steiner R, Mohsen R, et al., Tracking the bad guys: an efficient forensic methodology to trace multi-step attacks using core attack graphs, 13th International Conference on Network and Service Management (CNSM'17), Publisher: IEEE, ISSN: 2165-963X
In this paper, we describe an efficient methodology to guide investigators during network forensic analysis. To this end, we introduce the concept of core attack graph, a compact representation of the main routes an attacker can take towards specific network targets. Such compactness allows forensic investigators to focus their efforts on critical nodes that are more likely to be part of attack paths, thus reducing the overall number of nodes (devices, network privileges) that need to be examined. Nevertheless, core graphs also allow investigators to hierarchically explore the graph in order to retrieve different levels of summarised information. We have evaluated our approach over different network topologies varying parameters such as network size, density, and forensic evaluation threshold. Our results demonstrate that we can achieve the same level of accuracy provided by standard logical attack graphs while significantly reducing the exploration rate of the network.
Karafili E, Lupu E, Cullen A, et al., 2018, Improving data sharing in data rich environments, 1st IEEE Big Data International Workshop on Policy-based Autonomic Data Governance, IEEE BigData, Publisher: IEEE
The increasing use of big data comes along with the problem of ensuring correct and secure data access. There is a need to maximise data dissemination whilst controlling access to the data. Depending on the type of user, different parts and qualities of the data are shared. We introduce an alteration mechanism, more precisely a restriction mechanism, based on a policy analysis language. The alterations reflect the level of trust and the relations the users have, and are represented as policies inside the data sharing agreements. These agreements are attached to the data and are enforced every time the data are accessed, used or shared. We show the use of our alteration mechanism with a military use case, where different parties are involved during the missions and have different relations of trust and partnership.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.