Soh H, Demiris Y, 2013, When and how to help: An iterative probabilistic model for learning assistance by demonstration, International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 3230-3236, ISSN: 2153-0858
Crafting a proper assistance policy is a difficult endeavour but essential for the development of robotic assistants. Indeed, assistance is a complex issue that depends not only on the task at hand, but also on the state of the user, the environment and competing objectives. As a way forward, this paper proposes learning the task of assistance through observation; an approach we term Learning Assistance by Demonstration (LAD). Our methodology is a subclass of Learning-by-Demonstration (LbD), yet directly addresses difficult issues associated with proper assistance, such as when and how to appropriately assist. To learn assistive policies, we develop a probabilistic model that explicitly captures these elements, and provide efficient, online training methods. Experimental results on smart mobility assistance — using both simulation and a real-world smart wheelchair platform — demonstrate the effectiveness of our approach; the LAD model quickly learns when to assist (achieving an AUC score of 0.95 after only one demonstration) and improves with additional examples. Results show that this translates into better task performance; our LAD-enabled smart wheelchair improved participant driving performance (measured in lap seconds) by 20.6s (a speedup of 137%) after a single teacher demonstration.
Korkinof D, Demiris Y, 2013, Online Quantum Mixture Regression for Trajectory Learning by Demonstration, IROS 2013, Publisher: IEEE, Pages: 3222-3229
In this work, we present the online Quantum Mixture Model (oQMM), which combines the merits of quantum mechanics and stochastic optimization. More specifically, it allows for quantum effects on the mixture states, which in turn become a superposition of conventional mixture states. We propose an efficient stochastic online learning algorithm based on online Expectation Maximization (EM), as well as a generation and decay scheme for model components. Our method is suitable for complex robotic applications where data is abundant, or where we wish to iteratively refine our model and conduct predictions during the course of learning. With a synthetic example, we show that the algorithm can achieve higher numerical stability. We also empirically demonstrate the efficacy of our method on well-known regression benchmark datasets. In a trajectory Learning by Demonstration setting, we employ a multi-shot learning application in joint-angle space, where we observe a higher quality of learning and reproduction. We compare against popular and well-established methods, widely adopted across the robotics community.
Ros R, Demiris Y, 2013, Creative Dance: An Approach for Social Interaction between Robots and Children, 4th International Workshop on Human Behavior Understanding (HBU), Publisher: Springer, Pages: 40-51, ISSN: 0302-9743
In this paper we discuss the potential of using a robot dance tutor with children in the context of creative dance to study child-robot interaction over several encounters. We took part in dance sessions in order to extract the strategies and models that inspire and justify the design of a robot dance tutor. Moreover, we present implementation details and preliminary results from a pilot study, gathering initial feedback to further improve the system before testing it with a broader population of children.
Ognibene D, Chinellato E, Sarabia M, et al., 2013, Contextual action recognition and target localization with an active allocation of attention on a humanoid robot, Bioinspiration & Biomimetics, Vol: 8
Exploratory gaze movements are fundamental for gathering the most relevant information regarding the partner during social interactions. Inspired by the cognitive mechanisms underlying human social behaviour, we have designed and implemented a system for a dynamic attention allocation which is able to actively control gaze movements during a visual action recognition task exploiting its own action execution predictions. Our humanoid robot is able, during the observation of a partner's reaching movement, to contextually estimate the goal position of the partner's hand and the location in space of the candidate targets. This is done while actively gazing around the environment, with the purpose of optimizing the gathering of information relevant for the task. Experimental results on a simulated environment show that active gaze control, based on the internal simulation of actions, provides a relevant advantage with respect to other action perception approaches, both in terms of estimation precision and of time required to recognize an action. Moreover, our model reproduces and extends some experimental results on human attention during an action perception.
Ognibene D, Demiris Y, 2013, Towards Active Event Recognition, International Joint Conference on Artificial Intelligence (IJCAI), Publisher: AIII Press, Pages: 2495-2501
Directing robot attention to recognise activities and to anticipate events like goal-directed actions is a crucial skill for human-robot interaction. Unfortunately, issues like intrinsic time constraints, the spatially distributed nature of the entailed information sources, and the existence of a multitude of unobservable states affecting the system, like latent intentions, have long rendered achievement of such skills a rather elusive goal. The problem tests the limits of current attention control systems. It requires an integrated solution for tracking, exploration and recognition, which traditionally have been seen as separate problems in active vision. We propose a probabilistic generative framework based on a mixture of Kalman filters and information gain maximisation that uses predictions in both recognition and attention control. This framework can efficiently use the observations of one element in a dynamic environment to provide information on other elements, and consequently enables guided exploration. Interestingly, the sensor-control policy, derived directly from first principles, represents the intuitive trade-off between finding the most discriminative clues and maintaining overall awareness. Experiments on a simulated humanoid robot observing a human executing goal-oriented actions demonstrated improvements in recognition time and precision over baseline systems.
Chatzis S, Demiris Y, 2013, The Infinite-Order Conditional Random Field Model for Sequential Data Modeling, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol: 35, Pages: 1523-1534, ISSN: 0162-8828
Sequential data labeling is a fundamental task in machine learning applications, with speech and natural language processing, activity recognition in video sequences, and biomedical data analysis being characteristic examples, to name just a few. The conditional random field (CRF), a log-linear model representing the conditional distribution of the observation labels, is one of the most successful approaches for sequential data labeling and classification, and has lately received significant attention in machine learning as it achieves superb prediction performance in a variety of scenarios. Nevertheless, existing CRF formulations can capture only one- or few-timestep interactions and neglect higher-order dependencies, which are potentially useful in many real-life sequential data modeling applications. To resolve these issues, in this paper we introduce a novel CRF formulation, based on the postulation of an energy function which entails infinitely long time dependencies between the modeled data. Building blocks of our novel approach are: 1) the sequence memoizer (SM), a recently proposed nonparametric Bayesian approach for modeling label sequences with infinitely long time dependencies, and 2) a mean-field-like approximation of the model marginal likelihood, which allows for the derivation of computationally efficient inference algorithms for our model. The efficacy of the so-obtained infinite-order CRF model is experimentally demonstrated.
Petit M, Lallée S, Boucher J-D, et al., 2013, The Coordinating Role of Language in Real-Time Multi-Modal Learning of Cooperative Tasks, IEEE Transactions on Autonomous Mental Development, Vol: 5, Pages: 3-17, ISSN: 1943-0604
One of the defining characteristics of human cognition is our outstanding capacity to cooperate. A central requirement for cooperation is the ability to establish a “shared plan”-which defines the interlaced actions of the two cooperating agents-in real time, and even to negotiate this shared plan during its execution. In the current research we identify the requirements for cooperation, extending our earlier work in this area. These requirements include the ability to negotiate a shared plan using spoken language, to learn new component actions within that plan, based on visual observation and kinesthetic demonstration, and finally to coordinate all of these functions in real time. We present a cognitive system that implements these requirements, and demonstrate the system's ability to allow a Nao humanoid robot to learn a nontrivial cooperative task in real-time. We further provide a concrete demonstration of how the real-time learning capability can be easily deployed on a different platform, in this case the iCub humanoid. The results are considered in the context of how the development of language in the human infant provides a powerful lever in the development of cooperative plans from lower-level sensorimotor capabilities.
Belpaeme T, Baxter PE, Read R, et al., 2013, Multimodal Child-Robot Interaction: Building Social Bonds, Journal of Human-Robot Interaction, Vol: 1, Pages: 33-53
For robots to interact effectively with human users they must be capable of coordinated, timely behavior in response to social context. The Adaptive Strategies for Sustainable Long-Term Social Interaction (ALIZ-E) project focuses on the design of long-term, adaptive social interaction between robots and child users in real-world settings. In this paper, we report on the iterative approach taken to scientific and technical developments toward this goal: advancing individual technical competencies and integrating them to form an autonomous robotic system for evaluation “in the wild.” The first evaluation iterations have shown the potential of this methodology in terms of adaptation of the robot to the interactant and the resulting influences on engagement. This sets the foundation for an ongoing research program that seeks to develop technologies for social robot companions.
Su Y, Wu Y, Soh H, et al., 2013, Enhanced Kinematic Model for Dexterous Manipulation with an Underactuated Hand, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 2493-2499, ISSN: 2153-0858
Sarabia M, Le Mau T, Soh H, et al., 2013, iCharibot : Design and Field Trials of a Fundraising Robot, International Conference on Social Robotics (ICSR 2013), Publisher: Springer, Pages: 412-421
In this work, we address the problem of increasing charitable donations through a novel, engaging fundraising robot: the Imperial Charity Robot (iCharibot). To better understand how to engage passers-by, we conducted a field trial in outdoor locations at a busy area in London, spread across 9 sessions of 40 minutes each. During our experiments, iCharibot attracted 679 people and engaged with 386 individuals. Our results show that interactivity led to longer user engagement with the robot. Our data further suggests both saliency and interactivity led to an increase in the total donation amount. These findings should prove useful for future design of robotic fundraisers in particular and for social robots in general.
Sarabia M, Demiris Y, 2013, A Humanoid Robot Companion for Wheelchair Users, International Conference on Social Robotics (ICSR), Publisher: Springer, Pages: 432-441
In this paper we integrate a humanoid robot with a powered wheelchair with the aim of lowering the cognitive requirements needed for powered mobility. We propose two roles for this companion: pointing out obstacles and giving directions. We show that children enjoyed driving with the humanoid companion by their side during a field-trial in an uncontrolled environment. Moreover, we present the results of a driving experiment for adults where the companion acted as a driving aid and conclude that participants preferred the humanoid companion to a simulated companion. Our results suggest that people will welcome a humanoid companion for their wheelchairs.
Chinellato E, Ognibene D, Sartori L, et al., 2013, Time to Change: Deciding When to Switch Action Plans during a Social Interaction, Publisher: Springer Berlin Heidelberg, Pages: 47-58
Ognibene D, Wu Y, Lee K, et al., 2013, Hierarchies for embodied action perception, Computational and Robotic Models of the Hierarchical Organization of Behavior, Editors: Baldassarre, Mirolli, Publisher: Springer, Pages: 81-98
During social interactions, humans are capable of initiating and responding to rich and complex social actions despite having incomplete world knowledge, and physical, perceptual and computational constraints. This capability relies on action perception mechanisms that exploit regularities in observed goal-oriented behaviours to generate robust predictions and reduce the workload of sensing systems. To achieve this essential capability, we argue that the following three factors are fundamental. First, human knowledge is frequently hierarchically structured, both in the perceptual and execution domains. Second, human perception is an active process driven by current task requirements and context; this is particularly important when the perceptual input is complex (e.g. human motion) and the agent has to operate under embodiment constraints. Third, learning is at the heart of action perception mechanisms, underlying the agent’s ability to add new behaviours to its repertoire. Based on these factors, we review multiple instantiations of a hierarchically-organised biologically-inspired framework for embodied action perception, demonstrating its flexibility in addressing the rich computational contexts of action perception and learning in robotic platforms.
Su Y, Wu Y, Lee K, et al., 2012, Robust Grasping for an Under-actuated Anthropomorphic Hand under Object Position Uncertainty, Osaka, Japan, International Conference on Humanoid Robots (Humanoids), Publisher: IEEE, Pages: 719-725
This paper presents a grasp execution strategy for grasping an object in one trial when there is uncertainty in the object position. The strategy is based on three grasping components: 1) robust grasp trajectory planning, which can cope with a reasonable amount of initial object position error, 2) sensor-based grasp adaptation, and 3) the compliant characteristics of the under-actuated mechanism. This strategy is implemented and tested on the iCub humanoid robot. Two experiments and a demo of the iCub robot playing the Towers of Hanoi game are carried out to verify our system. The results demonstrate that the iCub, using this approach, can successfully grasp objects under a certain amount of position error with its under-actuated anthropomorphic hand.
Chatzis SP, Demiris Y, 2012, Nonparametric mixtures of Gaussian processes with power-law behavior, IEEE Transactions on Neural Networks, Vol: 23, Pages: 1862-1871, ISSN: 2162-237X
Gaussian processes (GPs) constitute one of the most important Bayesian machine learning approaches, based on a particularly effective method for placing a prior distribution over the space of regression functions. Several researchers have considered postulating mixtures of Gaussian processes as a means of dealing with non-stationary covariance functions, discontinuities, multi-modality, and overlapping output signals. In existing works, mixtures of Gaussian processes are based on the introduction of a gating function defined over the space of model input variables. This way, each postulated mixture component Gaussian process is effectively restricted in a limited subset of the input space. In this work, we follow a different approach: We consider a fully generative nonparametric Bayesian model with power-law behavior, generating Gaussian processes over the whole input space of the learned task. We provide an efficient algorithm for model inference, based on the variational Bayesian framework, and exhibit its efficacy using benchmark and real-world datasets.
Lee K, Kim TK, Demiris Y, 2012, Learning Action Symbols for Hierarchical Grammar Induction, Tsukuba, Japan, International Conference on Pattern Recognition (ICPR), Publisher: IEEE, Pages: 3778-3782
We present an unsupervised method of learning action symbols from video data, which self-tunes the number of symbols to effectively build hierarchical activity grammars. A video stream is given as a sequence of unlabeled segments. Similar segments are incrementally grouped to form a hierarchical tree structure. The tree is cut into clusters, where each cluster is used to train an action symbol. Our goal is to find a good set of clusters, i.e. symbols, such that regularities are best captured in the learned representation, i.e. the induced grammar. Our method is two-fold: 1) create a candidate set of symbols from initial clusters, and 2) build an activity grammar and measure model complexity and likelihood to assess the quality of the candidate set of symbols. We propose a balanced model comparison method which avoids the problem commonly found in model complexity computations, where one measurement term dominates the other. Our experiments on the Towers of Hanoi and human dancing videos show that our method can discover the optimal number of action symbols effectively.
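The core idea of the abstract above — cutting a segment-similarity tree at the level where a balanced fit-versus-complexity score peaks — can be sketched as follows. This is a hypothetical toy illustration, not the paper's actual algorithm: the data, the `score_cut` function and its weighting are invented stand-ins for the paper's grammar-based likelihood and complexity terms.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical sketch: group similar motion segments into a tree, then
# evaluate several cuts of that tree as candidate action-symbol sets.
rng = np.random.default_rng(2)
segments = np.vstack([rng.normal(c, 0.3, (20, 4)) for c in (0.0, 2.0, 4.0)])

tree = linkage(segments, method="ward")

def score_cut(labels):
    """Toy balanced score: within-cluster fit minus a symbol-count penalty,
    scaled so neither term dominates (the paper's balancing idea)."""
    k = labels.max()
    fit = -sum(np.var(segments[labels == c + 1]) for c in range(k))
    return fit / k - 0.1 * k

best_k = max(range(2, 8), key=lambda k: score_cut(fcluster(tree, k, "maxclust")))
print(best_k)  # the three generating clusters should win
```

An unbalanced score (e.g. raw likelihood minus raw parameter count) would instead drift towards one extreme of `k`, which is the failure mode the paper's comparison method is designed to avoid.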
Chatzis SP, Demiris Y, 2012, A Reservoir-Driven Non-Stationary Hidden Markov Model, Pattern Recognition, Vol: 45, Pages: 3985-3996
In this work, we propose a novel approach towards sequential data modeling that leverages the strengths of hidden Markov models and echo-state networks (ESNs) in the context of non-parametric Bayesian inference approaches. We introduce a non-stationary hidden Markov model, the time-dependent state transition probabilities of which are driven by a high-dimensional signal that encodes the whole history of the modeled observations, namely the state vector of a postulated observations-driven ESN reservoir. We derive an efficient inference algorithm for our model under the variational Bayesian paradigm, and we examine the efficacy of our approach considering a number of sequential data modeling applications.
Soh H, Demiris Y, 2012, Towards Early Mobility Independence: An Intelligent Paediatric Wheelchair with Case Studies, IROS Workshop on Progress, Challenges and Future Perspectives in Navigation and Manipulation Assistance for Robotic Wheelchairs
Standard powered wheelchairs are still heavily dependent on the cognitive capabilities of users. Unfortunately, this excludes disabled users who lack the required problem-solving and spatial skills, particularly young children. For these children to be denied powered mobility is a crucial set-back; exploration is important for their cognitive, emotional and psychosocial development. In this paper, we present a safer paediatric wheelchair: the Assistive Robot Transport for Youngsters (ARTY). The fundamental goal of this research is to provide a key-enabling technology to young children who would otherwise be unable to navigate independently in their environment. In addition to the technical details of our smart wheelchair, we present user-trials with able-bodied individuals as well as one 5-year-old child with special needs. ARTY promises to provide young children with "early access" to the path towards mobility independence.
Soh H, Su Y, Demiris Y, 2012, Online Spatio-Temporal Gaussian Process Experts with Application to Tactile Classification, International Conference on Intelligent Robots and Systems, IROS, Publisher: IEEE, Pages: 4489-4496, ISSN: 2153-0858
In this work, we are primarily concerned with robotic systems that learn online and continuously from multi-variate data-streams. Our first contribution is a new recursive kernel, which we have integrated into a sparse Gaussian Process to yield the Spatio-Temporal Online Recursive Kernel Gaussian Process (STORK-GP). This algorithm iteratively learns from time-series, providing both predictions and uncertainty estimates. Experiments on benchmarks demonstrate that our method achieves high accuracies relative to state-of-the-art methods. Second, we contribute an online tactile classifier which uses an array of STORK-GP experts. In contrast to existing work, our classifier is capable of learning new objects as they are presented, improving itself over time. We show that our approach yields results comparable to highly-optimised offline classification methods. Moreover, we conducted experiments with human subjects in a similar online setting with true-label feedback and present the insights gained.
Chatzis SP, Demiris Y, 2012, The echo state conditional random field model for sequential data modeling, Expert Systems With Applications, Vol: 39, Pages: 10303-10309, ISSN: 0957-4174
Sequential data labeling is a fundamental task in machine learning applications, with speech and natural language processing, activity recognition in video sequences, and biomedical data analysis being characteristic examples, to name just a few. The conditional random field (CRF), a log-linear model representing the conditional distribution of the observation labels, is one of the most successful approaches for sequential data labeling and classification, and has lately received significant attention in machine learning, as it achieves superb prediction performance in a variety of scenarios. Nevertheless, existing CRF formulations do not account for temporal dependencies between the observed variables – they only postulate Markovian interdependencies between the predicted label variables. To resolve these issues, in this paper we propose a non-linear hierarchical CRF formulation that combines the power of echo state networks to extract high-level temporal features with the graphical framework of CRF models, yielding a powerful and scalable probabilistic model that we apply to signal labeling tasks.
Chatzis SP, Korkinof D, Demiris Y, 2012, A Spatially-Constrained Normalized Gamma Process for Data Clustering, International Conference on Artificial Intelligence Applications and Innovations, AIA 2012, Publisher: Springer, Pages: 337-346
In this work, we propose a novel nonparametric Bayesian method for clustering of data with spatial interdependencies. Specifically, we devise a novel normalized Gamma process, regulated by a simplified (pointwise) Markov random field (Gibbsian) distribution with a countably infinite number of states. As a result of its construction, the proposed model allows for introducing spatial dependencies in the clustering mechanics of the normalized Gamma process, thus yielding a novel nonparametric Bayesian method for spatial data clustering. We derive an efficient truncated variational Bayesian algorithm for model inference. We examine the efficacy of our approach by considering an image segmentation application using a real-world dataset. We show that our approach outperforms related methods from the field of Bayesian nonparametrics, including the infinite hidden Markov random field model, and the Dirichlet process prior.
Ognibene D, Chinellato E, Sarabia M, et al., 2012, Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention, Living Machines, Publisher: Springer, Pages: 192-203, ISSN: 0302-9743
Exploratory gaze movements are fundamental for gathering the most relevant information regarding the partner during social interactions. We have designed and implemented a system for dynamic attention allocation which is able to actively control gaze movements during a visual action recognition task. During the observation of a partner’s reaching movement, the robot is able to contextually estimate the goal position of the partner’s hand and the location in space of the candidate targets, while moving its gaze around with the purpose of optimizing the gathering of information relevant for the task. Experimental results on a simulated environment show that active gaze control provides a relevant advantage with respect to typical passive observation, both in terms of estimation precision and of the time required for action recognition.
Chatzis SP, Demiris Y, 2012, A Sparse Nonparametric Hierarchical Bayesian Approach Towards Inductive Transfer for Preference Modeling, Expert Systems with Applications, Vol: 39, Pages: 7235-7246
In this paper, we present a novel methodology for preference learning based on the concept of inductive transfer. Specifically, we introduce a nonparametric hierarchical Bayesian multitask learning approach, based on the notion that human subjects may cluster together forming groups of individuals with similar preference rationale (but not identical preferences). Our approach is facilitated by the utilization of a Dirichlet process prior, which allows for the automatic inference of the most appropriate number of subject groups (clusters), as well as the employment of the automatic relevance determination (ARD) mechanism, giving rise to a sparse nature for our model, which significantly enhances its computational efficiency. We explore the efficacy of our novel approach by applying it to both a synthetic experiment and a real-world music recommendation application. As we show, our approach offers a significant enhancement in the effectiveness of knowledge transfer in statistical preference learning applications, being capable of correctly inferring the actual number of human subject groups in a modeled dataset, and limiting knowledge transfer only to subjects belonging to the same group (wherein knowledge transferability is more likely).
Chatzis SP, Korkinof D, Demiris Y, 2012, A nonparametric Bayesian approach toward robot learning by demonstration, Robotics and Autonomous Systems, Vol: 60, Pages: 789-802, ISSN: 0921-8890
In the past years, many authors have considered application of machine learning methodologies to effect robot learning by demonstration. Gaussian mixture regression (GMR) is one of the most successful methodologies used for this purpose. A major limitation of GMR models concerns automatic selection of the proper number of model states, i.e., the number of model component densities. Existing methods, including likelihood- or entropy-based criteria, usually tend to yield noisy model size estimates while imposing heavy computational requirements. Recently, Dirichlet process (infinite) mixture models have emerged as a cornerstone of nonparametric Bayesian statistics and as promising candidates for clustering applications where the number of clusters is unknown a priori. Under this motivation, to resolve the aforementioned issues of GMR-based methods for robot learning by demonstration, in this paper we introduce a nonparametric Bayesian formulation for the GMR model, the Dirichlet process GMR model. We derive an efficient variational Bayesian inference algorithm for the proposed model, and we experimentally investigate its efficacy as a robot learning by demonstration methodology, considering a number of demanding robot learning by demonstration scenarios.
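The GMR machinery the abstract builds on — fit a mixture over the joint (input, output) space, then condition on the input to regress the output — can be sketched with off-the-shelf tools. This is only a rough stand-in: scikit-learn's `BayesianGaussianMixture` provides a truncated variational Dirichlet-process mixture, not the paper's own inference algorithm, and the 1-D sine data is invented for illustration.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Toy 1-D demonstration data: y = sin(x) plus noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 2 * np.pi, 500)
y = np.sin(x) + 0.05 * rng.standard_normal(500)
joint = np.column_stack([x, y])

# Dirichlet-process mixture over the joint (input, output) space;
# n_components is only a truncation level, not a fixed model size.
gmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(joint)

def gmr_predict(x_query):
    """GMR step: condition each joint Gaussian on the input and blend
    the per-component conditional means by their responsibilities."""
    num, den = 0.0, 0.0
    for w, mean, cov in zip(gmm.weights_, gmm.means_, gmm.covariances_):
        mx, my = mean
        sxx, sxy = cov[0, 0], cov[0, 1]
        rk = w * np.exp(-0.5 * (x_query - mx) ** 2 / sxx) / np.sqrt(sxx)
        num += rk * (my + sxy / sxx * (x_query - mx))
        den += rk
    return num / den

print(gmr_predict(np.pi / 2))  # should be close to sin(pi/2) = 1
```

The point of the nonparametric prior is visible here: components whose mixture weights collapse towards zero simply stop contributing to the conditional mean, so the effective number of states is inferred rather than hand-picked.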
Carlson T, Demiris Y, 2012, Collaborative Control of a Robotic Wheelchair: Evaluation of Performance, Attention and Workload, IEEE Transactions on Systems, Man and Cybernetics, Part B: Cybernetics, Vol: 42, Pages: 876-888, ISSN: 1083-4419
Powered wheelchair users often struggle to drive safely and effectively and, in more critical cases, can only get around when accompanied by an assistant. To address these issues, we propose a collaborative control mechanism that assists users as and when they require help. The system uses a multiple-hypothesis method to predict the driver's intentions and, if necessary, adjusts the control signals to achieve the desired goal safely. The main emphasis of this paper is on a comprehensive evaluation, where we not only look at the system performance but also, perhaps more importantly, characterize the user performance in an experiment that combines eye tracking with a secondary task. Without assistance, participants experienced multiple collisions while driving around the predefined route. Conversely, when they were assisted by the collaborative controller, not only did they drive more safely but also they were able to pay less attention to their driving, resulting in a reduced cognitive workload. We discuss the importance of these results and their implications for other applications of shared control, such as brain-machine interfaces, where it could be used to compensate for both the low frequency and the low resolution of the user input.
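The multiple-hypothesis intention prediction described above can be illustrated, in heavily simplified form, as a Bayes filter over candidate goals: each steering observation reweights the goals by how well they explain it. Everything concrete below (the goal positions, the von Mises-style heading likelihood, the `kappa` parameter) is an assumption for illustration, not the paper's actual predictor.

```python
import numpy as np

# Candidate goal locations the driver might be heading towards.
goals = np.array([[5.0, 0.0], [0.0, 5.0], [-5.0, 0.0]])
belief = np.full(len(goals), 1.0 / len(goals))  # uniform prior

def update_belief(belief, position, heading, kappa=4.0):
    """Bayes update: steering towards a goal raises its probability."""
    bearings = np.arctan2(goals[:, 1] - position[1],
                          goals[:, 0] - position[0])
    # von Mises-style likelihood of the observed heading under each goal
    lik = np.exp(kappa * np.cos(heading - bearings))
    posterior = belief * lik
    return posterior / posterior.sum()

# The driver repeatedly steers roughly east, towards the first goal.
pos = np.zeros(2)
for _ in range(5):
    belief = update_belief(belief, pos, heading=0.1)

print(belief.argmax())  # the eastern goal dominates the belief
```

In a collaborative controller, a belief like this is what decides when assistance kicks in and which goal the adjusted control signals should steer towards.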
Lee K, Kim TK, Demiris Y, 2012, Learning Reusable Task Components using Hierarchical Activity Grammars with Uncertainties, St. Paul, Minnesota, USA, Publisher: IEEE, Pages: 1994-1999
We present a novel learning method using activity grammars capable of learning reusable task components from a reasonably small number of samples under noisy conditions. Our linguistic approach aims to extract the hierarchical structure of activities, which can be recursively applied to help recognize unforeseen, more complicated tasks that share the same underlying structures. To achieve this goal, our method 1) actively searches for frequently occurring action symbols that are subsets of the input samples to effectively discover the hierarchy, and 2) explicitly takes into account the uncertainty values associated with input symbols due to the noise inherent in low-level detectors. In addition to experimenting with a synthetic dataset to systematically analyze the algorithm's performance, we apply our method in a human-led imitation learning environment where a robot learns reusable components of the task from short demonstrations to correctly imitate more complicated, longer demonstrations of the same task category. The results suggest that, under a reasonable amount of noise, our method is capable of capturing the reusable structures of tasks and generalizing to cope with recursions.
Ognibene D, Demiris Y, 2012, Attentional shifts during action perception, Publisher: PION LTD, Pages: 1272-1272, ISSN: 0301-0066
Soh H, Demiris Y, 2012, Iterative Temporal Learning and Prediction with the Sparse Online Echo State Gaussian Process, International Joint Conference on Neural Networks, IJCNN, Publisher: IEEE, Pages: 1-8, ISSN: 2161-4393
In this work, we contribute the online echo state Gaussian process (OESGP), a novel Bayesian-based online method that is capable of iteratively learning complex temporal dynamics and producing predictive distributions (instead of point predictions). Our method can be seen as a combination of the echo state network with a sparse approximation of Gaussian processes (GPs). Extensive experiments on the one-step prediction task on well-known benchmark problems show that OESGP produced statistically superior results to current online ESNs and state-of-the-art regression methods. In addition, we characterise the benefits (and drawbacks) associated with the considered online methods, specifically with regard to the trade-off between computational cost and accuracy. For a high-dimensional action recognition task, we demonstrate that OESGP produces high accuracies comparable to a recently published graphical model, while being fast enough for real-time interactive scenarios.
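The echo state network half of the combination above is simple enough to sketch. The snippet below is a minimal illustrative ESN with a batch ridge-regression readout — the actual OESGP replaces that readout with a sparse online GP, and all the sizes and scalings here (reservoir size, spectral radius 0.9, the sine task) are arbitrary choices for the demo.

```python
import numpy as np

# Minimal echo state network sketch (illustrative only).
rng = np.random.default_rng(1)
n_res, n_in = 100, 1
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.standard_normal((n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # spectral radius < 1

def run_reservoir(u_seq):
    """Drive the fixed random reservoir and collect its state per step."""
    x = np.zeros(n_res)
    states = []
    for u in u_seq:
        x = np.tanh(W @ x + W_in @ np.atleast_1d(u))
        states.append(x.copy())
    return np.array(states)

# One-step-ahead prediction of a sine wave via a ridge-regression readout.
t = np.arange(400) * 0.1
u, target = np.sin(t[:-1]), np.sin(t[1:])
S = run_reservoir(u)[50:]          # drop the initial transient
y = target[50:]
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_res), S.T @ y)
print(np.abs(S[-50:] @ W_out - y[-50:]).max())  # small one-step error
```

Only the readout is trained; the reservoir weights stay fixed, which is what makes an online, iteratively updated readout (linear here, a sparse GP in OESGP) practical.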
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.