60 results found
Hallewell M, Patel H, Salanitri D, et al., 2020, Play&Tune: user feedback in the development of a serious game for optimizing hearing aid orientation, Ergonomics in Design: The Quarterly of Human Factors Applications, ISSN: 1064-8046
Many hearing aid (HA) users are dissatisfied with HA performance in social situations. One way to improve HA outcomes is training the users to understand how HAs work. Play&Tune was designed to provide this training and to foster autonomy in hearing rehabilitation. We carried out two prototype evaluations and a prerelease evaluation of Play&Tune with 71 HA users, using an interview or online survey. Users gave detailed feedback on their experiences with the app. Most participants enjoyed learning about HAs and expressed a desire for autonomy over their HA settings. Our case study reinforces the importance of user feedback during app development.
Sethi S, Jones N, Fulcher B, et al., 2019, Combining machine learning and a universal acoustic feature-set yields efficient automated monitoring of ecosystems, Publisher: bioRxiv
Natural habitats are being impacted by human pressures at an alarming rate. Monitoring these ecosystem-level changes often requires labour-intensive surveys that are unable to detect rapid or unanticipated environmental changes. Here we developed a generalisable, data-driven solution to this challenge using eco-acoustic data. We exploited a convolutional neural network to embed ecosystem soundscapes from a wide variety of biomes into a common acoustic space. In both supervised and unsupervised modes, this allowed us to accurately quantify variation in habitat quality across space and in biodiversity through time. On the scale of seconds, we learned a typical soundscape model that allowed automatic identification of anomalous sounds in playback experiments, paving the way for real-time detection of irregular environmental behaviour including illegal activity. Our highly generalisable approach, and the common set of features, will enable scientists to unlock previously hidden insights from eco-acoustic data and offers promise as a backbone technology for global collaborative autonomous ecosystem monitoring efforts.
Steadman M, Kim C, Lestang J-H, et al., 2019, Short-term effects of sound localization training in virtual reality, Scientific Reports, Vol: 9, ISSN: 2045-2322
Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain’s ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated; one provided simple visual positional confirmation of sound source location, a second introduced game design elements (“gamification”) and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion (“active listening”). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.
Steadman MA, Kim C, Lestang J-H, et al., 2019, Short-term effects of sound localization training in virtual reality, Publisher: biorxiv
Vijayasingam A, Frost E, Wilkins J, et al., 2019, S140 Interim results from a prospective study of tablet and web-based audiometry to detect ototoxicity in adults with cystic fibrosis (vol 73, pg A87, 2018), THORAX, Vol: 74, Pages: 723-723, ISSN: 0040-6376
Comunità M, Gerino A, Lim V, et al., 2019, Web-based binaural audio and sonic narratives for cultural heritage, Conference on Immersive and Interactive Audio, Publisher: Audio Engineering Society
This paper introduces PlugSonic Soundscape and PlugSonic Sample, two web-based applications for the creation and experience of binaural interactive audio narratives and soundscapes. The apps are being developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation). The apps' audio processing is based on the Web Audio API and the 3D Tune-In Toolkit. Within the paper, we report on the implementation, evaluation and future developments. We believe that the idea of a web-based application for 3D sonic narratives represents a novel contribution to the cultural heritage, digital storytelling and 3D audio technology domains.
Picinali L, Hrafnkelsson R, Reyes-Lecuona A, 2019, The 3D tune-in toolkit VST binaural audio plugin, 2019 AES International Conference on Immersive and Interactive Audio
This demo paper introduces a novel VST binaural audio plugin based on the 3D Tune-In (3DTI) Toolkit, a multiplatform open-source C++ library which includes several functionalities for headphone-based sound spatialisation, together with generalised hearing aid and hearing loss simulators. The 3DTI Toolkit VST plugin integrates all the binaural spatialisation functionalities of the 3DTI Toolkit for one single audio source, which can be positioned and moved around the listener. The spatialisation is based on direct convolution with any user-imported Head Related Transfer Function (HRTF) set. Interaural Time Differences (ITDs) are customised in real-time according to the listener’s head circumference. Binaural reverberation is performed using a virtual-loudspeakers Ambisonic approach and convolution with user-imported Binaural Room Impulse Responses (BRIRs). Additional processes for near- and far-field sound source simulation are also included.
Cuevas-Rodríguez M, Picinali L, González-Toledo D, et al., 2019, 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation, PLoS ONE, Vol: 14, ISSN: 1932-6203
The 3D Tune-In Toolkit (3DTI Toolkit) is an open-source standard C++ library which includes a binaural spatialiser. This paper presents the technical details of this renderer, outlining its architecture and describing the processes implemented in each of its components. In order to put this description into context, the basic concepts behind binaural spatialisation are reviewed through a chronology of research milestones in the field in the last 40 years. The 3DTI Toolkit renders the anechoic signal path by convolving sound sources with Head Related Impulse Responses (HRIRs), obtained by interpolating those extracted from a set that can be loaded from any file in a standard audio format. Interaural time differences are managed separately, in order to be able to customise the rendering according to the head size of the listener, and to reduce comb-filtering when interpolating between different HRIRs. In addition, geometrical and frequency-dependent corrections for simulating near-field sources are included. Reverberation is computed separately using a virtual loudspeakers Ambisonic approach and convolution with Binaural Room Impulse Responses (BRIRs). In all these processes, special care has been taken to avoid audible artefacts produced by changes in gains and audio filters due to the movements of sources and of the listener. The 3DTI Toolkit performance, as well as some other relevant metrics such as non-linear distortion, are assessed and presented, followed by a comparison between the features offered by the 3DTI Toolkit and those found in other currently available open- and closed-source binaural renderers.
Stitt P, Picinali L, Katz BFG, 2019, Auditory accommodation to poorly matched non-individual spectral localization cues through active learning, Scientific Reports, Vol: 9, ISSN: 2045-2322
This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance carried out over 10 sessions. Sessions 1–4 occurred at 1 week intervals, performed by all subjects. During initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained their performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.
Engel Alonso-Martinez I, Goodman D, Picinali L, The Effect of Auditory Anchors on Sound Localization: A Preliminary Study, 2019 AES International Conference on Immersive and Interactive Audio
Comunità M, Picinali L, 2019, Estimating ear canal volume through electrical impedance measurements from in-ear headphones - Initial results
The acoustics of the external ear has been extensively studied in areas such as audiology and 3D audio. In other contexts, electrical impedance measurements have found application in the analysis, design and performance control of electro-acoustic transducers. This research investigates the links between ear canal acoustics and the electrical impedance of in-ear headphones. The primary goal was to examine the effect of the ear canal dimensions on impedance and pressure at the eardrum. Impedance and pressure were measured using ear canal simulators of different size and shape. Results show similarities between the two quantities, with a clear relation between the main resonant peak and volume. This paper gives insights into the potential use of electrical impedance to extract information about the external ear.
Scase M, Griffin E, Picinali L, 2019, Pre-Navigation via Interactive Audio Tactile Maps to Promote the Wellbeing of Visually Impaired People., Stud Health Technol Inform, Vol: 260, Pages: 170-177
BACKGROUND: Pre-navigational tools can assist visually impaired people when navigating unfamiliar environments. Assistive technology products (e.g., tactile maps or auditory simulations) can stimulate cognitive mapping processes to provide navigational assistance in these people. OBJECTIVES: We compared how well blind and visually impaired people could learn a map presented via a tablet computer auditory tactile map (ATM) in contrast to a conventional tactile map accompanied by a text description. METHODS: Performance was assessed with a multiple choice test that quizzed participants on orientation and spatial awareness. Semi-structured interviews explored participant experiences and preferences. RESULTS: A statistically significant difference was found between the conditions, with participants using the ATM performing much better than those who used a conventional tactile map and text description. Participants preferred the flexibility of learning of the ATM. CONCLUSION: This computer-based ATM provided an effective, easy to use and cost-effective way of enabling blind and partially sighted people to learn a cognitive map and enhance their wellbeing.
Cuevas-Rodriguez M, Gonzalez-Toledo D, La Rubia-Cuestas ED, et al., 2018, The 3D Tune-In Toolkit - 3D audio spatialiser, hearing loss and hearing aid simulations
The 3DTI Toolkit is a standard C++ library for audio spatialisation and simulation using loudspeakers or headphones developed within the 3D Tune-In (3DTI) project (http://www.3d-tune-in.eu), which aims at using 3D sound and simulating hearing loss and hearing aids within virtual environments and games. The Toolkit allows the design and rendering of highly realistic and immersive 3D audio, and the simulation of virtual hearing aid devices and of different typologies of hearing loss. The library includes a real-time 3D binaural audio renderer offering full 3D spatialization based on efficient Head Related Transfer Function (HRTF) convolution, including smooth interpolation among impulse responses, customization of listener head radius and specific simulation of far-distance and near-field effects. In addition, spatial reverberation is simulated in real time using a uniformly partitioned convolution with Binaural Room Impulse Responses (BRIRs) employing a virtual Ambisonic approach. The 3D Tune-In Toolkit also includes a loudspeaker-based spatialiser implemented using Ambisonic encoding/decoding. This poster presents a brief overview of the main features of the Toolkit, which is released open-source under GPL v3 license (the code is available in GitHub https://github.com/3DTune-In/3dti-AudioToolkit).
Vijayasingam A, Shah A, Simmonds NJ, et al., 2018, INTERIM RESULTS FROM A PROSPECTIVE STUDY OF TABLET AND WEB-BASED AUDIOMETRY TO DETECT OTOTOXICITY IN ADULTS WITH CYSTIC FIBROSIS, THORAX, Vol: 73, Pages: A87-A88, ISSN: 0040-6376
Sethi S, Ewers R, Jones N, et al., 2018, Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked device, Methods in Ecology and Evolution, Vol: 9, Pages: 2383-2387, ISSN: 2041-210X
1. Automated methods of monitoring ecosystems provide a cost-effective way to track changes in natural systems' dynamics across temporal and spatial scales. However, methods of recording and storing data captured from the field still require significant manual effort. 2. Here we introduce an open source, inexpensive, fully autonomous ecosystem monitoring unit for capturing and remotely transmitting continuous data streams from field sites over long time periods. We provide a modular software framework for deploying various sensors, together with implementations to demonstrate proof of concept for continuous audio monitoring and time-lapse photography. 3. We show how our system can outperform comparable technologies for fractions of the cost, provided a local mobile network link is available. The system is robust to unreliable network signals and has been shown to function in extreme environmental conditions, such as in the tropical rainforests of Sabah, Borneo. 4. We provide full details on how to assemble the hardware, and the open-source software. Paired with appropriate automated analysis techniques, this system could provide spatially dense, near real-time, continuous insights into ecosystem and biodiversity dynamics at a low cost.
Frangakis N, Lim V, Tanco LM, et al., 2018, PLUGGY: A pluggable social platform for cultural heritage awareness and participation, CEUR Workshop, Pages: 21-30, ISSN: 1613-0073
One of the preconditions for genuine sustainability is a heritage that is present anywhere and anytime in everyday life. We present PLUGGY, a Pluggable Social Platform for Heritage Awareness and Participation. PLUGGY will address the need of society to be actively involved in cultural heritage activities, not only as an observer but also as a creator and a major influencing factor. With PLUGGY, we aim to bridge this gap by providing the tools needed to allow users to share their local knowledge and everyday experience with others, together with the contribution of cultural institutions. Users will be able to build extensive networks around a common area of interest, connecting the past, the present and the future. It will be powered by its users, putting people's values, aspirations and needs first. Users of PLUGGY will be the providers of information about cultural heritage in the everyday and ordinary, real life. Through its social platform and by using its innovative curation tools, designed to solely focus on a niche area in social media, citizens will be able to act as skilled storytellers by creating fascinating personalised stories and sharing them through social networking with friends, associates and professionals. In this paper, we describe a structured formative and summative evaluation approach to PLUGGY's core concepts, the results of which will be used to inform and improve its design.
Kim C, Steadman M, Lestang JH, et al., 2018, A VR-based mobile platform for training to non-individualized binaural 3D audio, 144th Audio Engineering Society Convention 2018
Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). The individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, and instead lead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses HRTF sets for audio. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen to and navigate in an acoustically neutral scene and locate invisible target sound sources presented at random directions using non-individualized HRTFs in repetitive sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user’s localization accuracy, including visual indication of the target location, and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions, using non-individualized HRTF sets. The localization performance evaluations showed reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.
Lim V, Frangakis N, Molina Tanco L, et al., PLUGGY: A Pluggable Social Platform for Cultural Heritage Awareness and Participation, International Workshop on Analysis in Digital Cultural Heritage 2017
D'Cruz M, Patel H, Hallewell M, et al., 2017, Novel 3D games for people with and without hearing loss, 9th International Conference on Virtual Worlds and Games for Serious Applications, VS-Games 2017, Pages: 175-176
Over 90 million people in Europe currently suffer from hearing loss, and with an aging population this is expected to rise significantly. Digital hearing aids (HAs) offer real opportunities to enhance hearing capability in different acoustic contexts, however understanding their functionalities and calibration can seem overly complex. The 3D Tune-In project has developed a 3D toolkit including a sound spatialisation algorithm and hearing/hearing loss simulators as the basis of five novel digital games addressing challenges of hearing loss and hearing education for children and older adults. Early evaluations have demonstrated the opportunities for hearing impaired groups, as well as the digital games community.
Isaac Engel J, Picinali L, 2017, Long-term user adaptation to an audio augmented reality system, 24th International Congress on Sound and Vibration, 2017
Audio Augmented Reality (AAR) consists of extending a real auditory environment with virtual sound sources. This can be achieved using binaural earphones/microphones. The microphones, placed in the outer part of each earphone, record sounds from the user's environment, which are then mixed with virtual binaural audio, and the resulting signal is finally played back through the earphones. However, previous studies show that, with a system of this type, audio coming from the microphones (or hear-through audio) does not sound natural to the user. The goal of this study is to explore the capabilities of long-term user adaptation to an AAR system built with off-the-shelf components (a pair of binaural microphones/earphones and a smartphone), aiming to achieve perceived realism for the hear-through audio. To compensate for the acoustic effects of ear canal occlusion, the recorded signal is equalised in the smartphone. In-out latency was minimised to avoid distortion caused by the comb filtering effect. To evaluate the adaptation process of the users to the headset, two case studies were performed. The subjects wore an AAR headset for several days while performing daily tests to check the progress of the adaptation. Both quantitative and qualitative evaluations (i.e., localising real and virtual sound sources and analysing the perception of pre-recorded auditory scenes) were carried out, finding slight signs of adaptation, especially in the subjective tests. A demo will be available for the conference visitors, including also the integration of visual Augmented Reality functionalities.
Picinali L, Wallin A, Levtov Y, et al., 2017, Comparative perceptual evaluation between different methods for implementing Reverberation in a binaural context, AES 2017, Publisher: Audio Engineering Society
Reverberation has always been considered of primary importance in order to improve the realism, externalisation and immersiveness of binaurally spatialised sounds. Different techniques exist for implementing reverberation in a binaural context, each with a different level of computational complexity and spatial accuracy. A perceptual study has been performed to compare the realism and localization accuracy achieved using five different binaural reverberation techniques. These included multichannel Ambisonic-based, stereo and mono reverberation methods. A custom web-based application has been developed implementing the testing procedures, and allowing participants to take the test remotely. Initial results with 54 participants show that no major difference in terms of perceived level of realism and spatialisation accuracy could be found between four of the five proposed reverberation methods, suggesting that a high level of complexity in the reverberation process does not always correspond to improved perceptual attributes.
Mascetti S, Gerino A, Bernareggi C, et al., 2017, On the evaluation of novel sonification techniques for non visual shape exploration, ACM Transactions on Accessible Computing, Vol: 9, Pages: 13.1-13.28, ISSN: 1936-7228
There are several situations in which a person with visual impairment or blindness needs to extract information from an image. For example, graphical representations are often used in education, in particular in STEM subjects. In this contribution we propose a set of 6 sonification techniques to support individuals with visual impairment or blindness in recognizing shapes on touchscreen devices. These techniques are compared among themselves and with 2 other sonification techniques already proposed in the literature. Using Invisible Puzzle, a mobile application which allows non-supervised evaluation sessions to be conducted, we conducted tests with 49 subjects with visual impairment and blindness, and 178 sighted subjects. All subjects involved in the process successfully completed the evaluation session, showing a high level of engagement and demonstrating the effectiveness of the evaluation procedure. Results give interesting insights into the differences among the sonification techniques and, most importantly, show that after a short training subjects are able to successfully identify several different shapes.
Patel H, Cobb S, Hallewell M, et al., 2016, User involvement in design and application of virtual reality gamification to facilitate the use of hearing aids, Pages: 77-81
The 3D Tune-In project aims to create an innovative toolkit based on 3D sound, visuals and gamification techniques to facilitate different target audiences in understanding and using the varied settings of their hearing aid to attain optimum performance in different social contexts. In the early stages of project development, hearing aid (HA) users participated in activities to identify user requirements regarding the difficulties and issues they face in everyday situations due to their hearing loss. The findings from questionnaire and interview studies and identification of current personas and scenarios of use indicate that the project can clearly and distinctly satisfy the requirements of people with hearing loss as well as improve the general public's understanding of hearing loss. Five Future Scenarios of use have been derived to describe how the technologies and games to be developed by the 3D Tune-In project will address these requirements.
Mascetti S, Rossetti C, Gerino A, et al., 2016, Towards a Natural User Interface to Support People with Visual Impairments in Detecting Colors, 15th International Conference, ICCHP 2016, Publisher: Springer, Pages: 171-178, ISSN: 0302-9743
A mobile application that detects an item’s color is potentially very useful for visually impaired people. However, users could run into difficulties when centering the target item in the mobile device camera field of view. To address this problem, in this contribution we propose a mobile application that detects the color of the item pointed by the user with one finger. In its current version, the application requires the user to wear a marker on the finger used for pointing. A preliminary evaluation conducted with blind users confirms the usefulness of the application, and encourages further development.
Eastgate R, Picinali L, Patel H, et al., 2016, 3D Games for Tuning and Learning about Hearing Aids, Hearing Journal, Vol: 69, Pages: 30-32, ISSN: 0745-7472
Levtov Y, Picinali L, D'Cruz M, et al., 140th Audio Engineering Society Convention
Mascetti S, Picinali L, Gerino A, et al., 2016, Sonification of guidance data during road crossing for people with visual impairments or blindness, International Journal of Human Computer Studies, Vol: 85, Pages: 16-26, ISSN: 1071-5819
In recent years, several solutions have been proposed to support people with visual impairments or blindness during road crossing. These solutions focus on computer vision techniques for recognizing pedestrian crosswalks and computing their relative position from the user. Instead, this contribution addresses a different problem: the design of an auditory interface that can effectively guide the user during road crossing. Two original auditory guiding modes based on data sonification are presented and compared with a guiding mode based on speech messages. Experimental evaluation shows that there is no guiding mode that is best suited for all test subjects. The average time to align and cross is not significantly different among the three guiding modes, and test subjects distribute their preferences for the best guiding mode almost uniformly among the three solutions. From the experiments it also emerges that higher effort is necessary for decoding the sonified instructions if compared to the speech instructions, and that test subjects require frequent 'hints' (in the form of speech messages). Despite this, more than 2/3 of test subjects prefer one of the two guiding modes based on sonification. There are two main reasons for this: firstly, with speech messages it is harder to hear the sound of the environment, and secondly, sonified messages convey information about the "quantity" of the expected movement.
Iliya S, Menzies D, Neri F, et al., 2015, Robust impaired speech segmentation using neural network mixture model, ISSPIT 2014, Pages: 000444-000449
This paper presents a signal processing technique for segmenting short speech utterances into unvoiced and voiced sections and identifying points where the spectrum becomes steady. The segmentation process is part of a system for deriving musculoskeletal articulation data from disordered utterances, in order to provide training feedback for people with speech articulation problems. The approach implements a novel segmentation scheme using an artificial neural network mixture model (ANNMM) for identifying and capturing the various sections of disordered (impaired) speech signals. This paper also identifies some salient features that distinguish normal speech from impaired speech of the same utterances. This research aims at developing an artificial speech therapist capable of providing reliable text and audiovisual feedback progress reports to the patient.
O'Sullivan L, Picinali L, Gerino A, et al., 2015, A Prototype Audio-Tactile Map System with an Advanced Auditory Display, INTERNATIONAL JOURNAL OF MOBILE HUMAN COMPUTER INTERACTION, Vol: 7, Pages: 53-75, ISSN: 1942-390X
Picinali L, D'Cruz M, Simeone L, 3D-Tune-In: 3D sound, visuals and gamification to facilitate the use of hearing aids, EuroVR Conference 2015
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.