Imperial College London

Dr Lorenzo Picinali

Faculty of Engineering, Dyson School of Design Engineering

Reader in Audio Experience Design
 
 
 

Contact

 

+44 (0)20 7594 8158
l.picinali

 
 

Location

 

Level 1 staff office, Dyson Building, South Kensington Campus


 

Publications


75 results found

Lim V, Khan S, Picinali L, 2021, Towards a more accessible cultural heritage: challenges and opportunities in contextualization using 3D sound narratives, Applied Sciences, ISSN: 2076-3417

Journal article

Comunita M, Gerino A, Lim V, Picinali L et al., 2021, Design and Evaluation of a Web- and Mobile-based Binaural Audio Platform for Cultural Heritage, Applied Sciences, ISSN: 2076-3417

Journal article

Engel Alonso-Martinez I, Henry C, Amengual Garí SV, Robinson PW, Picinali L et al., 2021, Perceptual implications of different Ambisonics-based methods for binaural reverberation, Journal of the Acoustical Society of America, Vol: 149, ISSN: 0001-4966

Reverberation is essential for the realistic auralisation of enclosed spaces. However, it can be computationally expensive to render with high fidelity and, in practice, simplified models are typically used to lower costs while preserving perceived quality. Ambisonics-based methods may be employed for this purpose, as they allow us to render a reverberant sound field more efficiently by limiting its spatial resolution. The present study explores the perceptual impact of two simplifications of Ambisonics-based binaural reverberation that aim to improve efficiency. First, a “hybrid Ambisonics” approach is proposed in which the direct sound path is generated by convolution with a spatially dense head related impulse response set, separately from reverberation. Second, the reverberant virtual loudspeaker method (RVL) is presented as a computationally efficient approach to dynamically render binaural reverberation for multiple sources, with the potential limitation of inaccurately simulating the listener's head rotations. Numerical and perceptual evaluations suggest that the perceived quality of hybrid Ambisonics auralisations of two measured rooms ceased to improve beyond the third order, which is a lower threshold than that found by previous studies in which the direct sound path was not processed separately. Additionally, RVL is shown to produce auralisations with perceived quality comparable to Ambisonics renderings.

Journal article
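
The split that the hybrid approach above relies on (a dense-HRIR direct path rendered separately from a lower-resolution reverberant path) can be illustrated in a few lines of DSP. A minimal sketch, assuming the HRIRs and a precomputed binaural reverb impulse response are already available as NumPy arrays; the paper's actual low-order Ambisonics reverb stage is not reproduced here:

```python
# Hybrid rendering sketch: direct path via dense HRIR convolution,
# reverberation rendered separately (here reduced to a precomputed
# binaural reverb impulse response standing in for the low-order
# Ambisonics reverb described in the paper).
import numpy as np
from scipy.signal import fftconvolve

def render_hybrid(mono, hrir_l, hrir_r, reverb_l, reverb_r):
    """Return a stereo (N, 2) binaural signal: direct + reverberant paths."""
    direct = np.stack([fftconvolve(mono, hrir_l),
                       fftconvolve(mono, hrir_r)], axis=-1)
    reverb = np.stack([fftconvolve(mono, reverb_l),
                       fftconvolve(mono, reverb_r)], axis=-1)
    n = max(direct.shape[0], reverb.shape[0])
    out = np.zeros((n, 2))
    out[:direct.shape[0]] += direct
    out[:reverb.shape[0]] += reverb
    return out
```

Keeping the two paths separate is what allows the reverberant side to run at a much lower spatial resolution (here, shorter or coarser impulse responses) than the direct side.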

Hallewell M, Patel H, Salanitri D, Picinali L, Cobb S, Velzen J, D'Cruz M, Simeone L, Simeone M et al., 2021, Play&Tune: user feedback in the development of a serious game for optimizing hearing aid orientation, Ergonomics in Design: The Quarterly of Human Factors Applications, Vol: 29, Pages: 14-24, ISSN: 1064-8046

Many hearing aid (HA) users are dissatisfied with HA performance in social situations. One way to improve HA outcomes is training the users to understand how HAs work. Play&Tune was designed to provide this training and to foster autonomy in hearing rehabilitation. We carried out two prototype evaluations and a prerelease evaluation of Play&Tune with 71 HA users, using an interview or online survey. Users gave detailed feedback on their experiences with the app. Most participants enjoyed learning about HAs and expressed a desire for autonomy over their HA settings. Our case study reinforces the importance of user feedback during app development.

Journal article

Kim C, Lim V, Picinali L, 2020, Investigation into consistency of subjective and objective perceptual selection of non-individual head-related transfer functions, Journal of the Audio Engineering Society, Vol: 68, Pages: 819-831, ISSN: 1549-4950

The binaural technique uses a set of direction-dependent filters known as Head-Related Transfer Functions (HRTFs) in order to create 3D soundscapes through a pair of headphones. Although each HRTF is unique to the person it is measured from, due to the cost and complexity of the measurement process, pre-measured non-individual HRTFs are generally used. This study investigates whether it is possible for a listener to perceptually select the best-fitting non-individual HRTFs in a consistent manner, using both subjective and objective methods. 16 subjects participated in 3 repeated sessions of binaural listening tests. During each session, participants first listened to moving sound sources spatialized using 7 different non-individual HRTFs and ranked them according to perceived plausibility and externalization (subjective selection). They then performed a localization task with sources spatialized using the same HRTFs (objective selection). In the subjective selection, 3 to 9 participants showed test-retest reliability levels that could be regarded as good or excellent, depending on the attribute in question, the source type, and the trajectory. The reliability was better for participants with musical training and critical audio listening experience. In the objective selection, it was not possible to find significant differences between the tested HRTFs based on localization-related performance.

Journal article
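
Test-retest reliability of the kind reported above is commonly quantified with an intraclass correlation coefficient. A minimal sketch of ICC(3,1) (two-way mixed model, consistency), assuming each row holds one ranked item and each column one session; this is a generic formula, not necessarily the paper's exact analysis:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1), two-way mixed, consistency. ratings: (n_items, k_sessions)."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)          # per-item means
    col_means = ratings.mean(axis=0)          # per-session means
    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)

# e.g. rankings of 7 HRTFs over 3 sessions -> ratings of shape (7, 3)
```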

Sethi S, Ewers R, Jones N, Signorelli A, Picinali L, Orme CDL et al., 2020, SAFE Acoustics: an open-source, real-time eco-acoustic monitoring network in the tropical rainforests of Borneo, Methods in Ecology and Evolution, Vol: 11, Pages: 1182-1185, ISSN: 2041-210X

1. Automated monitoring approaches offer an avenue to unlocking large-scale insight into how ecosystems respond to human pressures. However, since data collection and data analyses are often treated independently, there are currently no open-source examples of end-to-end, real-time ecological monitoring networks.
2. Here, we present the complete implementation of an autonomous acoustic monitoring network deployed in the tropical rainforests of Borneo. Real-time audio is uploaded remotely from the field, indexed by a central database, and delivered via an API to a public-facing website.
3. We provide the open-source code and design of our monitoring devices, the central web2py database, and the ReactJS website. Furthermore, we demonstrate an extension of this infrastructure to deliver real-time analyses of the eco-acoustic data.
4. By detailing a fully functional, open-source, and extensively tested design, our work will accelerate the rate at which fully autonomous monitoring networks mature from technological curiosities towards genuinely impactful tools in ecology.

Journal article
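
The data flow described above (field devices uploading to a central database, served via an API to a website) implies a simple client. A hypothetical sketch using the `requests` library; the base URL, endpoint path, and query parameters are illustrative placeholders, not the project's actual API:

```python
# Hypothetical client sketch: the paper describes real-time audio indexed by
# a central database and delivered via an API; the endpoint below is
# illustrative only, not the project's real URL scheme.
import requests

API_BASE = "https://example.org/safe-acoustics/api"  # placeholder host

def latest_recordings(site_id, limit=10):
    """Fetch metadata for the most recent uploads from one monitoring device."""
    resp = requests.get(f"{API_BASE}/recordings",
                        params={"site": site_id, "limit": limit},
                        timeout=10)
    resp.raise_for_status()
    return resp.json()
```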

Frost E, Porat T, Malhotra P, Picinali L et al., 2020, Collaborative design of a gamified application for auditory-cognitive training, JMIR Human Factors, Vol: 7, ISSN: 2292-9495

Background: Multiple gaming applications under the dementia umbrella for skills such as navigation exist, but there has yet to be an application designed specifically to investigate the role hearing loss may have in the process of cognitive decline. There is a demonstrable gap in utilising serious games to further the knowledge of the potential relationship between hearing loss and dementia.
Objective: The aim of this study was to identify the needs, facilitators and barriers in designing a novel auditory-cognitive training gaming application.
Methods: A participatory design approach was used to engage key stakeholders across audiology and cognitive disorders specialisms. Two rounds, including paired semi-structured interviews and focus groups, were completed and thematically analysed.
Results: 18 stakeholders participated in total and 6 themes were identified to inform the next stage of the application's development.
Conclusions: The findings can now be implemented into the development of the beta version of the application. The application will be evaluated against outcome measures of speech listening in noise, cognitive and attentional tasks, quality of life and usability.

Journal article

Sethi SS, Ewers RM, Jones NS, Sleutel J, Shabrani A, Zulkifli N, Picinali L et al., 2020, Soundscapes predict species occurrence in tropical forests, Publisher: Cold Spring Harbor Laboratory

Accurate occurrence data is necessary for the conservation of keystone or endangered species, but acquiring it is usually slow, laborious, and costly. Automated acoustic monitoring offers a scalable alternative to manual surveys, but identifying species vocalisations requires large manually annotated training datasets, and is not always possible (e.g., for silent species). A new, intermediate approach is needed that rapidly predicts species occurrence without requiring extensive labelled data.
We investigated whether local soundscapes could be used to infer the presence of 32 avifaunal and seven herpetofaunal species across a tropical forest degradation gradient in Sabah, Malaysia. We developed a machine-learning based approach to characterise species-indicative soundscapes, training our models on a coarsely labelled manual point-count dataset.
Soundscapes successfully predicted the occurrence of 34 out of the 39 species across the two taxonomic groups, with area under the curve (AUC) metrics of up to 0.87 (Bold-striped Tit-babbler Macronus bornensis). The highest accuracies were achieved for common species with strong temporal occurrence patterns. Soundscapes were a better predictor of species occurrence than above-ground biomass, a metric often used to quantify habitat quality across forest degradation gradients.
Synthesis and applications: Our results demonstrate that soundscapes can be used to efficiently predict the occurrence of a wide variety of species. This provides a new direction for audio data to deliver large-scale, accurate assessments of habitat suitability using cheap and easily obtained field datasets.

Working paper
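
The core modelling step above (predicting presence/absence from soundscape features and scoring with AUC) can be sketched generically. Synthetic data and a random forest stand in for the study's own features and model, which differ in detail:

```python
# Generic presence/absence classifier over soundscape feature vectors,
# evaluated with AUC as in the abstract. Data here are synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 128))  # one acoustic feature vector per point count
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)  # synthetic presence

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```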

Comunità M, Gerino A, Lim V, Picinali L et al., 2020, PlugSonic: a web- and mobile-based platform for binaural audio and sonic narratives, Publisher: arXiv

PlugSonic is a suite of web- and mobile-based applications for the curation and experience of binaural interactive soundscapes and sonic narratives. It was developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation) and consists of two main applications: PlugSonic Sample, to edit and apply audio effects, and PlugSonic Soundscape, to create and experience binaural soundscapes. The audio processing within PlugSonic is based on the Web Audio API and the 3D Tune-In Toolkit, while the exploration of soundscapes in a physical space is obtained using Apple's ARKit. In this paper we present the design choices, the user involvement processes and the implementation details. The main goal of PlugSonic is technology democratisation; PlugSonic users - whether institutions or citizens - are all given the instruments needed to create, process and experience 3D soundscapes and sonic narratives; without the need for specific devices, external tools (software and/or hardware), specialised knowledge or custom development. The evaluation, which was conducted with inexperienced users on three tasks - creation, curation and experience - demonstrates how PlugSonic is indeed a simple, effective, yet powerful tool.

Working paper

Sethi S, Jones NS, Fulcher B, Picinali L, Clink DJ, Klinck H, Orme CDLO, Wrege P, Ewers R et al., 2020, Characterising soundscapes across diverse ecosystems using a universal acoustic feature-set, Proceedings of the National Academy of Sciences of the USA, Vol: 117, Pages: 17049-17055, ISSN: 0027-8424

Natural habitats are being impacted by human pressures at an alarming rate. Monitoring these ecosystem-level changes often requires labor-intensive surveys that are unable to detect rapid or unanticipated environmental changes. Here we have developed a generalizable, data-driven solution to this challenge using eco-acoustic data. We exploited a convolutional neural network to embed soundscapes from a variety of ecosystems into a common acoustic space. In both supervised and unsupervised modes, this allowed us to accurately quantify variation in habitat quality across space and in biodiversity through time. On the scale of seconds, we learned a typical soundscape model that allowed automatic identification of anomalous sounds in playback experiments, providing a potential route for real-time automated detection of irregular environmental behavior including illegal logging and hunting. Our highly generalizable approach, and the common set of features, will enable scientists to unlock previously hidden insights from acoustic data and offers promise as a backbone technology for global collaborative autonomous ecosystem monitoring efforts.

Journal article
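
The anomaly-detection idea above can be sketched independently of the embedding network itself: map clips into a common feature space, model the typical soundscape, and flag clips that sit far from it. The embeddings below are synthetic, and k-nearest-neighbour scoring is one plausible choice rather than the paper's exact method:

```python
# Flag acoustically anomalous clips by their distance to "typical" soundscape
# embeddings. The embedding step (a CNN in the paper) is omitted here.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def anomaly_scores(train_embeddings, new_embeddings, k=5):
    """Mean distance to the k nearest typical clips; larger = more anomalous."""
    nn = NearestNeighbors(n_neighbors=k).fit(train_embeddings)
    dists, _ = nn.kneighbors(new_embeddings)
    return dists.mean(axis=1)

rng = np.random.default_rng(1)
typical = rng.normal(size=(1000, 128))                 # embeddings of normal clips
test = np.vstack([rng.normal(size=(50, 128)),          # more normal clips
                  rng.normal(loc=4.0, size=(5, 128))]) # injected anomalies
scores = anomaly_scores(typical, test)
print(np.nonzero(scores > np.percentile(scores, 95))[0])  # flagged clip indices
```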

Vijayasingam A, Frost E, Wilkins J, Gillen L, Premachandra P, Mclaren K, Gilmartin D, Picinali L, Vidal-Diez A, Borsci S, Ni MZ, Tang WY, Morris-Rosendahl D, Harcourt J, Elston C, Simmonds NJ, Shah A et al., 2020, Tablet and web-based audiometry to screen for hearing loss in adults with cystic fibrosis, Thorax, Vol: 75, Pages: 632-639, ISSN: 0040-6376

Introduction: Individuals with chronic lung disease (e.g. cystic fibrosis (CF)) often receive antimicrobial therapy including aminoglycosides, resulting in ototoxicity. Extended high-frequency audiometry has increased sensitivity for ototoxicity detection, but diagnostic audiometry in a sound-booth is costly, time-consuming and requires a trained audiologist. This cross-sectional study analysed tablet-based audiometry (Shoebox MD) performed by non-audiologists in an outpatient setting, alongside home web-based audiometry (3D Tune-In), to screen for hearing loss in adults with CF.
Methods: Hearing was analysed in 126 CF adults using validated questionnaires, a web self-hearing test (0.5 to 4 kHz), tablet (0.25 to 12 kHz) and sound-booth audiometry (0.25 to 12 kHz). A threshold of ≥25 dB hearing loss at ≥1 audiometric frequency was considered abnormal. Demographics and mitochondrial DNA sequencing were used to analyse risk factors, and the accuracy and usability of the hearing tests were determined.
Results: Prevalence of hearing loss within any frequency band tested was 48%. Multivariate analysis showed age (OR 1.127 (95% CI 1.07 to 1.18; p<0.0001) per year older) and total intravenous antibiotic days over 10 years (OR 1.006 (95% CI 1.002 to 1.010; p=0.004) per further intravenous day) were significantly associated with increased risk of hearing loss. Tablet audiometry had good usability and was 93% sensitive and 88% specific, with a 94% negative predictive value, for screening for hearing loss, compared with web self-test audiometry and questionnaires, which had poor sensitivity (17% and 13%, respectively). Intraclass correlation (ICC) of tablet versus sound-booth audiometry showed high correlation (ICC >0.9) at all frequencies ≥4 kHz.
Conclusions: Adults with CF have a high prevalence of drug-related hearing loss, and tablet-based audiometry can be a practical, accurate screening tool within integrated ototoxicity monitoring programmes for early detection.

Journal article
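
To make the reported effect sizes concrete: per-unit odds ratios compound multiplicatively, so the values quoted in the abstract can be scaled to more intuitive exposures. A quick check, using only the numbers above:

```python
# Per-unit odds ratios compound multiplicatively over repeated exposure.
age_or, iv_or = 1.127, 1.006    # ORs from the abstract (per year, per IV day)
print(age_or ** 10)   # ~3.31 -> odds of hearing loss roughly triple per decade
print(iv_or ** 100)   # ~1.82 -> ~82% higher odds per 100 IV-antibiotic days
```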

Griffin E, Picinali L, Scase M, 2020, The effectiveness of an interactive audio‐tactile map for the process of cognitive mapping and recall among people with visual impairments, Brain and Behavior, Vol: 10, ISSN: 2162-3279

Background: People with visual impairments can experience numerous challenges navigating unfamiliar environments. Systems that operate as prenavigation tools can assist such individuals. This mixed-methods study examined the effectiveness of an interactive audio-tactile map tool on the process of cognitive mapping and recall among people who were blind or had visual impairments. The tool was developed with the involvement of visually impaired individuals, who additionally provided further feedback throughout this research.
Methods: A mixed-methods experimental design was employed. Fourteen participants were allocated to either an experimental group who were exposed to an audio-tactile map, or a control group exposed to a verbally annotated tactile map. After five minutes' exposure, multiple-choice questions examined participants' recall of the spatial and navigational content. Subsequent semi-structured interviews were conducted to examine their views surrounding the study and the product.
Results: The experimental condition had significantly better overall recall than the control group and higher average scores in all four areas examined by the questions. The interviews suggested that the interactive component offered individuals the freedom to learn the map in several ways and did not restrict them to a sequential and linear approach to learning.
Conclusion: Assistive technology can reduce challenges faced by people with visual impairments, and the flexible learning approach offered by the audio-tactile map may be of particular value. Future researchers and assistive technology developers may wish to explore this further.

Journal article

Sethi S, Jones N, Fulcher B, Picinali L, Clink D, Klinck H, Orme D, Wrege P, Ewers R et al., 2019, Combining machine learning and a universal acoustic feature-set yields efficient automated monitoring of ecosystems, Publisher: bioRxiv

Natural habitats are being impacted by human pressures at an alarming rate. Monitoring these ecosystem-level changes often requires labour-intensive surveys that are unable to detect rapid or unanticipated environmental changes. Here we developed a generalisable, data-driven solution to this challenge using eco-acoustic data. We exploited a convolutional neural network to embed ecosystem soundscapes from a wide variety of biomes into a common acoustic space. In both supervised and unsupervised modes, this allowed us to accurately quantify variation in habitat quality across space and in biodiversity through time. On the scale of seconds, we learned a typical soundscape model that allowed automatic identification of anomalous sounds in playback experiments, paving the way for real-time detection of irregular environmental behaviour including illegal activity. Our highly generalisable approach, and the common set of features, will enable scientists to unlock previously hidden insights from eco-acoustic data and offers promise as a backbone technology for global collaborative autonomous ecosystem monitoring efforts.

Working paper

Steadman M, Kim C, Lestang J-H, Goodman D, Picinali L et al., 2019, Short-term effects of sound localization training in virtual reality, Scientific Reports, Vol: 9, ISSN: 2045-2322

Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain’s ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated; one provided simple visual positional confirmation of sound source location, a second introduced game design elements (“gamification”) and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion (“active listening”). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.

Journal article

Steadman MA, Kim C, Lestang J-H, Goodman DFM, Picinali L et al., 2019, Short-term effects of sound localization training in virtual reality, Publisher: bioRxiv

Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain's ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated; one provided simple visual positional confirmation of sound source location, a second introduced game design elements ("gamification") and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion ("active listening"). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.

Working paper

Engel Alonso-Martinez I, Henry C, Amengual Gari SV, Robinson PW, Poirier-Quinot D, Picinali L et al., 2019, Perceptual comparison of ambisonics-based reverberation methods in binaural listening, EAA Spatial Audio Signal Processing Symposium

Conference paper

Vijayasingam A, Frost E, Wilkins J, Picinali L, Premachandra P, Gillen L, Morris-Rosendahl D, Ni M, Elston C, Simmonds NJ, Shah A et al., 2019, S140 Interim results from a prospective study of tablet and web-based audiometry to detect ototoxicity in adults with cystic fibrosis (vol 73, pg A87, 2018), Thorax, Vol: 74, Pages: 723-723, ISSN: 0040-6376

Journal article

Picinali L, Hrafnkelsson R, Reyes-Lecuona A, 2019, The 3D tune-in toolkit VST binaural audio plugin, 2019 AES International Conference on Immersive and Interactive Audio

This demo paper aims at introducing a novel VST binaural audio plugin based on the 3D Tune-In (3DTI) Toolkit, a multiplatform open-source C++ library which includes several functionalities for headphone-based sound spatialisation, together with generalised hearing aid and hearing loss simulators. The 3DTI Toolkit VST plugin integrates all the binaural spatialisation functionalities of the 3DTI Toolkit for one single audio source, which can be positioned and moved around the listener. The spatialisation is based on direct convolution with any user-imported Head Related Transfer Function (HRTF) set. Interaural Time Differences (ITDs) are customised in real-time according to the listener’s head circumference. Binaural reverberation is performed using a virtual-loudspeakers Ambisonic approach and convolution with user-imported Binaural Room Impulse Responses (BRIRs). Additional processes for near- and far-field sound sources simulations are also included.

Conference paper
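
The plugin customises ITDs in real time from the listener's head circumference. A common analytic model for that relationship is Woodworth's spherical-head formula, sketched below; the plugin's actual customisation method is not specified in the abstract, so treat this as illustrative:

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s at ~20 °C

def itd_woodworth(azimuth_rad, head_circumference_m):
    """Interaural time difference via the Woodworth spherical-head model.
    Valid for source azimuths in [0, pi/2] from the median plane."""
    r = head_circumference_m / (2 * np.pi)   # head radius from circumference
    return (r / SPEED_OF_SOUND) * (np.sin(azimuth_rad) + azimuth_rad)

print(itd_woodworth(np.pi / 2, 0.57) * 1e6, "microseconds")  # ~680 us at 90°
```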

Comunità M, Gerino A, Lim V, Picinali L et al., 2019, Web-based binaural audio and sonic narratives for cultural heritage, Conference on Immersive and Interactive Audio, Publisher: Audio Engineering Society

This paper introduces PlugSonic Soundscape and PlugSonic Sample, two web-based applications for the creation and experience of binaural interactive audio narratives and soundscapes. The apps are being developed as part of the PLUGGY EU project (Pluggable Social Platform for Heritage Awareness and Participation). The apps' audio processing is based on the Web Audio API and the 3D Tune-In Toolkit. Within the paper, we report on the implementation, evaluation and future developments. We believe that the idea of a web-based application for 3D sonic narratives represents a novel contribution to the cultural heritage, digital storytelling and 3D audio technology domains.

Conference paper

Cuevas-Rodríguez M, Picinali L, González-Toledo D, Garre C, de la Rubia-Cuestas E, Molina-Tanco L, Reyes-Lecuona A et al., 2019, 3D Tune-In Toolkit: An open-source library for real-time binaural spatialisation, PLoS ONE, Vol: 14, ISSN: 1932-6203

The 3D Tune-In Toolkit (3DTI Toolkit) is an open-source standard C++ library which includes a binaural spatialiser. This paper presents the technical details of this renderer, outlining its architecture and describing the processes implemented in each of its components. In order to put this description into context, the basic concepts behind binaural spatialisation are reviewed through a chronology of research milestones in the field over the last 40 years. The 3DTI Toolkit renders the anechoic signal path by convolving sound sources with Head Related Impulse Responses (HRIRs), obtained by interpolating those extracted from a set that can be loaded from any file in a standard audio format. Interaural time differences are managed separately, in order to be able to customise the rendering according to the head size of the listener, and to reduce comb-filtering when interpolating between different HRIRs. In addition, geometrical and frequency-dependent corrections for simulating near-field sources are included. Reverberation is computed separately using a virtual-loudspeakers Ambisonic approach and convolution with Binaural Room Impulse Responses (BRIRs). In all these processes, special care has been taken to avoid audible artefacts produced by changes in gains and audio filters due to the movements of sources and of the listener. The 3DTI Toolkit performance, as well as some other relevant metrics such as non-linear distortion, are assessed and presented, followed by a comparison between the features offered by the 3DTI Toolkit and those found in other currently available open- and closed-source binaural renderers.

Journal article
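
The anechoic path described above (interpolating between measured HRIRs, then convolving) can be sketched in a few lines. ITD handling and near-field corrections, which the Toolkit applies separately (in part to reduce comb-filtering during interpolation), are omitted:

```python
# Minimal anechoic-path sketch: blend two measured HRIRs and convolve.
import numpy as np
from scipy.signal import fftconvolve

def spatialise(mono, hrir_a, hrir_b, weight):
    """Binauralise `mono` with an HRIR blended between two measured positions.
    hrir_a, hrir_b: arrays of shape (taps, 2); weight in [0, 1]."""
    hrir = (1 - weight) * hrir_a + weight * hrir_b   # linear interpolation
    left = fftconvolve(mono, hrir[:, 0])
    right = fftconvolve(mono, hrir[:, 1])
    return np.stack([left, right], axis=-1)
```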

Stitt P, Picinali L, Katz BFG, 2019, Auditory accommodation to poorly matched non-individual spectral localization cues through active learning, Scientific Reports, Vol: 9, Pages: 1-14, ISSN: 2045-2322

This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance carried out over 10 sessions. Sessions 1–4 occurred at 1 week intervals, performed by all subjects. During initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained their performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.

Journal article

Engel Alonso-Martinez I, Goodman D, Picinali L, 2019, The Effect of Auditory Anchors on Sound Localization: A Preliminary Study, 2019 AES International Conference on Immersive and Interactive Audio

Conference paper

Reyes-Lecuona A, Cuevas-Rodríguez M, González-Toledo D, Garre C, De-La-Rubia-Cuestas E, Molina-Tanco L, Rodríguez-Rivero Á, Picinali L et al., 2019, The 3D Tune-In Toolkit: A C++ library for binaural spatialisation, and hearing loss / hearing aids emulation, 23rd International Congress on Acoustics, Pages: 2217-2218, ISSN: 2226-7808

This contribution presents the 3D Tune-In (3DTI) Toolkit, an open source C++ library for binaural spatialisation which includes hearing loss and hearing aid emulators. Binaural spatialisation is performed through convolution with user-imported Head Related Transfer Functions (HRTFs) and Binaural Room Impulse Responses (BRIRs), including additional functionalities such as near- and far-field source simulation, customisation of Interaural Time Differences (ITDs), and Ambisonic-based binaural reverberation. Hearing loss is simulated through gammatone filters and multiband expanders/compressors, including advanced non-linear features such as frequency smearing and temporal distortion. A generalised hearing aid simulator is also included, with functionalities such as dynamic equalisation, calibration from user-inputted audiograms, and directional processing.

Conference paper
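
The hearing-loss front end described above (gammatone analysis plus per-band level processing) can be approximated crudely as follows. `scipy.signal.gammatone` provides the filters; the band losses are an illustrative audiogram, and the Toolkit's expanders/compressors and non-linear stages are omitted:

```python
# Crude hearing-loss sketch: split into auditory bands with gammatone filters,
# attenuate each band per an audiogram-like profile, and sum. (Summing the
# bands does not reconstruct the signal exactly; this is a sketch only.)
import numpy as np
from scipy.signal import gammatone, lfilter

FS = 44100
CENTRES = [250, 500, 1000, 2000, 4000, 8000]   # Hz, audiometric bands
LOSS_DB = [10, 15, 20, 35, 50, 60]             # illustrative audiogram

def simulate_loss(x):
    out = np.zeros_like(x, dtype=float)
    for fc, loss in zip(CENTRES, LOSS_DB):
        b, a = gammatone(fc, 'iir', fs=FS)     # 4th-order IIR by default
        band = lfilter(b, a, x)
        out += band * 10 ** (-loss / 20)       # per-band attenuation
    return out
```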

Picinali L, Rodriguez MC, Toledo DG, Lecuona AR et al., 2019, Speech-in-noise performances in virtual cocktail party using different non-individual Head Related Transfer Functions, 23rd International Congress on Acoustics, Pages: 2158-2159, ISSN: 2226-7808

It is widely accepted that, within the binaural spatialisation domain, the choice of the Head Related Transfer Functions (HRTFs) can have an impact on localisation accuracy and, more generally, on realism and sound source externalisation. The impact of the HRTF choice on speech-in-noise performance in cocktail party scenarios has, however, not yet been investigated in depth. Within a binaurally-rendered virtual environment, Speech Reception Thresholds (SRTs) with a frontal target speaker and lateral noise maskers were measured for 22 subjects several times across different sessions, using different HRTFs. Results show that for the majority of the tested subjects, significant differences could be found between the SRTs measured using different HRTFs. Furthermore, the HRTFs leading to better or worse SRT performances were not the same across subjects, indicating that the choice of the HRTF can indeed have an impact on speech-in-noise performance within the tested conditions. These results suggest that when testing speech-in-noise performance within binaurally-rendered virtual environments, the choice of the HRTF should be carefully considered. Furthermore, a recommendation should be made for future modelling of speech-in-noise perception mechanisms to include monaural spectral cues in addition to interaural differences.

Conference paper
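
Speech Reception Thresholds like those above are usually tracked with an adaptive staircase that converges on the SNR giving roughly 50% intelligibility. A generic 1-down/1-up sketch; the study's exact adaptive rule is not given in the abstract:

```python
# Generic adaptive-staircase sketch for estimating an SRT: the SNR drops after
# a correct response and rises after an incorrect one, so the track hovers
# around the listener's threshold.
def run_staircase(present_trial, snr_db=0.0, step_db=2.0, n_trials=20):
    """present_trial(snr_db) -> True if the sentence was repeated correctly."""
    track = []
    for _ in range(n_trials):
        correct = present_trial(snr_db)
        track.append(snr_db)
        snr_db += -step_db if correct else step_db
    return sum(track[-8:]) / 8   # SRT estimate: mean of the final levels

# Deterministic demo: a listener who is always correct above -5 dB SNR.
print(run_staircase(lambda snr: snr > -5.0))   # converges near -5
```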

Picinali L, Corbetto MS, Vickers D, 2019, Spatial release from masking assessment in virtual reality for bilateral cochlear implants users, 23rd International Congress on Acoustics, Pages: 7647-7650, ISSN: 2226-7808

In addition to enhanced sound localisation abilities, one of the potential benefits of having bilateral cochlear implants is improved speech-in-noise perception, as bilateral auditory inputs potentially allow the spatial separation of speech and noise sources. This process is known as spatial release from masking (SRM). When assisting bilateral cochlear implant users, it is essential to assess binaural hearing performance in a time-efficient manner while maintaining some real-world complexities, e.g. multiple sources and reverberation. Traditional tests of SRM and/or localisation are often time-consuming, and typically assess these abilities in unrealistic settings, with only one controlled stimulus to attend to. With this in mind, Bizley and colleagues (2015) developed the Spatial Speech in Noise test (SSiN), which allows for the simultaneous evaluation of speech discrimination for various locations, SRM, and relative localisation measures using speech in a background babble. Within the BEARs (Both EARs) project, we have recently developed a Virtual Reality-based suite of binaural training applications aimed at improving spatial hearing, speech-in-noise perception and ease of listening for bilateral cochlear implantees. In order to assess the impact of training on SRM and localisation without requiring custom spaces and expensive loudspeaker arrays, we have developed a 3D headphones-based implementation of the SSiN test. This article includes details of this implementation.

Conference paper

Comunita M, Picinali L, 2019, Estimating ear canal volume through electrical impedance measurements from in-ear headphones - initial results, AES International Conference on Headphone Technology, Publisher: Audio Engineering Society

Conference paper

Scase M, Griffin E, Picinali L, 2019, Pre-Navigation via Interactive Audio Tactile Maps to Promote the Wellbeing of Visually Impaired People, Stud Health Technol Inform, Vol: 260, Pages: 170-177

Background: Pre-navigational tools can assist visually impaired people when navigating unfamiliar environments. Assistive technology products (e.g. tactile maps or auditory simulations) can stimulate cognitive mapping processes to provide navigational assistance to these people.
Objectives: We compared how well blind and visually impaired people could learn a map presented via a tablet computer auditory tactile map (ATM), in contrast to a conventional tactile map accompanied by a text description.
Methods: Performance was assessed with a multiple-choice test that quizzed participants on orientation and spatial awareness. Semi-structured interviews explored participant experiences and preferences.
Results: A statistically significant difference was found between the conditions, with participants using the ATM performing much better than those who used a conventional tactile map and text description. Participants preferred the flexibility of learning of the ATM.
Conclusion: This computer-based ATM provided an effective, easy to use and cost-effective way of enabling blind and partially sighted people to learn a cognitive map and enhance their wellbeing.

Journal article

Cuevas-Rodriguez M, Gonzalez-Toledo D, La Rubia-Cuestas ED, Garre C, Molina-Tanco L, Reyes-Lecuona A, Poirier-Quinot D, Picinali L et al., 2018, The 3D Tune-In Toolkit - 3D audio spatialiser, hearing loss and hearing aid simulations

The 3DTI Toolkit is a standard C++ library for audio spatialisation and simulation using loudspeakers or headphones, developed within the 3D Tune-In (3DTI) project (http://www.3d-tune-in.eu), which aims at using 3D sound and simulating hearing loss and hearing aids within virtual environments and games. The Toolkit allows the design and rendering of highly realistic and immersive 3D audio, and the simulation of virtual hearing aid devices and of different typologies of hearing loss. The library includes a real-time 3D binaural audio renderer offering full 3D spatialization based on efficient Head Related Transfer Function (HRTF) convolution, including smooth interpolation among impulse responses, customization of listener head radius and specific simulation of far-distance and near-field effects. In addition, spatial reverberation is simulated in real time using a uniformly partitioned convolution with Binaural Room Impulse Responses (BRIRs) employing a virtual Ambisonic approach. The 3D Tune-In Toolkit also includes a loudspeaker-based spatialiser implemented using Ambisonic encoding/decoding. This poster presents a brief overview of the main features of the Toolkit, which is released open-source under the GPL v3 license (the code is available on GitHub: https://github.com/3DTune-In/3dti-AudioToolkit).

Conference paper

Sethi S, Ewers R, Jones N, Orme D, Picinali L et al., 2018, Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked device, Methods in Ecology and Evolution, Vol: 9, Pages: 2383-2387, ISSN: 2041-210X

1. Automated methods of monitoring ecosystems provide a cost-effective way to track changes in natural systems' dynamics across temporal and spatial scales. However, methods of recording and storing data captured from the field still require significant manual effort.
2. Here we introduce an open-source, inexpensive, fully autonomous ecosystem monitoring unit for capturing and remotely transmitting continuous data streams from field sites over long time periods. We provide a modular software framework for deploying various sensors, together with implementations to demonstrate proof of concept for continuous audio monitoring and time-lapse photography.
3. We show how our system can outperform comparable technologies for fractions of the cost, provided a local mobile network link is available. The system is robust to unreliable network signals and has been shown to function in extreme environmental conditions, such as in the tropical rainforests of Sabah, Borneo.
4. We provide full details on how to assemble the hardware and the open-source software. Paired with appropriate automated analysis techniques, this system could provide spatially dense, near real-time, continuous insights into ecosystem and biodiversity dynamics at a low cost.

Journal article
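
The modular sensor framework mentioned above suggests a small plugin interface: each sensor type implements a common capture contract so one scheduler can treat audio recorders and cameras alike. Names and structure here are hypothetical, not the project's actual API:

```python
# Hypothetical sensor-plugin sketch for a modular field-monitoring unit.
import abc
import time
from pathlib import Path

class Sensor(abc.ABC):
    """Common capture contract so the scheduler treats all sensors alike."""
    @abc.abstractmethod
    def capture(self, out_dir: str) -> str:
        """Record one sample and return the path of the file produced."""

class TimeLapseCamera(Sensor):
    def capture(self, out_dir):
        path = Path(out_dir) / f"photo_{int(time.time())}.jpg"
        path.touch()  # stand-in for triggering the camera and writing the image
        return str(path)

def run_once(sensors, out_dir="."):
    """One scheduler tick: ask every registered sensor for a sample."""
    return [s.capture(out_dir) for s in sensors]

print(run_once([TimeLapseCamera()]))
```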

Vijayasingam A, Shah A, Simmonds NJ, Elston C, Frost E, Wilkins J, Picinali L, Premachandra P, Gillen L, Morris-Rosendahl D, Ni M et al., 2018, Interim results from a prospective study of tablet and web-based audiometry to detect ototoxicity in adults with cystic fibrosis, Thorax, Vol: 73, Pages: A87-A88, ISSN: 0040-6376

Journal article
