Search results

  • JOURNAL ARTICLE
    Stitt P, Picinali L, Katz BFG, 2019, Auditory accommodation to poorly matched non-individual spectral localization cues through active learning, Scientific Reports, Vol: 9, ISSN: 2045-2322

    This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. The perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow the maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance, carried out over 10 sessions. Sessions 1–4, performed by all subjects, occurred at 1-week intervals. During the initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed further improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained the performance attained at the end of session 4. In general, adaptation was found to be quite subject-dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.
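
A minimal sketch may help make the "polar error" measure tracked above concrete: directions are expressed in interaural-polar coordinates and the polar-angle difference is wrapped before averaging. The coordinate convention and function names below are illustrative assumptions, not the study's analysis code.

```python
# Illustrative sketch (assumed conventions, not the study's code):
# mean polar-angle localization error in interaural-polar coordinates.
import numpy as np

def interaural_polar_angles(xyz):
    """Unit direction vectors (x = front, y = left, z = up) ->
    (lateral, polar) angles in degrees."""
    lateral = np.degrees(np.arcsin(np.clip(xyz[..., 1], -1.0, 1.0)))
    polar = np.degrees(np.arctan2(xyz[..., 2], xyz[..., 0]))  # 0 = front, 180 = back
    return lateral, polar

def mean_polar_error(target_xyz, response_xyz):
    """Mean absolute polar-angle difference, wrapped to [-180, 180)."""
    _, polar_t = interaural_polar_angles(target_xyz)
    _, polar_r = interaural_polar_angles(response_xyz)
    diff = (polar_r - polar_t + 180.0) % 360.0 - 180.0
    return float(np.mean(np.abs(diff)))
```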

  • CONFERENCE PAPER
    Moore A, de Haan JM, Pedersen MS, Naylor P, Brookes D, Jensen J et al., Personalized HRTFs for hearing aids, ELOBES2019
  • CONFERENCE PAPER
    Cuevas-Rodriguez M, Gonzalez-Toledo D, La Rubia-Cuestas ED, Garre C, Molina-Tanco L, Reyes-Lecuona A, Poirier-Quinot D, Picinali L et al., 2018, The 3D Tune-In Toolkit - 3D audio spatialiser, hearing loss and hearing aid simulations

    The 3DTI Toolkit is a standard C++ library for audio spatialisation and simulation using loudspeakers or headphones, developed within the 3D Tune-In (3DTI) project (http://www.3d-tune-in.eu), which aims at using 3D sound and simulating hearing loss and hearing aids within virtual environments and games. The Toolkit allows the design and rendering of highly realistic and immersive 3D audio, and the simulation of virtual hearing aid devices and of different typologies of hearing loss. The library includes a real-time 3D binaural audio renderer offering full 3D spatialization based on efficient Head-Related Transfer Function (HRTF) convolution, including smooth interpolation among impulse responses, customization of listener head radius and specific simulation of far-distance and near-field effects. In addition, spatial reverberation is simulated in real time using a uniformly partitioned convolution with Binaural Room Impulse Responses (BRIRs) employing a virtual Ambisonic approach. The 3D Tune-In Toolkit also includes a loudspeaker-based spatialiser implemented using Ambisonic encoding/decoding. This poster presents a brief overview of the main features of the Toolkit, which is released open-source under the GPL v3 license (the code is available on GitHub: https://github.com/3DTune-In/3dti-AudioToolkit).
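
To make the rendering pipeline concrete, the sketch below shows the generic HRIR-convolution step at the heart of any binaural renderer of this kind, with a naive linear cross-fade standing in for the Toolkit's smooth HRTF interpolation. This illustrates the technique only; it is not the 3DTI Toolkit's C++ API (see the GitHub repository for the actual interface).

```python
# Generic binaural rendering by HRIR convolution -- a conceptual
# illustration, NOT the 3DTI Toolkit API.
import numpy as np
from scipy.signal import fftconvolve

def interpolate_hrir(hrir_a, hrir_b, weight):
    """Naive linear cross-fade between two measured HRIRs; production
    renderers interpolate more carefully (e.g. treating interaural
    delay and spectral shape separately)."""
    return (1.0 - weight) * hrir_a + weight * hrir_b

def render_binaural(mono, hrir_left, hrir_right):
    """Spatialise a mono signal by convolving it with the left/right
    head-related impulse responses for the desired source direction;
    returns a (samples, 2) stereo array."""
    return np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=-1)
```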

  • JOURNAL ARTICLE
    Braiman C, Fridman EA, Conte MM, Voss HU, Reichenbach CS, Reichenbach T, Schiff ND et al., 2018, Cortical response to the natural speech envelope correlates with neuroimaging evidence of cognition in severe brain injury, Current Biology, Vol: 28, Pages: 3833-+, ISSN: 0960-9822
  • JOURNAL ARTICLE
    Sethi SS, Ewers RM, Jones NS, Orme CDL, Picinali L et al., 2018, Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked device, Methods in Ecology and Evolution, Vol: 9, Pages: 2383-2387, ISSN: 2041-210X
  • JOURNAL ARTICLE
    Dietz M, Lestang J-H, Majdak P, Stern RM, Marquardt T, Ewert SD, Hartmann WM, Goodman DPM et al., 2018, A framework for testing and comparing binaural models, Hearing Research, Vol: 360, Pages: 92-106, ISSN: 0378-5955
  • CONFERENCE PAPER
    Forte AE, Etard OE, Reichenbach JDT, 2018, Selective auditory attention at the brainstem level, ARO 2018
  • CONFERENCE PAPER
    Saiz Alia M, Askari A, Forte AE, Reichenbach JDT et al., 2018, A model of the human auditory brainstem response to running speech, ARO 2018
  • CONFERENCE PAPER
    Kegler M, Etard OE, Forte AE, Reichenbach JDT et al., 2018, Complex statistical model for detecting the auditory brainstem response to natural speech and for decoding attention from high-density EEG recordings, ARO 2018
  • JOURNAL ARTICLE
    Evers C, Naylor PA, 2018, Optimized self-localization for SLAM in dynamic scenes using probability hypothesis density filters, IEEE Transactions on Signal Processing, Vol: 66, Pages: 863-878, ISSN: 1053-587X
  • JOURNAL ARTICLE
    Etard O, Kegler M, Braiman C, Forte AE, Reichenbach T et al., 2018, Real-time decoding of selective attention from the human auditory brainstem response to continuous speech

    Humans are highly skilled at analysing complex auditory scenes. Previously we showed that the auditory brainstem response to speech is modulated by selective attention, a result that we achieved through developing a novel method for measuring the brainstem's response to running speech (Forte et al. 2017). Here we demonstrate that the attentional modulation of the brainstem response to speech can be employed to decode the attentional focus of a listener in real time, from short measurements of ten seconds or less in duration. The decoding is based on complex statistical models for extracting the brainstem response from multi-channel scalp recordings and subsequent classification of the model performances according to the focus of attention. We further show that a few recording channels, as well as out-of-the-box decoding employing population-average models, achieve high accuracy from short recordings.
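
A bare-bones stand-in for the decoding idea: correlate the EEG with a latency-shifted fundamental waveform of each competing speaker and label attention by the stronger response. The paper's actual decoder uses complex-valued statistical models over high-density multi-channel recordings; the single-channel input, fixed latency and function names here are simplifying assumptions.

```python
# Bare-bones attention decoder sketch: one lagged correlation per
# speaker instead of the paper's complex multi-channel models.
import numpy as np

def lagged_corr(eeg, stim, lag):
    """Pearson correlation between a 1-D EEG trace and a speaker's
    fundamental waveform delayed by `lag` samples (the assumed
    brainstem response latency, on the order of 10 ms)."""
    n = min(len(eeg) - lag, len(stim))
    return np.corrcoef(stim[:n], eeg[lag:lag + n])[0, 1]

def attended_speaker(eeg, f0_a, f0_b, lag):
    """Label the attended speaker as the one whose fundamental
    waveform correlates more strongly with the recording."""
    return ("A" if abs(lagged_corr(eeg, f0_a, lag))
            >= abs(lagged_corr(eeg, f0_b, lag)) else "B")
```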

  • JOURNAL ARTICLE
    Goodman DFM, Winter IM, Leger AC, de Cheveigne A, Lorenzi C et al., 2018, Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy, Hearing Research, Vol: 358, Pages: 98-110, ISSN: 0378-5955
  • CONFERENCE PAPER
    Lim V, Frangakis N, Tanco LM, Picinali L et al., 2018, PLUGGY: a pluggable social platform for cultural heritage awareness and participation, IEEE International Conference on Engineering, Technology, and Innovation, Publisher: Springer International Publishing AG, Pages: 117-129, ISSN: 0302-9743
  • CONFERENCE PAPER
    Moore AH, Lightburn L, Xue W, Naylor PA, Brookes M et al., 2018, Binaural mask-informed speech enhancement for hearing aids with head tracking, 16th International Workshop on Acoustic Signal Enhancement (IWAENC), Publisher: IEEE, Pages: 461-465, ISSN: 2639-4316
  • CONFERENCE PAPER
    Dawson PJ, De Sena E, Naylor PA, 2018, An acoustic image-source characterisation of surface profiles, European Signal Processing Conference (EUSIPCO), Publisher: IEEE Computer Society, Pages: 2130-2134, ISSN: 2076-1465
  • JOURNAL ARTICLE
    Ciganovic N, Warren RL, Keceli B, Jacob S, Fridberger A, Reichenbach T et al., 2018, Static length changes of cochlear outer hair cells can tune low-frequency hearing, PLOS Computational Biology, Vol: 14
  • CONFERENCE PAPER
    Kim C, Steadman M, Lestang JH, Goodman DFM, Picinali L et al., 2018, A VR-based mobile platform for training to non-individualized binaural 3D audio

    Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). Individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, leading instead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses non-individualized HRTF sets for audio rendering. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen to and navigate in an acoustically neutral scene and locate invisible target sound sources presented at random directions, rendered with non-individualized HRTFs, over repeated sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user's localization accuracy, including visual indication of the target location and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions using non-individualized HRTF sets. The localization performance evaluations showed a reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.
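
The two evaluation measures mentioned above can be made concrete with a short sketch: overall angle error as the great-circle angle between target and response directions, and a crude front-back confusion rate. The hemisphere test and names below are assumptions for illustration, not the study's analysis code.

```python
# Illustrative evaluation metrics (assumed definitions): overall
# localization angle error and a simple front-back confusion rate.
import numpy as np

def angular_error_deg(target_xyz, response_xyz):
    """Great-circle angle (degrees) between unit direction vectors."""
    cos = np.clip(np.sum(target_xyz * response_xyz, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def front_back_confusion_rate(target_xyz, response_xyz):
    """Fraction of trials whose response falls in the opposite
    front/back hemisphere from the target (x > 0 taken as front)."""
    confused = np.sign(target_xyz[..., 0]) != np.sign(response_xyz[..., 0])
    return float(np.mean(confused))
```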

  • JOURNAL ARTICLE
    Forte AE, Etard O, Reichenbach T, 2017, The human auditory brainstem response to running speech reveals a subcortical mechanism for selective attention, eLife, Vol: 6, ISSN: 2050-084X
  • CONFERENCE PAPER
    Forte AE, Etard O, Reichenbach J, 2017, Selective auditory attention modulates the human brainstem's response to running speech, Basic Auditory Science 2017
  • CONFERENCE PAPER
    Kegler M, Etard O, Forte AE, Reichenbach J et al., 2017, Complex statistical model for detecting the auditory brainstem response to natural speech and for decoding attention, Basic Auditory Science 2017

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
