Search results

  • Conference paper
    Sharma D, Hogg A, Wang Y, Nour-Eldin A, Naylor P et al.,

    Non-Intrusive POLQA estimation of speech quality using recurrent neural networks

    , European Signal Processing Conference (EUSIPCO), Publisher: IEEE

    Estimating the quality of speech without the use of a clean reference signal is a challenging problem, in part due to the time and expense required to collect sufficient training data for modern machine learning algorithms. We present a novel, non-intrusive estimator that exploits recurrent neural network architectures to predict the intrusive POLQA score of a speech signal in a short time context. The predictor is based on a novel compressed representation of modulation domain features, used in conjunction with static MFCC features. We show that the proposed method can reliably predict POLQA with a 300 ms context, achieving a mean absolute error of 0.21 on unseen data. The proposed method is trained using English speech and is shown to generalize well across unseen languages. The neural network also jointly estimates the mean voice activity detection (VAD) with an F1 accuracy score of 0.9, removing the need for an external VAD.
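
    As an illustration of the kind of estimator this abstract describes, the sketch below (PyTorch) shows a GRU that maps per-frame features, such as MFCCs concatenated with compressed modulation features, to a POLQA-style quality score plus a joint frame-wise VAD output. The feature dimension, layer sizes and frame hop are illustrative assumptions, not the authors' configuration.

```python
# Sketch only: a recurrent quality estimator with a joint VAD head.
import torch
import torch.nn as nn

class NonIntrusiveQualityNet(nn.Module):
    def __init__(self, n_features=60, hidden=128):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.quality_head = nn.Linear(hidden, 1)  # POLQA-like score (~1 to 4.75)
        self.vad_head = nn.Linear(hidden, 1)      # per-frame speech probability

    def forward(self, x):                         # x: (batch, frames, n_features)
        h, _ = self.rnn(x)
        quality = self.quality_head(h[:, -1, :])  # score from the final state
        vad = torch.sigmoid(self.vad_head(h)).squeeze(-1)
        return quality, vad

# A 300 ms context at an assumed 10 ms hop is roughly 30 frames:
model = NonIntrusiveQualityNet()
score, vad = model(torch.randn(1, 30, 60))
print(score.shape, vad.shape)  # torch.Size([1, 1]) torch.Size([1, 30])
```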

  • Conference paper
    Hogg A, Evers C, Naylor P,

    Multiple hypothesis tracking for overlapping speaker segmentation

    , IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Publisher: IEEE

    Speaker segmentation is an essential part of any diarization system. Applications of diarization include tasks such as speaker indexing, improving automatic speech recognition (ASR) performance and making single speaker-based algorithms available for use in multi-speaker environments. This paper proposes a multiple hypothesis tracking (MHT) method that exploits the harmonic structure associated with the pitch in voiced speech in order to segment the onsets and end-points of speech from multiple, overlapping speakers. The proposed method is evaluated against a segmentation system from the literature that uses a spectral representation and is based on employing bidirectional long short-term memory networks (BLSTM). The proposed method is shown to achieve comparable performance for segmenting overlapping speakers using only the pitch harmonic information in the MHT framework.

  • Conference paper
    Hogg A, Naylor P, Evers C, 2019,

    Speaker change detection using fundamental frequency with application to multi-talker segmentation

    , IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Publisher: IEEE

    This paper shows that time-varying pitch properties can be used advantageously within the segmentation step of a multi-talker diarization system. First, a study is conducted to verify that changes in pitch are strong indicators of changes in the speaker. It is then highlighted that an individual's pitch is smoothly varying and, therefore, can be predicted by means of a Kalman filter. Subsequently, it is shown that if the pitch is not predictable then this is most likely due to a change in the speaker. Finally, a novel system is proposed that uses this approach of pitch prediction for speaker change detection. This system is then evaluated against a commonly used MFCC segmentation system. The proposed system is shown to increase the speaker change detection rate from 43.3% to 70.5% on meetings in the AMI corpus. There are, therefore, two equally weighted contributions in this paper: 1. We address the question of whether a change in pitch is a reliable estimator of a speaker change in multi-talker meeting audio. 2. We develop a method to extract such speaker changes and test it on a widely available meeting corpus.
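
    The pitch-predictability idea lends itself to a compact illustration: a scalar Kalman filter with a random-walk pitch model flags a speaker change whenever the normalized innovation (prediction error) becomes too large. The noise variances and threshold below are illustrative assumptions, not values from the paper.

```python
# Sketch only: pitch tracked by a scalar Kalman filter; an unpredictable
# jump in pitch is taken as a candidate speaker change.
import numpy as np

def detect_pitch_breaks(f0, q=4.0, r=25.0, thresh=3.0):
    """f0: voiced-frame pitch track in Hz. Returns indices where the
    pitch stops being predictable (candidate speaker changes)."""
    x, p = f0[0], 100.0            # state estimate and its variance
    changes = []
    for k in range(1, len(f0)):
        p_pred = p + q             # predict: pitch varies smoothly
        s = p_pred + r             # innovation variance
        nu = f0[k] - x             # innovation (prediction error)
        if abs(nu) / np.sqrt(s) > thresh:
            changes.append(k)      # unpredictable jump -> likely new speaker
            x, p = f0[k], 100.0    # re-initialize on the new speaker
            continue
        g = p_pred / s             # Kalman gain
        x = x + g * nu
        p = (1.0 - g) * p_pred
    return changes

print(detect_pitch_breaks(np.r_[np.full(50, 120.0), np.full(50, 210.0)]))  # [50]
```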

  • Journal article
    Stitt P, Picinali L, Katz BFG, 2019,

    Auditory accommodation to poorly matched non-individual spectral localization cues through active learning

    , Scientific Reports, Vol: 9, ISSN: 2045-2322

    This study examines the effect of adaptation to non-ideal auditory localization cues represented by the Head-Related Transfer Function (HRTF) and the retention of training for up to three months after the last session. Continuing from a previous study on rapid non-individual HRTF learning, subjects using non-individual HRTFs were tested alongside control subjects using their own measured HRTFs. Perceptually worst-rated non-individual HRTFs were chosen to represent the worst-case scenario in practice and to allow for maximum potential for improvement. The methodology consisted of a training game and a localization test to evaluate performance, carried out over 10 sessions. Sessions 1–4, which all subjects performed, occurred at 1-week intervals. During initial sessions, subjects showed improvement in localization performance for polar error. Following this, half of the subjects stopped the training game element, continuing with only the localization task. The group that continued to train showed improvement, with 3 of 8 subjects achieving group mean polar errors comparable to the control group. The majority of the group that stopped the training game retained their performance attained at the end of session 4. In general, adaptation was found to be quite subject dependent, highlighting the limits of HRTF adaptation in the case of poor HRTF matches. No identifier to predict learning ability was observed.

  • Conference paper
    Moore A, de Haan JM, Pedersen MS, Naylor P, Brookes D, Jensen J et al.,

    Personalized HRTFs for hearing aids

    , ELOBES2019
  • Conference paper
    Cuevas-Rodriguez M, Gonzalez-Toledo D, La Rubia-Cuestas ED, Garre C, Molina-Tanco L, Reyes-Lecuona A, Poirier-Quinot D, Picinali L et al., 2018,

    The 3D Tune-In Toolkit - 3D audio spatialiser, hearing loss and hearing aid simulations

    The 3DTI Toolkit is a standard C++ library for audio spatialisation and simulation using loudspeakers or headphones developed within the 3D Tune-In (3DTI) project (http://www.3d-tune-in.eu), which aims at using 3D sound and simulating hearing loss and hearing aids within virtual environments and games. The Toolkit allows the design and rendering of highly realistic and immersive 3D audio, and the simulation of virtual hearing aid devices and of different typologies of hearing loss. The library includes a real-time 3D binaural audio renderer offering full 3D spatialization based on efficient Head Related Transfer Function (HRTF) convolution, including smooth interpolation among impulse responses, customization of listener head radius and specific simulation of far-distance and near-field effects. In addition, spatial reverberation is simulated in real time using a uniformly partitioned convolution with Binaural Room Impulse Responses (BRIRs) employing a virtual Ambisonic approach. The 3D Tune-In Toolkit also includes a loudspeaker-based spatialiser implemented using Ambisonic encoding/decoding. This poster presents a brief overview of the main features of the Toolkit, which is released open-source under the GPL v3 license (the code is available on GitHub: https://github.com/3DTune-In/3dti-AudioToolkit).
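
    The core binaural rendering step the Toolkit builds on can be illustrated in a few lines: convolve a mono source with the left and right Head-Related Impulse Responses for the desired direction. The Toolkit itself is C++ and adds interpolation, near-field effects and reverberation; the Python sketch below uses placeholder HRIRs purely for illustration.

```python
# Sketch only: basic binaural rendering by HRIR convolution.
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
mono = np.random.randn(fs)                           # 1 s of test signal
hrir_left = np.random.randn(512) * np.hanning(512)   # placeholder HRIRs
hrir_right = np.roll(hrir_left, 20)                  # crude interaural delay

binaural = np.stack([fftconvolve(mono, hrir_left),
                     fftconvolve(mono, hrir_right)], axis=1)
print(binaural.shape)                                # (samples, 2): left/right
```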

  • Journal article
    Braiman C, Fridman A, Conte MM, Vosse HU, Reichenbach CS, Reichenbach J, Schiff ND et al., 2018,

    Cortical response to the natural speech envelope correlates with neuroimaging evidence of cognition in severe brain injury

    , Current Biology, Vol: 28, Pages: 3833-3839, ISSN: 1879-0445

    Recent studies identify severely brain-injured patients with limited or no behavioral responses who successfully perform functional magnetic resonance imaging (fMRI) or electroencephalogram (EEG) mental imagery tasks [1, 2, 3, 4, 5]. Such tasks are cognitively demanding [1]; accordingly, recent studies support that fMRI command following in brain-injured patients associates with preserved cerebral metabolism and preserved sleep-wake EEG [5, 6]. We investigated the use of an EEG response that tracks the natural speech envelope (NSE) of spoken language [7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22] in healthy controls and brain-injured patients (vegetative state to emergence from minimally conscious state). As audition is typically preserved after brain injury, auditory paradigms may be preferred in searching for covert cognitive function [23, 24, 25]. NSE measures are obtained by cross-correlating EEG with the NSE. We compared NSE latencies and amplitudes with and without consideration of fMRI assessments. NSE latencies showed significant and progressive delay across diagnostic categories. Patients who could carry out fMRI-based mental imagery tasks showed no statistically significant difference in NSE latencies relative to healthy controls; this subgroup included patients without behavioral command following. The NSE may stratify patients with severe brain injuries and identify those patients demonstrating “cognitive motor dissociation” (CMD) [26] who show only covert evidence of command following utilizing neuroimaging or electrophysiological methods that demand high levels of cognitive function. Thus, the NSE is a passive measure that may provide a useful screening tool to improve detection of covert cognition with fMRI or other methods and improve stratification of patients with disorders of consciousness in research studies.
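
    The NSE measure as described, cross-correlating EEG with the natural speech envelope and reading off the peak latency, can be sketched as follows. The signals and the sampling rate are placeholder assumptions; real use would involve filtering and epoching the EEG first.

```python
# Sketch only: NSE-style cross-correlation between EEG and speech envelope.
import numpy as np
from scipy.signal import hilbert, correlate

fs = 250                                   # assumed EEG sampling rate
speech = np.random.randn(fs * 60)          # placeholder for 60 s of speech
envelope = np.abs(hilbert(speech))         # natural speech envelope (NSE)
# Toy "EEG": the envelope delayed by 180 ms plus noise.
eeg = np.roll(envelope, int(0.18 * fs)) + np.random.randn(len(envelope))

xc = correlate(eeg - eeg.mean(), envelope - envelope.mean(), mode="full")
lags = np.arange(-len(envelope) + 1, len(envelope)) / fs
peak = lags[np.abs(xc).argmax()]
print(f"NSE peak latency: {peak * 1e3:.0f} ms")   # ~180 ms for this toy data
```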

  • Journal article
    Sethi S, Ewers R, Jones N, Orme D, Picinali L et al., 2018,

    Robust, real-time and autonomous monitoring of ecosystems with an open, low-cost, networked device

    , Methods in Ecology and Evolution, Vol: 9, Pages: 2383-2387, ISSN: 2041-210X

    1. Automated methods of monitoring ecosystems provide a cost-effective way to track changes in natural systems' dynamics across temporal and spatial scales. However, methods of recording and storing data captured from the field still require significant manual effort. 2. Here we introduce an open-source, inexpensive, fully autonomous ecosystem monitoring unit for capturing and remotely transmitting continuous data streams from field sites over long time periods. We provide a modular software framework for deploying various sensors, together with implementations to demonstrate proof of concept for continuous audio monitoring and time-lapse photography. 3. We show how our system can outperform comparable technologies for fractions of the cost, provided a local mobile network link is available. The system is robust to unreliable network signals and has been shown to function in extreme environmental conditions, such as in the tropical rainforests of Sabah, Borneo. 4. We provide full details on how to assemble the hardware and the open-source software. Paired with appropriate automated analysis techniques, this system could provide spatially dense, near real-time, continuous insights into ecosystem and biodiversity dynamics at a low cost.
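
    A minimal sketch of the modular sensor idea described in point 2: each sensor implements a common capture interface, and a scheduler records data and hands it to an uploader (e.g. the mobile-network link). Class and function names are illustrative, not the project's actual software.

```python
# Sketch only: a modular capture loop for an autonomous monitoring unit.
import time

class Sensor:
    def capture(self):
        """Return a (filename, payload) pair."""
        raise NotImplementedError

class AudioSensor(Sensor):
    def capture(self):
        # A real unit would record from a sound card here.
        return ("audio_%d.raw" % int(time.time()), b"\x00" * 1024)

def monitor(sensors, upload, interval_s=60.0, cycles=3):
    for _ in range(cycles):                  # real deployments loop forever
        for sensor in sensors:
            name, data = sensor.capture()
            upload(name, data)               # e.g. queue for the network link
        time.sleep(interval_s)

monitor([AudioSensor()], upload=lambda n, d: print("queued", n), interval_s=0.0)
```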

  • Conference paper
    Dawson PJ, De Sena E, Naylor PA, 2018,

    An acoustic image-source characterisation of surface profiles

    , 2018 26th European Signal Processing Conference (EUSIPCO), Publisher: IEEE, Pages: 2130-2134, ISSN: 2076-1465

    The image-source method models the specular reflection from a plane by means of a secondary source positioned at the source's reflected image. The method has been widely used in acoustics to model the reverberant field of rectangular rooms, but can also be used for general-shaped rooms and non-flat reflectors. This paper explores the relationship between the physical properties of a non-flat reflector and the statistical properties of the associated cloud of image-sources. It is shown here that the standard deviation of the image-sources is strongly correlated with the ratio between depth and width of the reflector's spatial features.
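
    The image-source construction the paper starts from can be sketched directly: each planar facet of a (possibly non-flat) reflector maps the source to its mirror image, and the spread of the resulting image-source cloud can then be analysed. The corrugated facet geometry below is an illustrative assumption.

```python
# Sketch only: image sources for a non-flat reflector modelled as facets.
import numpy as np

def image_source(src, point, normal):
    """Mirror a source position across the plane through `point`
    with normal `normal`."""
    n = normal / np.linalg.norm(normal)
    return src - 2.0 * np.dot(src - point, n) * n

src = np.array([0.0, 0.0, 1.5])
# A corrugated reflector approximated by facets with tilted normals:
facets = [(np.array([x, 0.0, 0.0]), np.array([np.sin(0.3 * x), 0.0, 1.0]))
          for x in np.linspace(-1.0, 1.0, 21)]
cloud = np.array([image_source(src, p, n) for p, n in facets])
print("image-source std per axis:", cloud.std(axis=0))
```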

  • Conference paper
    Moore AH, Lightburn L, Xue W, Naylor P, Brookes D et al.,

    Binaural mask-informed speech enhancement for hearing aids with head tracking

    , International Workshop on Acoustic Signal Enhancement (IWAENC 2018), Publisher: IEEE

    An end-to-end speech enhancement system for hearing aids is proposed which seeks to improve the intelligibility of binaural speech in noise during head movement. The system uses a reference beamformer whose look direction is informed by knowledge of the head orientation and the a priori known direction of the desired source. From this a time-frequency mask is estimated using a deep neural network. The binaural signals are obtained using bilateral beamformers followed by a classical minimum mean square error speech enhancer, modified to use the estimated mask as a speech presence probability prior. In simulated experiments, the improvement in a binaural intelligibility metric (DBSTOI) given by the proposed system relative to beamforming alone corresponds to an SNR improvement of 4 to 6 dB. Results also demonstrate the individual contributions of incorporating the mask and the head orientation-aware beam steering to the proposed system.
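
    The final enhancement stage described here, a classical spectral gain modified so that a time-frequency mask acts as a speech presence probability prior, can be sketched as below. The mask and noise estimate are placeholders; in the paper the mask comes from a deep neural network operating on a beamformer output.

```python
# Sketch only: a spectral gain with a mask-based speech presence prior.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
noisy = np.random.randn(fs)                          # placeholder noisy signal
f, t, Y = stft(noisy, fs=fs, nperseg=512)

noise_psd = np.mean(np.abs(Y[:, :10]) ** 2, axis=1, keepdims=True)
snr = np.maximum(np.abs(Y) ** 2 / noise_psd - 1.0, 1e-3)  # a priori SNR estimate
wiener = snr / (1.0 + snr)                                # classical MMSE-style gain

mask = np.random.rand(*Y.shape)              # placeholder for the DNN mask
gain = mask * wiener + (1.0 - mask) * 0.1    # mask acts as speech presence prior
_, enhanced = istft(gain * Y, fs=fs, nperseg=512)
print(enhanced.shape)
```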

  • Conference paper
    Kim C, Steadman M, Lestang JH, Goodman DFM, Picinali L et al., 2018,

    A VR-based mobile platform for training to non-individualized binaural 3D audio

    , 144th Audio Engineering Society Convention 2018

    Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). The individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, and instead lead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses HRTF sets for audio. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen to and navigate in an acoustically neutral scene and locate invisible target sound sources presented at random directions using non-individualized HRTFs in repetitive sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user's localization accuracy, including visual indication of the target location, and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions, using non-individualized HRTF sets. The localization performance evaluations showed reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.

  • Conference paper
    Forte AE, Etard OE, Reichenbach JDT, 2018,

    Selective Auditory Attention At The Brainstem Level

    , ARO 2018
  • Conference paper
    Saiz Alia M, Askari A, Forte AE, Reichenbach JDT et al., 2018,

    A model of the human auditory brainstem response to running speech

    , ARO 2018
  • Conference paper
    Kegler M, Etard OE, Forte AE, Reichenbach JDT et al., 2018,

    Complex Statistical Model for Detecting the Auditory Brainstem Response to Natural Speech and for Decoding Attention from High-Density EEG Recordings

    , ARO 2018
  • Conference paper
    Lim V, Frangakis N, Molina Tanco L, Picinali L et al.,

    PLUGGY: A Pluggable Social Platform for Cultural Heritage Awareness and Participation

    , International Workshop on Analysis in Digital Cultural Heritage 2017
  • Journal article
    Etard OE, Kegler M, Braiman C, Forte AE, Reichenbach JDT et al., 2018,

    Real-time decoding of selective attention from the human auditory brainstem response to continuous speech

    , bioRxiv
  • Journal article
    Reichenbach JDT, Ciganovic N, Warren R, Keceli B, Jacob S, Fridberger A et al., 2018,

    Static length changes of cochlear outer hair cells can tune low-frequency hearing

    , PLoS Computational Biology, Vol: 14, ISSN: 1553-734X

    The cochlea not only transduces sound-induced vibration into neural spikes, it also amplifies weak sound to boost its detection. Actuators of this active process are sensory outer hair cells in the organ of Corti, whereas the inner hair cells transduce the resulting motion into electric signals that propagate via the auditory nerve to the brain. However, how the outer hair cells modulate the stimulus to the inner hair cells remains unclear. Here, we combine theoretical modeling and experimental measurements near the cochlear apex to study the way in which length changes of the outer hair cells deform the organ of Corti. We develop a geometry-based kinematic model of the apical organ of Corti that reproduces salient, yet counter-intuitive features of the organ's motion. Our analysis further uncovers a mechanism by which a static length change of the outer hair cells can sensitively tune the signal transmitted to the sensory inner hair cells. When the outer hair cells are in an elongated state, stimulation of inner hair cells is largely inhibited, whereas outer hair cell contraction leads to a substantial enhancement of sound-evoked motion near the hair bundles. This novel mechanism for regulating the sensitivity of the hearing organ applies to the low frequencies that are most important for the perception of speech and music. We suggest that the proposed mechanism might underlie frequency discrimination at low auditory frequencies, as well as our ability to selectively attend auditory signals in noisy surroundings.

  • Journal article
    Dietz M, Lestang J-H, Majdak P, Stern RM, Marquardt T, Ewert SD, Hartmann WM, Goodman DFM et al., 2017,

    A framework for testing and comparing binaural models

    , Hearing Research, Vol: 360, Pages: 92-106, ISSN: 0378-5955

    Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results which has led to controversies. This can be best resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It operates models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: The experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.
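
    The interface idea can be sketched as three pluggable components, an experiment trial loop, an auditory pathway model and an artificial observer that returns the same output format as a test subject. All class and method names below are illustrative assumptions, not the framework's actual API.

```python
# Sketch only: experiment, model and artificial observer behind one interface.
import numpy as np

class AuditoryModel:
    def process(self, left, right):
        # Stand-in for a binaural pathway model: estimate the interaural delay.
        lag = np.argmax(np.correlate(left, right, mode="full")) - (len(right) - 1)
        return {"itd_samples": lag}        # negative when the left ear leads

class ArtificialObserver:
    def decide(self, internal):
        # Same output format a test subject would give: "left" or "right".
        return "left" if internal["itd_samples"] < 0 else "right"

def run_trial(stimulus_pair, model, observer):
    return observer.decide(model.process(*stimulus_pair))

left = np.random.randn(1000)
right = np.roll(left, 5)                   # right ear delayed: source on the left
print(run_trial((left, right), AuditoryModel(), ArtificialObserver()))  # "left"
```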

  • Journal article
    Evers C, Naylor PA, 2017,

    Optimized Self-Localization for SLAM in Dynamic Scenes using Probability Hypothesis Density Filters

    , IEEE Transactions on Signal Processing, Vol: 66, Pages: 863-878, ISSN: 1053-587X

    In many applications, sensors that map the positions of objects in unknown environments are installed on dynamic platforms. As measurements are relative to the observer's sensors, scene mapping requires accurate knowledge of the observer state. However, in practice, observer reports are subject to positioning errors. Simultaneous Localization and Mapping (SLAM) addresses the joint estimation problem of observer localization and scene mapping. State-of-the-art approaches typically use visual or optical sensors and therefore rely on static beacons in the environment to anchor the observer estimate. However, many applications involving sensors that are not conventionally used for SLAM are affected by highly dynamic scenes, such that the static world assumption is invalid. This paper proposes a novel approach for dynamic scenes, called GEneralized Motion (GEM)-SLAM. Based on Probability Hypothesis Density (PHD) filters, the proposed approach probabilistically anchors the observer state by fusing observer information inferred from the scene with reports of the observer motion. This paper derives the general, theoretical framework for GEM-SLAM and shows that it generalizes existing PHD-based SLAM algorithms. Simulations for a model-specific realization using range-bearing sensors and multiple moving objects highlight that GEM-SLAM achieves significant improvements over three benchmark algorithms.

  • Conference paper
    Papayiannis C, Evers C, Naylor PA, 2017,

    Sparse parametric modeling of the early part of acoustic impulse responses

    , 25th European Signal Processing Conference (EUSIPCO), Publisher: IEEE, Pages: 678-682, ISSN: 2076-1465

    Acoustic channels are typically described by their Acoustic Impulse Response (AIR) as a Moving Average (MA) process. Such AIRs are often considered in terms of their early and late parts, describing discrete reflections and the diffuse reverberation tail respectively. We propose an approach for constructing a sparse parametric model for the early part. The model aims to reduce the number of parameters needed to represent the early part, while allowing the MA coefficients that describe it to be reconstructed from the representation. It consists of a representation of the reflections arriving at the receiver as delayed copies of an excitation signal. The times-of-arrival of reflections are not restricted to integer sample instants, and a dynamically estimated model for the excitation sound is used. We also present a corresponding parameter estimation method, which is based on regularized regression and nonlinear optimization. The proposed method also serves as an analysis tool, since estimated parameters can be used for the estimation of room geometry, the mixing time and other channel properties. Experiments involving simulated and measured AIRs are presented, in which the AIR coefficient reconstruction-error energy does not exceed 11.4% of the energy of the original AIR coefficients. The results also indicate dimensionality reduction figures exceeding 90% when compared to an MA process representation.
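
    The representation idea can be illustrated compactly: the early AIR is encoded as a few (amplitude, delay) reflection pairs with delays not restricted to integer samples, and the MA coefficients are reconstructed by fractional-delay (sinc) interpolation of an excitation pulse. A plain sinc excitation is assumed below; the paper estimates the excitation dynamically.

```python
# Sketch only: reconstruct MA coefficients from sparse reflection parameters.
import numpy as np

def reconstruct_air(reflections, n_taps=2048):
    """reflections: list of (amplitude, delay_in_samples) pairs;
    delays may be fractional."""
    n = np.arange(n_taps)
    air = np.zeros(n_taps)
    for amp, delay in reflections:
        air += amp * np.sinc(n - delay)    # fractional-delay impulse
    return air

# Direct path plus two early reflections at non-integer sample delays:
air = reconstruct_air([(1.0, 100.0), (0.6, 233.4), (0.4, 381.7)])
print(np.argmax(np.abs(air)))              # -> 100 (the direct path)
```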

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
