Conference paper: McKnight S, Hogg A, Neo V, et al., 2021,
A study of salient modulation domain features for speaker identification, Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Publisher: IEEE
This paper studies the ranges of acoustic and modulation frequencies of speech most relevant for identifying speakers and compares the speaker-specific information present in the temporal envelope against that present in the temporal fine structure. This study uses correlation and feature importance measures, random forest and convolutional neural network models, and reconstructed speech signals with specific acoustic and/or modulation frequencies removed to identify the salient points. It is shown that the range of modulation frequencies associated with the fundamental frequency is more important than the 1-16 Hz range most commonly used in automatic speech recognition, and that the 0 Hz modulation frequency band contains significant speaker information. It is also shown that the temporal envelope is more discriminative among speakers than the temporal fine structure, but that the temporal fine structure still contains useful additional information for speaker identification. This research aims to provide a timely addition to the literature by identifying specific aspects of speech relevant for speaker identification that could be used to enhance the discriminant capabilities of machine learning models.
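The envelope/fine-structure split this abstract refers to is conventionally derived from the analytic signal. A minimal stdlib Python sketch follows; the naive DFT-based Hilbert transform and the test tone are illustrative choices, not the paper's implementation:

```python
import cmath
import math

def dft(x):
    # Naive O(N^2) DFT; adequate for a short illustrative signal.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * math.pi * k * n / N) for k in range(N)) / N
            for n in range(N)]

def envelope_and_tfs(x):
    # Analytic signal via the frequency-domain Hilbert transform:
    # zero the negative frequencies, double the positive ones.
    N = len(x)
    X = dft(x)
    h = [0.0] * N
    h[0] = 1.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    for k in range(1, (N + 1) // 2):
        h[k] = 2.0
    a = idft([X[k] * h[k] for k in range(N)])
    env = [abs(s) for s in a]            # temporal envelope
    tfs = [cmath.phase(s) for s in a]    # temporal fine structure (phase)
    return env, tfs

# A pure tone of amplitude 0.5 has a flat temporal envelope of 0.5.
x = [0.5 * math.cos(2 * math.pi * 4 * n / 64) for n in range(64)]
env, tfs = envelope_and_tfs(x)
```

The envelope carries the slow amplitude modulations studied in the paper, while the fine structure retains the carrier phase.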
Conference paper: Hogg A, Neo V, Weiss S, et al., 2021,
A polynomial eigenvalue decomposition MUSIC approach for broadband sound source localization, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Publisher: IEEE
Direction of arrival (DoA) estimation for sound source localization is increasingly prevalent in modern devices. In this paper, we explore a polynomial extension to the multiple signal classification (MUSIC) algorithm, spatio-spectral polynomial (SSP)-MUSIC, and evaluate its performance when using speech sound sources. In addition, we also propose three essential enhancements for SSP-MUSIC to work with noisy reverberant audio data. This paper includes an analysis of SSP-MUSIC using speech signals in a simulated room for different noise and reverberation conditions and the first task of the LOCATA challenge. We show that SSP-MUSIC is more robust to noise and reverberation compared to independent frequency bin (IFB) approaches and improvements can be seen for single sound source localization at signal-to-noise ratios (SNRs) below 5 dB and reverberation times (T60s) larger than 0.7 s.
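SSP-MUSIC operates on polynomial covariance matrices; as background, the narrowband MUSIC pseudospectrum it extends can be sketched for a toy two-microphone, single-source case. The array geometry, angles and closed-form noise subspace below are illustrative assumptions, not the paper's setup:

```python
import cmath
import math

def steering(theta, n_mics=2, spacing=0.5):
    # Far-field steering vector for a uniform linear array
    # (spacing in wavelengths).
    return [cmath.exp(-2j * math.pi * spacing * m * math.sin(theta))
            for m in range(n_mics)]

def music_spectrum(noise_vec, angles):
    # Narrowband MUSIC pseudospectrum: large where the steering
    # vector is orthogonal to the noise subspace.
    spec = []
    for th in angles:
        a = steering(th)
        proj = sum(ai.conjugate() * vi for ai, vi in zip(a, noise_vec))
        spec.append(1.0 / (abs(proj) ** 2 + 1e-12))
    return spec

# Single source at 30 degrees: for a 2-mic array the noise subspace
# is the 1-D orthogonal complement of the source steering vector.
true_doa = math.radians(30.0)
a1, a2 = steering(true_doa)
noise_vec = [-a2.conjugate(), a1.conjugate()]

angles = [math.radians(d) for d in range(-90, 91)]
spec = music_spectrum(noise_vec, angles)
est_deg = max(range(len(angles)), key=lambda i: spec[i]) - 90
```

In practice the noise subspace comes from an eigendecomposition of the estimated spatial covariance matrix; the closed-form vector here stands in for that step.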
Conference paper: Jones DT, Sharma D, Kruchinin SY, et al., 2021,
Spatial Coding for Microphone Arrays using IPNLMS-Based RTF Estimation, 2021 IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA)
Conference paper: Neo V, Evers C, Naylor P, 2021,
Polynomial matrix eigenvalue decomposition-based source separation using informed spherical microphone arrays, IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA), Publisher: IEEE
Audio source separation is essential for many applications such as hearing aids, telecommunications, and robot audition. Subspace decomposition approaches using polynomial matrix eigenvalue decomposition (PEVD) algorithms applied to the microphone signals, or lower-dimension eigenbeams for spherical microphone arrays, are effective for speech enhancement. In this work, we extend the speech enhancement approach and propose a PEVD subspace algorithm that uses eigenbeams for source separation. The proposed PEVD-based source separation approach performs comparably with state-of-the-art algorithms, such as those based on independent component analysis (ICA) and multi-channel non-negative matrix factorization (MNMF). Informal listening examples also indicate that our method does not introduce any audible artifacts.
Conference paper: Hogg AOT, Evers C, Naylor PA, 2021,
Conference paper: Moore A, Vos R, Naylor P, et al., 2021,
Processing pipelines for efficient, physically-accurate simulation of microphone array signals in dynamic sound scenes, ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing, Publisher: IEEE, ISSN: 0736-7791
Multichannel acoustic signal processing is predicated on the fact that the inter-channel relationships between the received signals can be exploited to infer information about the acoustic scene. Recently there has been increasing interest in algorithms which are applicable in dynamic scenes, where the source(s) and/or microphone array may be moving. Simulating such scenes has particular challenges which are exacerbated when real-time, listener-in-the-loop evaluation of algorithms is required. This paper considers candidate pipelines for simulating the array response to a set of point/image sources in terms of their accuracy, scalability and continuity. A new approach, in which the filter kernels are obtained using principal component analysis from time-aligned impulse responses, is proposed. When the number of filter kernels is ≤ 40, the new approach achieves more accurate simulation than competing methods.
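The core step described above, extracting filter kernels as principal components of a set of time-aligned impulse responses, can be sketched with a power iteration for the leading component. This is a stdlib illustration under assumed toy data, not the paper's pipeline (which would use a full PCA and real measured responses):

```python
import math

def dominant_kernel(rows, iters=50):
    # Power iteration for the leading principal direction (the first
    # PCA filter kernel) of time-aligned impulse responses, one per row.
    # Mean removal is omitted for brevity.
    n = len(rows[0])
    v = list(rows[0])  # deterministic start: the first response
    for _ in range(iters):
        # One product with the covariance matrix, without forming it:
        # w = A^T (A v).
        scores = [sum(r[i] * v[i] for i in range(n)) for r in rows]
        w = [sum(s * r[i] for s, r in zip(scores, rows)) for i in range(n)]
        norm = math.sqrt(sum(c * c for c in w)) or 1.0
        v = [c / norm for c in w]
    return v

# Rank-1 toy data: every "impulse response" is a scaled copy of one
# kernel, so the first principal component recovers it exactly.
rows = [[3.0, 0.0, 4.0, 0.0],
        [6.0, 0.0, 8.0, 0.0],
        [-1.5, 0.0, -2.0, 0.0]]
kernel = dominant_kernel(rows)
```

Subsequent kernels would be obtained by deflating the recovered component and iterating again.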
Conference paper: D'Olne E, Moore A, Naylor P, 2021,
Beamforming techniques for hearing aid applications are often evaluated using behind-the-ear (BTE) devices. However, the growing number of wearable devices with microphones has made it possible to consider new geometries for microphone array beamforming. In this paper, we examine the effect of array location and geometry on the performance of binaural minimum power distortionless response (BMPDR) beamformers. In addition to the classical adaptive BMPDR, we evaluate the benefit of a recently-proposed method that estimates the sample covariance matrix using a compact model. Simulation results show that using a chest-mounted array reduces noise by an additional 1.3 dB compared to BTE hearing aids. The compact model method is found to yield higher predicted intelligibility than adaptive BMPDR beamforming, regardless of the array geometry.
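The BMPDR beamformer studied in the paper is binaural and may use a compact covariance model; the underlying minimum power distortionless response weights for one narrowband bin can be sketched generically. The 2-channel covariance matrix and steering vector below are toy values chosen for illustration:

```python
import cmath

def mpdr_weights(R, a):
    # w = R^{-1} a / (a^H R^{-1} a); the 2x2 inverse is written out
    # directly for a two-microphone example.
    (r11, r12), (r21, r22) = R
    det = r11 * r22 - r12 * r21
    Ri = [[r22 / det, -r12 / det],
          [-r21 / det, r11 / det]]
    Ria = [Ri[0][0] * a[0] + Ri[0][1] * a[1],
           Ri[1][0] * a[0] + Ri[1][1] * a[1]]
    denom = a[0].conjugate() * Ria[0] + a[1].conjugate() * Ria[1]
    return [Ria[0] / denom, Ria[1] / denom]

a = [1.0 + 0.0j, cmath.exp(-1j * 0.7)]                 # toy steering vector
R = [[2.0 + 0.0j, 0.3 + 0.1j],
     [0.3 - 0.1j, 1.5 + 0.0j]]                          # Hermitian, pos. def.
w = mpdr_weights(R, a)

# The distortionless constraint holds by construction: w^H a = 1.
response = w[0].conjugate() * a[0] + w[1].conjugate() * a[1]
```

The beamformer minimizes output power subject to passing the target direction undistorted, which is why the response in the steering direction evaluates to exactly one.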
Journal article: Hogg A, Evers C, Moore A, et al., 2021,
This paper demonstrates how the harmonic structure of voiced speech can be exploited to segment multiple overlapping speakers in a speaker diarization task. We explore how a change in the speaker can be inferred from a change in pitch. We show that voiced harmonics can be useful in detecting when more than one speaker is talking, such as during overlapping speaker activity. A novel system is proposed to track multiple harmonics simultaneously, allowing for the determination of onsets and end-points of a speaker's utterance in the presence of an additional active speaker. This system is benchmarked against a segmentation system from the literature that employs a bidirectional long short-term memory network (BLSTM) approach and requires training. Experimental results highlight that the proposed approach outperforms the BLSTM baseline approach by 12.9% in terms of HIT rate for speaker segmentation. We also show that the estimated pitch tracks of our system can be used as features for the BLSTM to achieve further improvements of 1.21% in terms of coverage and 2.45% in terms of purity.
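The pitch cue underlying this work can be illustrated with a basic single-frame autocorrelation pitch estimator; the paper's multi-harmonic tracker is considerably more elaborate, and the synthetic harmonic frame below is an assumption for demonstration:

```python
import math

def pitch_period(x, min_lag=20, max_lag=200):
    # Pick the lag with the largest autocorrelation over a plausible
    # pitch-lag search range.
    best, best_r = min_lag, -float("inf")
    for lag in range(min_lag, max_lag + 1):
        r = sum(x[n] * x[n - lag] for n in range(lag, len(x)))
        if r > best_r:
            best, best_r = lag, r
    return best

# Synthetic "voiced" frame: three harmonics of a 50-sample period,
# with 1/h amplitude roll-off.
period = 50
x = [sum(math.sin(2 * math.pi * h * n / period) / h for h in (1, 2, 3))
     for n in range(600)]
p = pitch_period(x)
```

A change of speaker typically produces a discontinuity in the estimated period track, which is the event the proposed system exploits for segmentation.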
Journal article: Yiallourides C, Naylor PA, 2021,
Time-frequency analysis and parameterisation of knee sounds for non-invasive detection of osteoarthritis, IEEE Transactions on Biomedical Engineering, Vol: 68, Pages: 1250-1261, ISSN: 0018-9294
Objective: In this work the potential of non-invasive detection of knee osteoarthritis is investigated using the sounds generated by the knee joint during walking. Methods: The information contained in the time-frequency domain of these signals and its compressed representations is exploited and their discriminant properties are studied. Their efficacy for the task of normal vs abnormal signal classification is evaluated using a comprehensive experimental framework. Based on this, the impact of the feature extraction parameters on the classification performance is investigated using Classification and Regression Trees (CART), Linear Discriminant Analysis (LDA) and Support Vector Machine (SVM) classifiers. Results: It is shown that classification is successful with an area under the Receiver Operating Characteristic (ROC) curve of 0.92. Conclusion: The analysis indicates improvements in classification performance when using non-uniform frequency scaling and identifies specific frequency bands that contain discriminative features. Significance: Contrary to other studies that focus on sit-to-stand movements and knee flexion/extension, this study used knee sounds obtained during walking. The analysis of such signals leads to non-invasive detection of knee osteoarthritis with high accuracy and could potentially extend the range of available tools for the assessment of the disease as a more practical and cost-effective method without requiring clinical setups.
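The area under the ROC curve reported above can be computed directly from classifier scores as the probability that a random abnormal case outranks a random normal one. A stdlib sketch with hypothetical labels and scores (not the paper's data):

```python
def roc_auc(labels, scores):
    # Rank-based AUC: probability that a random positive outranks a
    # random negative, counting ties as one half.
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy example: 1 = abnormal, 0 = normal, scores from some classifier.
labels = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = roc_auc(labels, scores)
```

An AUC of 0.5 corresponds to chance-level ranking and 1.0 to perfect separation, which puts the paper's 0.92 in context.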
Journal article: Hafezi S, Moore A, Naylor P, 2021,
A conventional approach to wideband multi-source (MS) direction-of-arrival (DOA) estimation is to perform single-source (SS) DOA estimation in time-frequency (TF) bins for which a SS assumption is valid. Such methods use the W-disjoint orthogonality (WDO) assumption, which relies on the sparseness of speech. As the number of sources increases, the chance of violating the WDO assumption increases. In challenging scenarios where multiple simultaneously active sources mask each other over a short period of time, a strongly masked source (due to intermittent activity or quietness) may only rarely be dominant in a TF bin. SS-based DOA estimators fail to detect or accurately localize masked sources in such scenarios. Two analytical approaches are proposed for narrowband DOA estimation based on the MS assumption in a bin in the spherical harmonic domain. In the first approach, eigenvalue decomposition is used to decompose a MS scenario into multiple SS scenarios, and a SS-based analytical DOA estimation is performed on each. The second approach analytically estimates two DOAs per bin, assuming the presence of two active sources per bin. The evaluation validates an improvement of up to twice the accuracy and greater robustness to sensor noise compared with the baseline methods.
Conference paper: Neo VW, Evers C, Naylor PA, 2021,
Polynomial matrix eigenvalue decomposition of spherical harmonics for speech enhancement, IEEE International Conference on Acoustics, Speech and Signal Processing, Publisher: IEEE
Speech enhancement algorithms using polynomial matrix eigenvalue decomposition (PEVD) have been shown to be effective for noisy and reverberant speech. However, these algorithms do not scale well in complexity with the number of channels used in the processing. For a spherical microphone array sampling an order-limited sound field, the spherical harmonics provide a compact representation of the microphone signals in the form of eigenbeams. We propose a PEVD algorithm that uses only the lower-dimension eigenbeams for speech enhancement at a significantly lower computation cost. The proposed algorithm is shown to significantly reduce complexity while maintaining full performance. Informal listening examples have also indicated that the processing does not introduce any noticeable artefacts.
Conference paper: Sharma D, Berger L, Quillen C, et al., 2021,
We present a novel, non-intrusive method that jointly estimates acoustic signal properties associated with perceptual speech quality, level of reverberation and noise in a speech signal. We explore various machine learning frameworks, consisting of popular feature extraction front-ends and two types of regression models, and show the trade-off in performance that must be considered with each combination. We show that a short-time framework consisting of an 80-dimensional log-Mel filter bank feature front-end employing spectral augmentation, followed by a 3-layer LSTM recurrent neural network model, achieves a mean absolute error of 3.3 dB for C50, 2.3 dB for segmental SNR and 0.3 for PESQ estimation on the Libri Augmented (LA) database. The internal VAD for this system achieves an F1 score of 0.93 on this data. The proposed system also achieves a 2.4 dB mean absolute error for C50 estimation on the ACE test set. Furthermore, we show how each type of acoustic parameter correlates with ASR performance in terms of ground truth labels, and additionally show that the estimated C50, SNR and PESQ from our proposed method have a high correlation (greater than 0.92) with WER on the LA test set.
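One of the targets estimated above, segmental SNR, is defined as the mean of per-frame SNRs in dB. A stdlib sketch of the intrusive ground-truth computation; the frame length and the common [-10, 35] dB clamping range are conventional assumptions, not necessarily the paper's exact settings:

```python
import math

def segmental_snr(clean, noisy, frame=160, floor=-10.0, ceil=35.0):
    # Mean per-frame SNR in dB, with each frame clamped to a usual range.
    snrs = []
    for start in range(0, len(clean) - frame + 1, frame):
        sig = sum(c * c for c in clean[start:start + frame])
        err = sum((n - c) ** 2 for c, n in
                  zip(clean[start:start + frame], noisy[start:start + frame]))
        if sig == 0 or err == 0:
            continue
        snr = 10.0 * math.log10(sig / err)
        snrs.append(min(max(snr, floor), ceil))
    return sum(snrs) / len(snrs)

# Additive interference at one tenth of the signal amplitude gives
# roughly 20 dB in every frame.
clean = [math.sin(0.1 * n) for n in range(1600)]
noisy = [c + 0.1 * math.sin(0.37 * n + 1.0) for n, c in enumerate(clean)]
snr = segmental_snr(clean, noisy)
```

The non-intrusive method in the paper predicts this quantity from the noisy signal alone, without access to `clean`.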
Conference paper: Felsheim RC, Brendel A, Naylor PA, et al., 2021,
Head Orientation Estimation from Multiple Microphone Arrays, 28th European Signal Processing Conference (EUSIPCO), Publisher: IEEE, Pages: 491-495, ISSN: 2076-1465
Conference paper: McKnight SW, Hogg A, Naylor P, 2020,
Evaluation of speaker segmentation and diarization normally makes use of forgiveness collars around ground-truth speaker segment boundaries, such that estimated speaker segment boundaries within such collars are considered completely correct. This paper shows that the popular recent approach of removing forgiveness collars from speaker diarization evaluation tools can unfairly penalize speaker diarization systems that correctly estimate speaker segment boundaries. The uncertainty in identifying the start and/or end of a particular phoneme means that the ground-truth segmentation is not perfectly accurate, and even trained human listeners are unable to identify phoneme boundaries with full consistency. This research analyses the phoneme dependence of this uncertainty and shows that it depends on (i) whether the phoneme being detected is at the start or end of an utterance and (ii) what the phoneme is, so that the use of a uniform forgiveness collar is inadequate. This analysis is expected to point the way towards more indicative and repeatable assessment of the performance of speaker diarization systems.
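The effect of a forgiveness collar can be made concrete with a small boundary-matching sketch. The greedy one-to-one matching policy and the toy boundary times are illustrative assumptions, not the scoring rules of any specific diarization toolkit:

```python
def hit_rate(ref_bounds, est_bounds, collar=0.25):
    # Fraction of reference boundaries (seconds) matched by some
    # estimate within +/- collar seconds; each estimate is used once.
    unused = list(est_bounds)
    hits = 0
    for r in ref_bounds:
        match = next((e for e in unused if abs(e - r) <= collar), None)
        if match is not None:
            unused.remove(match)
            hits += 1
    return hits / len(ref_bounds)

ref = [1.00, 3.50, 7.20, 9.00]   # ground-truth boundaries (s)
est = [1.10, 3.30, 8.00]         # estimated boundaries (s)
rate = hit_rate(ref, est)        # with the collar, 2 of 4 are hits
```

Setting `collar=0.0` drives the hit rate to zero even though two estimates are within 200 ms of the ground truth, which is exactly the unfair penalization the paper describes.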
Conference paper: Neo VW, Evers C, Naylor PA, 2021,
The degradation of speech arising from additive background noise and reverberation affects the performance of important speech applications such as telecommunications, hearing aids, voice-controlled systems and robot audition. In this work, we focus on dereverberation. It is shown that the parameterized polynomial matrix eigenvalue decomposition (PEVD)-based speech enhancement algorithm exploits the lack of correlation between speech and the late reflections to enhance the speech component associated with the direct path and early reflections. The algorithm's performance is evaluated using simulations involving measured acoustic impulse responses and noise from the ACE corpus. The simulations and informal listening examples have indicated that the PEVD-based algorithm performs dereverberation over a range of SNRs without introducing any noticeable processing artefacts.
Journal article: Xue W, Moore A, Brookes D, et al., 2020,
Recently we presented a modulation-domain multichannel Kalman filtering (MKF) algorithm for speech enhancement, which jointly exploits the inter-frame modulation-domain temporal evolution of speech and the inter-channel spatial correlation to estimate the clean speech signal. The goal of speech enhancement is to suppress noise while keeping the speech undistorted, and a key problem is to achieve the best trade-off between speech distortion and noise reduction. In this paper, we extend the MKF by presenting a modulation-domain parametric MKF (PMKF) which includes a parameter that enables flexible control of the speech enhancement behaviour in each time-frequency (TF) bin. Based on the decomposition of the MKF cost function, a new cost function for PMKF is proposed, which uses the controlling parameter to weight the noise reduction and speech distortion terms. An optimal PMKF gain is derived using a minimum mean squared error (MMSE) criterion. We analyse the performance of the proposed MKF, and show its relationship to the speech distortion weighted multichannel Wiener filter (SDW-MWF). To evaluate the impact of the controlling parameter on speech enhancement performance, we further propose PMKF speech enhancement systems in which the controlling parameter is adaptively chosen in each TF bin. Experiments on a publicly available head-related impulse response (HRIR) database in different noisy and reverberant conditions demonstrate the effectiveness of the proposed method.
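The noise-reduction versus speech-distortion trade-off that the PMKF parameter controls has a classical scalar analogue in the speech-distortion-weighted Wiener gain applied per TF bin. This is a sketch of that analogue only, under assumed PSD values, not the PMKF gain derived in the paper:

```python
def sdw_gain(speech_psd, noise_psd, mu=1.0):
    # Speech-distortion-weighted Wiener gain for one TF bin:
    # mu > 1 favours noise reduction, mu < 1 favours low speech
    # distortion; mu = 1 recovers the standard Wiener filter.
    return speech_psd / (speech_psd + mu * noise_psd)

g_wiener = sdw_gain(4.0, 1.0)               # standard Wiener gain
g_aggressive = sdw_gain(4.0, 1.0, mu=4.0)   # heavier noise suppression
```

Choosing the trade-off parameter adaptively per TF bin, as the paper proposes for its controlling parameter, lets the filter suppress noise harder where speech is absent while protecting speech-dominated bins.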
Conference paper: Martínez-Colón A, Viciana-Abad R, Perez-Lorenzo JM, et al., 2020,
In the field of social human-robot interaction, and in particular for socially assistive robotics, the capacity to recognize the speaker's discourse in very diverse conditions, where more than one interlocutor may be present, plays an essential role. The use of a microphone array that can be mounted on a robot, supported by a voice enhancement module, has been evaluated, with the goal of improving the performance of current automatic speech recognition (ASR) systems in multi-speaker conditions. An evaluation has been made of the improvement in intelligibility scores that can be achieved by two off-the-shelf ASR solutions in situations covering the typical scenarios where a robot with these characteristics can be found. The results identify the conditions in which a low-computational-cost algorithm can be beneficial for improving intelligibility scores in real environments.
Journal article: Papayiannis C, Evers C, Naylor P, 2020,
Reverberation is present in our workplaces, our homes, concert halls and theatres. This paper investigates how deep learning can use the effect of reverberation on speech to classify a recording in terms of the room in which it was recorded. Existing approaches in the literature rely on domain expertise to manually select acoustic parameters as inputs to classifiers. Estimation of these parameters from reverberant speech is adversely affected by estimation errors, impacting the classification accuracy. In order to overcome the limitations of previously proposed methods, this paper shows how DNNs can perform the classification by operating directly on reverberant speech spectra, and a CRNN with an attention mechanism is proposed for the task. The relationship is investigated between the reverberant speech representations learned by the DNNs and acoustic parameters. For evaluation, AIRs are used from the ACE-challenge dataset that were measured in 7 real rooms. The classification accuracy of the CRNN classifier in the experiments is 78% when using 5 hours of training data and 90% when using 10 hours.
Conference paper: Neo VW, Evers C, Naylor PA, 2020,
The enhancement of noisy speech is important for applications involving human-to-human interactions, such as telecommunications and hearing aids, as well as human-to-machine interactions, such as voice-controlled systems and robot audition. In this work, we focus on reverberant environments. It is shown that, by exploiting the lack of correlation between speech and the late reflections, further noise reduction can be achieved. This is verified using simulations involving actual acoustic impulse responses and noise from the ACE corpus. The simulations show that even without using a noise estimator, our proposed method simultaneously achieves noise reduction, and enhancement of speech quality and intelligibility, in reverberant environments over a wide range of SNRs. Furthermore, informal listening examples highlight that our approach does not introduce any significant processing artefacts such as musical noise.
Journal article: Evers C, Lollmann HW, Mellmann H, et al., 2020,
The ability to localize and track acoustic events is a fundamental prerequisite for equipping machines with the ability to be aware of and engage with humans in their surrounding environment. However, in realistic scenarios, audio signals are adversely affected by reverberation, noise, interference, and periods of speech inactivity. In dynamic scenarios, where the sources and microphone platforms may be moving, the signals are additionally affected by variations in the source-sensor geometries. In practice, approaches to sound source localization and tracking are often impeded by missing estimates of active sources, estimation errors, as well as false estimates. The aim of the LOCAlization and TrAcking (LOCATA) Challenge is to provide an open-access framework for the objective evaluation and benchmarking of broad classes of algorithms for sound source localization and tracking. This paper provides a review of relevant localization and tracking algorithms and, within the context of the existing literature, a detailed evaluation and dissemination of the LOCATA submissions. The evaluation highlights achievements in the field, open challenges, and identifies potential future directions.
Index Terms: Acoustic signal processing, Source localization, Source tracking, Reverberation.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.