Imperial College London

Patrick A. Naylor

Faculty of Engineering, Department of Electrical and Electronic Engineering

Professor of Speech & Acoustic Signal Processing

Contact

 

+44 (0)20 7594 6235
p.naylor
Website


Location

 

Room 803, Electrical Engineering, South Kensington Campus


Summary

 

Dereverberation

Reverberation is an acoustic phenomenon that we all recognize - the echoes heard in a subway tunnel or in a large building. Reverberation is also present in telecommunications when speaking at a distance from the microphone of your phone or computer. The degradation of speech quality due to reverberation is noticeable as 'boxiness' or 'distance' in the sound. When coupled with environmental noise, reverberation also damages speech intelligibility. Automatic speech recognition is also severely affected by reverberation.
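
In signal terms, the reverberant microphone signal can be modelled as the clean speech convolved with the room impulse response (RIR) between talker and microphone. The minimal sketch below illustrates that convolutive model; the exponentially decaying noise 'RIR', the 16 kHz rate and the 0.5 s reverberation time are illustrative stand-ins, not values from any of these projects.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 16000                    # sample rate (Hz), illustrative
t60 = 0.5                     # assumed reverberation time (s), illustrative
rir_len = int(t60 * fs)

# Synthetic RIR: exponentially decaying noise whose amplitude reaches
# -60 dB (a factor of 10^-3) after t60 seconds. Real RIRs are measured
# or simulated; this stand-in is for illustration only.
rng = np.random.default_rng(0)
decay = 10.0 ** (-3.0 * np.arange(rir_len) / rir_len)
rir = rng.standard_normal(rir_len) * decay
rir[0] = 1.0                  # direct-path component

speech = rng.standard_normal(fs)   # stand-in for one second of clean speech
reverberant = fftconvolve(speech, rir)[: len(speech)]
```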

I am currently working on several projects addressing dereverberation - techniques to reduce the level of reverberation in a speech signal. One of these, known as Dereverberation and Reverberation of Audio, Music and Speech (DREAMS), is constructed as a Marie Curie Initial Training Network. Together with partners in 4 EU countries, this project supports 16 researchers in a very dynamic educational environment. I am also working on the use of acoustic signal processing to improve the auditory capabilities of robots, in partnership with an EU consortium including Aldebaran Robotics. The dereverberation techniques employed include LPC residual domain processing, blind SIMO channel identification, channel shortening and adaptive multichannel system inversion, the last of which is sketched below.
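
Of the techniques listed, multichannel system inversion is perhaps the simplest to illustrate. The MINT result of Miyoshi and Kaneda shows that, if two channel impulse responses share no common zeros, there exist inverse filters g1 and g2 such that h1*g1 + h2*g2 is a unit impulse. The following is a minimal least-squares sketch of that idea; the function names and toy channels are my own illustration, not code from any of these projects.

```python
import numpy as np
from scipy.linalg import toeplitz, lstsq

def convmtx(h, n):
    """Toeplitz matrix T such that T @ g == np.convolve(h, g) for len(g) == n."""
    col = np.concatenate([h, np.zeros(n - 1)])
    row = np.zeros(n)
    row[0] = h[0]
    return toeplitz(col, row)

def mint_inverse(h1, h2, g_len):
    """Least-squares MINT: find g1, g2 with h1*g1 + h2*g2 ~ unit impulse."""
    H = np.hstack([convmtx(h1, g_len), convmtx(h2, g_len)])
    d = np.zeros(H.shape[0])
    d[0] = 1.0                          # target: a delta at lag zero
    g, *_ = lstsq(H, d)
    return g[:g_len], g[g_len:]

# Toy example: random length-8 channels, inverse filters of length L - 1.
rng = np.random.default_rng(1)
h1, h2 = rng.standard_normal(8), rng.standard_normal(8)
g1, g2 = mint_inverse(h1, h2, len(h1) - 1)
equalized = np.convolve(h1, g1) + np.convolve(h2, g2)   # ~ [1, 0, 0, ...]
```

With inverse filters of length L - 1 for two length-L channels the linear system is square, so the equalization is exact in the absence of common zeros; in practice the channels must first be estimated blindly and the inversion regularized against estimation errors.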

Binaural Hearing Aids


The sense of hearing degrades significantly with age and, for many people, hearing aid devices become necessary at some point in life. When these are fitted to both ears, the devices have the potential to render spatial sound correctly if the spatial cues of level difference and time difference for sounds between the two ears are correctly preserved. Unfortunately, this is not normally the case with current hearing aids when noise reduction processing is switched on. Together with Prof Marcio Costa from Universidade Federal de Santa Catarina in Brazil, I am working on techniques that enable hearing aid users to localize sound correctly in noisy situations. This is really important if you consider daily activities like crossing a busy street - think about how difficult that would be if you were not able to determine the direction of sounds that you hear. Social situations also call for good localization capability in noise, as was highlighted by Prof Colin Cherry (a former Professor in this department at Imperial College) in his well-known book 'On Human Communication', in which he formulated the so-called 'Cocktail Party Problem'.
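
The cues in question are the interaural time difference (ITD) and the interaural level difference (ILD). As a rough illustration of what binaural processing must preserve, the sketch below estimates broadband ITD and ILD from a pair of ear signals; it is my own simplification for illustration, not the algorithm used in this work.

```python
import numpy as np

def interaural_cues(left, right, fs):
    """Estimate broadband ITD (seconds) and ILD (dB) from two ear signals."""
    # ILD: ratio of signal energies at the two ears, in decibels
    ild_db = 10.0 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))
    # ITD: lag of the peak of the interaural cross-correlation;
    # a negative lag means the left-ear signal leads.
    xcorr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(xcorr)) - (len(right) - 1)
    return lag / fs, ild_db

# Toy check: a source delayed by 5 samples and attenuated at the right ear.
fs = 16000
rng = np.random.default_rng(2)
s = rng.standard_normal(fs)
left = s
right = 0.5 * np.concatenate([np.zeros(5), s[:-5]])
itd, ild = interaural_cues(left, right, fs)   # itd = -5/fs (left leads), ild = +6 dB
```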

Robot Audition


"Why are robots deaf?" That is the question I've been asking myself since 2010 when I realized how little research there is on robot audition in comparison to the huge amount of research on robot vision (or machine vision as it might also be called). It's not actually true that robots are totally deaf since many have one or maybe two microphones but, even so, robotic audition is extremely limited especially in real-world environments including noise and reverberation, as well as multiple talkers. Could you image a robot serving you in a coffee shop where the babble noise is strong and many people are present? Don't worry if you don't like that image because it isn't going to happen soon. At least, that's what I would have said before the EARS project started. EARS is an EU programme of research and aims to address the acoustic signal processing issues in just such a situation. The project will actually target a demonstration application of a 'welcoming robot', so perhaps one day when you arrive to check in at a hotel or exhibition, you may be greated by a humanoid robot who will help you on your way using natural dialog and human-robot interations. Maybe the robot would even carry your suitcase without expecting a gratuity :-)

Collaborators

Prof. A. Sarti, Polytechnic of Milan, Environment-aware intelligent acoustic sensing, 2009

Prof. Walter Kellermann, Friedrich-Alexander University Erlangen-Nürnberg, Environment-aware intelligent acoustic sensing, 2009 - 2011

Guest Lectures

Measurement and Exploitation of Reverberation in Speech Signals, Institute of Sound and Vibration Research, University of Southampton, Southampton, UK, 2017

Measurement and Exploitation of Reverberation in Speech Signals, Aalborg University, Aalborg, Denmark, 2017

Measurement and Exploitation of Reverberation in Speech Signals, Oticon A/S, Copenhagen, Denmark, 2017

Multichannel Blind Acoustic System Identification with Under-modelling, Aachen University, Aachen, Germany, 2016

Enhancement of Ambient Speech for Robot Audition, 8th Speech in Noise Workshop, Groningen, Netherlands, 2016

Audition in Robots, University Lille 1, Lille, France, 2015

Signal Processing for Robot Audition, University of York, York, UK, 2015

What’s Happening in Speech Enhancement and Acoustic Signal Processing?, UK-Speech Conference, Cambridge, 2013

Audio Signal Processing and Applications to Speech Dereverberation, Institute of Sound and Vibration Research (ISVR), University of Southampton, Southampton, UK, 2013

BBC News 'Click' - Speech Recognition, BBC, London, 2013

Acoustic Signal Processing in Noise: It's Not Getting Any Quieter, International Workshop on Acoustic Echo and Noise Control, Aachen, Germany, 2012

Multichannel Acoustic System Identification and Inversion for Dereverberation, HP Labs, Palo Alto, USA, 2011

Speech and Audio Processing with Applications to Speech Dereverberation, RWTH Aachen University, Germany, 2011

Speech Processing in Law Enforcement Applications, Oldenburg University, Germany, 2011

Intelligibility Estimation in Law Enforcement Speech Processing, ITG Speech Communication Conference, Ruhr-University Bochum, Germany, 2011

Trends in Audio and Acoustic Signal Processing, IEEE Int. Conf. on Acoustics, Speech and Signal Processing, Prague, 2011