Imperial College London

Prof. Dr. Tobias Reichenbach

Faculty of Engineering, Department of Bioengineering

Visiting Professor
 
 
 

Contact

 

+44 (0)20 7594 6370
reichenbach

 
 

Location

 

4.12, Royal School of Mines, South Kensington Campus



Publications

Citation

BibTeX format

@article{Varano:2022:10.1101/2021.12.18.471222,
author = {Varano, E and Vougioukas, K and Ma, P and Petridis, S and Pantic, M and Reichenbach, T},
doi = {10.1101/2021.12.18.471222},
journal = {Frontiers in Neuroscience},
title = {Speech-driven facial animations improve speech-in-noise comprehension of humans},
url = {http://dx.doi.org/10.1101/2021.12.18.471222},
year = {2022}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Understanding speech becomes a demanding task when the environment is noisy. Comprehension of speech in noise can be substantially improved by looking at the speaker’s face, and this audiovisual benefit is even more pronounced in people with hearing impairment. Recent advances in AI have made it possible to synthesize photorealistic talking faces from a speech recording and a still image of a person’s face in an end-to-end manner. However, it has remained unknown whether such facial animations improve speech-in-noise comprehension. Here we consider facial animations produced by a recently introduced generative adversarial network (GAN), and show that humans cannot distinguish between the synthesized and the natural videos. Importantly, we then show that the end-to-end synthesized videos significantly aid humans in understanding speech in noise, although the natural facial motions yield an even higher audiovisual benefit. We further find that an audiovisual speech recognizer (AVSR) benefits from the synthesized facial animations as well. Our results suggest that synthesizing facial motions from speech can be used to aid speech comprehension in difficult listening environments.
AU - Varano,E
AU - Vougioukas,K
AU - Ma,P
AU - Petridis,S
AU - Pantic,M
AU - Reichenbach,T
DO - 10.1101/2021.12.18.471222
PY - 2022///
SN - 1662-453X
TI - Speech-driven facial animations improve speech-in-noise comprehension of humans
T2 - Frontiers in Neuroscience
UR - http://dx.doi.org/10.1101/2021.12.18.471222
UR - https://www.frontiersin.org/articles/10.3389/fnins.2021.781196/full
UR - http://hdl.handle.net/10044/1/93894
ER -