Citation

BibTeX format

@unpublished{Aloufi:2019,
author = {Aloufi, R and Haddadi, H and Boyle, D},
publisher = {arXiv},
title = {Emotionless: privacy-preserving speech analysis for voice assistants},
url = {http://arxiv.org/abs/1908.03632v1},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB  - Voice-enabled interactions provide more human-like experiences in many popular IoT systems. Cloud-based speech analysis services extract useful information from voice input using speech recognition techniques. The voice signal is a rich resource that discloses several possible states of a speaker, such as emotional state, confidence and stress levels, physical condition, age, gender, and personal traits. Service providers can build a very accurate profile of a user's demographic category and personal preferences, and may compromise privacy. To address this problem, a privacy-preserving intermediate layer between users and cloud services is proposed to sanitize the voice input. It aims to maintain utility while preserving user privacy. It achieves this by collecting real-time speech data and analyzing the signal to ensure privacy protection prior to sharing this data with service providers. Precisely, the sensitive representations are extracted from the raw signal using transformation functions and then wrapped via voice conversion technology. Experimental evaluation based on emotion recognition to assess the efficacy of the proposed method shows that identification of the sensitive emotional state of the speaker is reduced by ~96%.
AU  - Aloufi,R
AU  - Haddadi,H
AU  - Boyle,D
PB  - arXiv
PY  - 2019///
TI  - Emotionless: privacy-preserving speech analysis for voice assistants
UR  - http://arxiv.org/abs/1908.03632v1
ER -
