Imperial College London

Dr David Boyle

Faculty of Engineering, Dyson School of Design Engineering

Lecturer

Contact

+44 (0)20 7594 8172
david.boyle
CV

Location

Dyson Building, South Kensington Campus


Publications

Citation

BibTeX format

@unpublished{Aloufi:2019,
author = {Aloufi, R and Haddadi, H and Boyle, D},
publisher = {arXiv},
title = {Emotion filtering at the edge},
url = {http://arxiv.org/abs/1909.08500v1},
year = {2019}
}

RIS format (EndNote, RefMan)

TY  - UNPB
AB  - Voice controlled devices and services have become very popular in the consumer IoT. Cloud-based speech analysis services extract information from voice inputs using speech recognition techniques. Service providers can thus build very accurate profiles of users' demographic categories, personal preferences, emotional states, etc., and may therefore significantly compromise their privacy. To address this problem, we have developed a privacy-preserving intermediate layer between users and cloud services to sanitize voice input directly at edge devices. We use CycleGAN-based speech conversion to remove sensitive information from raw voice input signals before regenerating neutralized signals for forwarding. We implement and evaluate our emotion filtering approach using a relatively cheap Raspberry Pi 4, and show that performance accuracy is not compromised at the edge. In fact, signals generated at the edge differ only slightly (~0.16%) from cloud-based approaches for speech recognition. Experimental evaluation of generated signals shows that identification of the emotional state of a speaker can be reduced by ~91%.
AU  - Aloufi, R
AU  - Haddadi, H
AU  - Boyle, D
PB  - arXiv
PY  - 2019///
TI  - Emotion filtering at the edge
UR  - http://arxiv.org/abs/1909.08500v1
UR  - https://arxiv.org/abs/1909.08500v1
UR  - http://hdl.handle.net/10044/1/75404
ER -
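The abstract describes a pipeline of a simple shape: voice is captured on an edge device, passed through a conversion model that removes emotional cues, and only the neutralized signal is forwarded to the cloud service. A minimal structural sketch of that intermediate layer, in Python; note that `neutralize` here is a stand-in stub for the paper's CycleGAN-based voice-conversion model (not reproduced), and all names are illustrative, not taken from the paper's code:

```python
import numpy as np


def neutralize(signal: np.ndarray) -> np.ndarray:
    """Placeholder for the CycleGAN-based speech-conversion step.

    The real model regenerates the utterance with sensitive
    (e.g. emotional) cues removed; this stub simply returns a
    copy so the pipeline structure can be exercised.
    """
    return signal.copy()


def edge_filter(raw_signal: np.ndarray, forward) -> None:
    """Privacy-preserving intermediate layer: sanitize the audio
    on-device, then forward only the neutralized signal."""
    sanitized = neutralize(raw_signal)
    forward(sanitized)  # only the sanitized audio leaves the device


# Example: a fake cloud uploader that records what it receives.
received = []
edge_filter(np.zeros(16000, dtype=np.float32), received.append)
```

The key design point from the abstract is that the raw signal never crosses the `forward` boundary: whatever the conversion model does, the cloud service only ever sees its output.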