Aloufi R, Haddadi H, Boyle D, 2019, Emotion filtering at the edge, Publisher: arXiv
Voice controlled devices and services have become very popular in the consumer IoT. Cloud-based speech analysis services extract information from voice inputs using speech recognition techniques. Service providers can thus build very accurate profiles of users' demographic categories, personal preferences, emotional states, etc., and may therefore significantly compromise their privacy. To address this problem, we have developed a privacy-preserving intermediate layer between users and cloud services to sanitize voice input directly at edge devices. We use CycleGAN-based speech conversion to remove sensitive information from raw voice input signals before regenerating neutralized signals for forwarding. We implement and evaluate our emotion filtering approach using a relatively cheap Raspberry Pi 4, and show that performance accuracy is not compromised at the edge. In fact, signals generated at the edge differ only slightly (~0.16%) from cloud-based approaches for speech recognition. Experimental evaluation of generated signals shows that identification of the emotional state of a speaker can be reduced by ~91%.
Aloufi R, Haddadi H, Boyle D, 2019, Emotionless: privacy-preserving speech analysis for voice assistants, Publisher: arXiv
Voice-enabled interactions provide more human-like experiences in many popular IoT systems. Cloud-based speech analysis services extract useful information from voice input using speech recognition techniques. The voice signal is a rich resource that discloses several possible states of a speaker, such as emotional state, confidence and stress levels, physical condition, age, gender, and personal traits. Service providers can build a very accurate profile of a user's demographic category and personal preferences, and may compromise privacy. To address this problem, a privacy-preserving intermediate layer between users and cloud services is proposed to sanitize the voice input. It aims to maintain utility while preserving user privacy. It achieves this by collecting real-time speech data and analyzing the signal to ensure privacy protection prior to sharing this data with service providers. Precisely, sensitive representations are extracted from the raw signal using transformation functions and then wrapped via voice conversion technology. Experimental evaluation based on emotion recognition to assess the efficacy of the proposed method shows that identification of the sensitive emotional state of the speaker is reduced by ~96%.
Moore J, Arcia-Moret A, Yadav P, et al., 2019, Zest: REST over ZeroMQ, 2019 IEEE INTERNATIONAL CONFERENCE ON PERVASIVE COMPUTING AND COMMUNICATIONS WORKSHOPS (PERCOM WORKSHOPS), Pages: 1015-1019, ISSN: 2474-2503
Malekzadeh M, Clegg RG, Cavallaro A, et al., Mobile Sensor Data Anonymization, ACM/IEEE International Conference on Internet of Things Design and Implementation (IoTDI 2019)
Data from motion sensors such as accelerometers and gyroscopes embedded in our devices can reveal secondary undesired, private information about our activities. This information can be used for malicious purposes such as user identification by application developers. To address this problem, we propose a data transformation mechanism that enables a device to share data for specific applications (e.g. monitoring their daily activities) without revealing private user information (e.g. user identity). We formulate this anonymization process based on an information-theoretic approach and propose a new multi-objective loss function for training convolutional auto-encoders (CAEs) to provide a practical approximation to our anonymization problem. This effective loss function forces the transformed data to minimize the information about the user's identity, as well as the data distortion, to preserve application-specific utility. Our training process regulates the encoder to disregard user-identifiable patterns and tunes the decoder to shape the final output independently of users in the training set. A trained CAE can then be deployed on a user's mobile device to anonymize sensor data before sharing it with an app, even for users who are not included in the training dataset. The results, on a dataset of 24 users for activity recognition, show a promising trade-off on transformed data between utility and privacy, with an accuracy for activity recognition over 92%, while reducing the chance of identifying a user to less than 7%.
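The multi-objective idea above — penalize identity leakage while keeping distortion low — can be sketched as a weighted sum of a distortion term and a privacy term. This is an illustrative sketch, not the paper's loss: the function name, the weights `alpha`/`beta`, and the choice of an entropy-based privacy term are all assumptions made here for clarity.

```python
import math

def anonymization_loss(x, x_hat, identity_posterior, alpha=1.0, beta=1.0):
    """Illustrative multi-objective loss trading off data distortion
    against how much the transformed data reveals about user identity.

    x, x_hat           -- original and transformed sensor samples (lists of floats)
    identity_posterior -- an adversary's probability distribution over user
                          identities given x_hat (list of floats summing to 1)
    alpha, beta        -- trade-off weights (hypothetical, not from the paper)
    """
    # Utility term: mean squared distortion between input and output.
    distortion = sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

    # Privacy term: penalize a confident identification by pushing the
    # adversary's posterior towards uniform (i.e. maximum entropy).
    entropy = -sum(p * math.log(p) for p in identity_posterior if p > 0)
    max_entropy = math.log(len(identity_posterior))
    privacy_penalty = max_entropy - entropy  # 0 when the posterior is uniform

    return alpha * distortion + beta * privacy_penalty
```

With a perfect reconstruction and a uniform posterior over users, the loss is zero; a confidently identified user raises the privacy term, so minimizing the loss pulls the transformed data towards being uninformative about identity.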
Osia SA, Rassouli B, Haddadi H, et al., 2019, Privacy Against Brute-Force Inference Attacks, Publisher: IEEE
Zhang C, Patras P, Haddadi H, 2019, Deep Learning in Mobile and Wireless Networking: A Survey, IEEE COMMUNICATIONS SURVEYS AND TUTORIALS, Vol: 21, Pages: 2224-2287
Osia SA, Taheri A, Shamsabadi AS, et al., Deep Private-Feature Extraction, IEEE Transactions on Knowledge and Data Engineering, ISSN: 1041-4347
We present and evaluate the Deep Private-Feature Extractor (DPFE), a deep model which is trained and evaluated based on information-theoretic constraints. Using the selective exchange of information between a user's device and a service provider, DPFE enables the user to prevent certain sensitive information from being shared with a service provider, while allowing them to extract approved information using their model. We introduce and utilize log-rank privacy, a novel measure to assess the effectiveness of DPFE in removing sensitive information, and compare different models based on their accuracy-privacy tradeoff. We then implement and evaluate the performance of DPFE on smartphones to understand its complexity, resource demands, and efficiency tradeoffs. Our results on benchmark image datasets demonstrate that under moderate resource utilization, DPFE can achieve high accuracy for primary tasks while preserving the privacy of sensitive features.
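A rank-based privacy measure of this flavour can be illustrated in a few lines: score privacy by the (log of the) rank the true sensitive class receives in an adversary's posterior, so that a top-ranked true class means a full leak. The paper's exact definition of log-rank privacy may differ; this is a minimal sketch under that assumption.

```python
import math

def log_rank_privacy(posterior, true_index):
    """Sketch of a rank-based privacy score: log2 of the position of the
    true sensitive class when classes are sorted by the adversary's
    posterior probability (rank 1 = the adversary's top guess).

    posterior  -- adversary's probabilities over sensitive classes
    true_index -- index of the actual sensitive class
    Higher scores mean the true class is harder to pick out.
    """
    # Sort class indices by descending posterior probability.
    ranking = sorted(range(len(posterior)), key=lambda i: -posterior[i])
    rank = ranking.index(true_index) + 1  # 1-based rank of the true class
    return math.log2(rank)
```

When the adversary's top guess is correct the score is 0 (no privacy); burying the true class deeper in the ranking raises the score logarithmically.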
Servia-Rodriguez S, Wang L, Zhao JR, et al., 2018, Privacy-preserving personal model training, Proceedings - ACM/IEEE International Conference on Internet of Things Design and Implementation, IoTDI 2018, Pages: 153-164
Many current Internet services rely on inferences from models trained on user data. Commonly, both the training and inference tasks are carried out using cloud resources fed by personal data collected at scale from users. Holding and using such large collections of personal data in the cloud creates privacy risks to the data subjects, but is currently required for users to benefit from such services. We explore how to provide for model training and inference in a system where computation is pushed to the data in preference to moving data to the cloud, obviating many current privacy risks. Specifically, we take an initial model learnt from a small set of users and retrain it locally using data from a single user. We evaluate on two tasks: one supervised learning task, using a neural network to recognise users' current activity from accelerometer traces; and one unsupervised learning task, identifying topics in a large set of documents. In both cases the accuracy is improved. We also analyse the robustness of our approach against adversarial attacks, as well as its feasibility by presenting a performance evaluation on a representative resource-constrained device (a Raspberry Pi).
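The "push computation to the data" step — start from an initial model learnt from a small set of users, then retrain it locally on one user's own data — can be sketched with a toy logistic model. The model, feature values, labels, and learning rate below are purely illustrative, not the paper's setup.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def local_finetune(w, b, X, y, lr=0.5, epochs=200):
    """Fine-tune a shared one-feature logistic model (w, b) on a single
    user's local data (X, y) via gradient descent on the log-loss;
    the raw data never leaves the device."""
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(w * xi + b)
            grad = p - yi          # gradient of log-loss w.r.t. the logit
            w -= lr * grad * xi
            b -= lr * grad
    return w, b

# Initial model learnt from a small set of users (illustrative values).
w0, b0 = 0.1, 0.0
# One user's accelerometer-derived feature and activity labels (toy data).
X_local = [-2.0, -1.5, 1.2, 2.3]
y_local = [0, 0, 1, 1]
w1, b1 = local_finetune(w0, b0, X_local, y_local)
```

After local retraining, the personalized parameters `(w1, b1)` separate this user's samples, while only the model — never the sensor traces — would need to leave the device.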
Osia SA, Shamsabadi AS, Taheri A, et al., 2018, Private and scalable personal data analytics using hybrid edge-to-cloud deep learning, Computer, Vol: 51, Pages: 42-49, ISSN: 0018-9162
Although the ability to collect, collate, and analyze the vast amount of data generated from cyber-physical systems and Internet of Things devices can be beneficial to both users and industry, this process has led to a number of challenges, including privacy and scalability issues. The authors present a hybrid framework where user-centered edge devices and resources can complement the cloud for providing privacy-aware, accurate, and efficient analytics.
Hänsel K, Poguntke R, Haddadi H, et al., 2018, What to put on the user: Sensing technologies for studies and physiology aware systems, ACM Conference on Human Factors in Computing Systems (ACM CHI’18), Publisher: ACM
Fitness trackers not only provide an easy means to acquire physiological data in real-world environments thanks to affordable sensing technologies; they also offer opportunities for physiology-aware applications and studies in HCI. However, their performance is not well understood. In this paper, we report findings on the quality of 3 sensing technologies: PPG-based wrist trackers (Apple Watch, Microsoft Band 2), an ECG belt (Polar H7) and a reference device with stick-on ECG electrodes (Nexus 10). We collected physiological (heart rate, electrodermal activity, skin temperature) and subjective data from 21 participants performing combinations of physical activity and stressful tasks. Our empirical research indicates that wrist devices provide good sensing performance in stationary settings. However, they lack accuracy when participants are mobile or if tasks require physical activity. Based on our findings, we suggest a Design Space for Wearables in Research Settings and reflect on the appropriateness of the investigated technologies in research contexts.
Chamberlain A, Crabtree A, Haddadi H, et al., 2018, Special theme on privacy and the Internet of things, PERSONAL AND UBIQUITOUS COMPUTING, Vol: 22, Pages: 289-292, ISSN: 1617-4909
Crabtree A, Lodge T, Colley J, et al., 2018, Building accountability into the Internet of Things: the IoT Databox model, Journal of Reliable Intelligent Environments, Vol: 4, Pages: 39-55, ISSN: 2199-4668
This paper outlines the IoT Databox model as a means of making the Internet of Things (IoT) accountable to individuals. Accountability is a key to building consumer trust and is mandated by the European Union’s general data protection regulation (GDPR). We focus here on the ‘external’ data subject accountability requirement specified by GDPR and how meeting this requirement turns on surfacing the invisible actions and interactions of connected devices and the social arrangements in which they are embedded. The IoT Databox model is proposed as an in principle means of enabling accountability and providing individuals with the mechanisms needed to build trust into the IoT.
Malekzadeh M, Clegg RG, Haddadi H, 2018, Replacement AutoEncoder: A Privacy-Preserving Algorithm for Sensory Data Analysis, 2018 IEEE/ACM Third International Conference on Internet-of-Things Design and Implementation (IoTDI)
Malekzadeh M, Clegg RG, Cavallaro A, et al., Protecting sensory data against sensitive inferences, Workshop on Privacy by Design in Distributed Systems 2018
There is growing concern about how personal data are used when users grant applications direct access to the sensors in their mobile devices. For example, time-series data generated by motion sensors reflect directly users' activities and indirectly their personalities. It is therefore important to design privacy-preserving data analysis methods that can run on mobile devices. In this paper, we propose a feature learning architecture that can be deployed in distributed environments to provide flexible and negotiable privacy-preserving data transmission. It is flexible because the internal architecture of each component can be independently changed according to users' or service providers' needs. It is negotiable because expected privacy and utility can be negotiated based on the requirements of the data subject and the underlying application. For the specific use-case of activity recognition, we conducted experiments on two real-world datasets of smartphone motion sensors, one of which was collected by the authors and is made publicly available by this paper for the first time. Results indicate the proposed framework establishes a good trade-off between the application's utility and data subjects' privacy. We show that it maintains the usefulness of the transformed data for activity recognition (with an average loss of around three percentage points) while almost eliminating the possibility of gender classification (from more than 90% to around 50%, the random-guess target). These results also have implications for moving from the current binary setting of granting permission to mobile apps or not, toward a situation where users can grant each application permission over a limited range of inferences according to the provided services.
Shamsabadi AS, Haddadi H, Cavallaro A, 2018, Distributed One-Class Learning, Publisher: IEEE
Katevas K, Tokarchuk L, Haddadi H, et al., 2017, Detecting Group Formations using iBeacon Technology, 15th ACM Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), Publisher: ASSOC COMPUTING MACHINERY, Pages: 190-190
Hansel K, Haddadi H, Alomainy A, 2017, AWSense - A Framework for Collecting Sensing Data from the Apple Watch, 15th ACM Annual International Conference on Mobile Systems, Applications, and Services (MobiSys), Publisher: ASSOC COMPUTING MACHINERY, Pages: 188-188
Perera C, Wakenshaw SYL, Baarslag T, et al., 2017, Valorising the IoT Databox: creating value for everyone, TRANSACTIONS ON EMERGING TELECOMMUNICATIONS TECHNOLOGIES, Vol: 28, ISSN: 2161-3915
Crabtree A, Lodge T, Colley J, et al., 2016, Enabling the new economic actor: data protection, the digital economy, and the Databox, Personal and Ubiquitous Computing, Vol: 20, Pages: 947-957, ISSN: 0949-2054
This paper offers a sociological perspective on data protection regulation and its relevance to design. From this perspective, proposed regulation in Europe and the USA seeks to create a new economic actor—the consumer as personal data trader—through new legal frameworks that shift the locus of agency and control in data processing towards the individual consumer or “data subject”. The sociological perspective on proposed data regulation recognises the reflexive relationship between law and the social order, and the commensurate needs to balance the demand for compliance with the design of computational tools that enable this new economic actor. We present the Databox model as a means of providing data protection and allowing the individual to exploit personal data to become an active player in the emerging data economy.
Naderi PT, Malazi HT, Ghassemian M, et al., 2016, Quality of Claim Metrics in Social Sensing Systems: A case study on IranDeal, 6th International Conference on Computer and Knowledge Engineering (ICCKE), Publisher: IEEE, Pages: 129-135
Tyson G, Perta VC, Haddadi H, et al., 2016, A First Look at User Activity on Tinder, 8th IEEE/ACM International Conference on Advances in Social Networks Analysis and Mining (ASONAM), Publisher: IEEE, Pages: 461-466
Katevas K, Haddadi H, Tokarchuk L, 2016, SensingKit: Evaluating the Sensor Power Consumption in iOS devices, 12th International Conference on Intelligent Environments (IE), Publisher: IEEE, Pages: 222-225, ISSN: 2469-8792
Rich J, Haddadi H, Hospedales TM, 2016, Towards Bottom-Up Analysis of Social Food, 6th International Conference on Digital Health (DH), Publisher: ASSOC COMPUTING MACHINERY, Pages: 111-120
Amar Y, Haddadi H, Mortier R, 2016, Privacy-Aware Infrastructure for Managing Personal Data Personal Data Arbitering within the Databox Framework, ACM Conference on Special Interest Group on Data Communication (SIGCOMM), Publisher: ASSOC COMPUTING MACHINERY, Pages: 571-572
Fard MA, Haddadi H, Targhi AT, 2016, Fruits and Vegetables Calorie Counter Using Convolutional Neural Networks, 6th International Conference on Digital Health (DH), Publisher: ASSOC COMPUTING MACHINERY, Pages: 121-122
Katevas K, Haddadi H, Tokarchuk L, et al., 2016, Detecting Group Formations using iBeacon Technology, ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp) / 20th ACM International Symposium on Wearable Computers (ISWC), Publisher: ASSOC COMPUTING MACHINERY, Pages: 742-752
Cunha TO, Weber I, Haddadi H, et al., 2016, The Effect of Social Feedback in a Reddit Weight Loss Community, 6th International Conference on Digital Health (DH), Publisher: ASSOC COMPUTING MACHINERY, Pages: 99-103
Hansel K, Alomainy A, Haddadi H, 2016, Large Scale Mood and Stress Self-Assessments on a Smartwatch, ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp) / 20th ACM International Symposium on Wearable Computers (ISWC), Publisher: ASSOC COMPUTING MACHINERY, Pages: 1180-1184
Haddadi H, Perta V, 2015, A Glance through the VPN Looking Glass: IPv6 Leakage and DNS Hijacking in Commercial VPN clients, The 15th Privacy Enhancing Technologies Symposium (PETS 2015), Publisher: De Gruyter, Pages: 77-91, ISSN: 2299-0984
Commercial Virtual Private Network (VPN) services have become a popular and convenient technology for users seeking privacy and anonymity. They have been applied to a wide range of use cases, with commercial providers often making bold claims regarding their ability to fulfil each of these needs, e.g., censorship circumvention, anonymity and protection from monitoring and tracking. However, as of yet, the claims made by these providers have not received sufficiently detailed scrutiny. This paper thus investigates the claims of privacy and anonymity in commercial VPN services. We analyse 14 of the most popular ones, inspecting their internals and their infrastructures. Despite being a known issue, our experimental study reveals that the majority of VPN services suffer from IPv6 traffic leakage. The work is extended by developing more sophisticated DNS hijacking attacks that allow all traffic to be transparently captured. We conclude by discussing a range of best practices and countermeasures that can address these vulnerabilities.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.