11 results found
Blount T, Zhao Y, Yedro F, et al., 2020, TBFY/storytelling: version 0.1
Zhao Y, Haddadi H, Skillman S, et al., 2020, Privacy-preserving Activity and Health Monitoring on Databox, 3rd ACM International Workshop on Edge Systems, Analytics and Networking (EdgeSys), Publisher: ASSOC COMPUTING MACHINERY, Pages: 49-54
Zhao Y, Wagner I, 2020, Using Metrics Suites to Improve the Measurement of Privacy in Graphs, IEEE Transactions on Dependable and Secure Computing, Pages: 1-1, ISSN: 1545-5971
Zhao Y, Wagner I, 2019, On the Strength of Privacy Metrics for Vehicular Communication, IEEE TRANSACTIONS ON MOBILE COMPUTING, Vol: 18, Pages: 390-403, ISSN: 1536-1233
Zhao Y, Wagner I, 2018, POSTER: Evaluating Privacy Metrics for Graph Anonymization and De-anonymization, ASIA CCS '18: ACM Asia Conference on Computer and Communications Security, Publisher: ACM
Zhao Y, Ye J, Henderson T, 2016, The Effect of Privacy Concerns on Privacy Recommenders, IUI'16: 21st International Conference on Intelligent User Interfaces, Publisher: ACM
Zhao Y, 2016, Usable Privacy in Location-Sharing Services, IUI'16: 21st International Conference on Intelligent User Interfaces, Publisher: ACM
Zhao Y, Ye J, Henderson T, 2016, A robust reputation-based location-privacy recommender system using opportunistic networks, The 8th EAI International Conference on Mobile Computing, Applications and Services, Publisher: ACM
Zhao Y, Ye J, Henderson T, 2014, Privacy-aware Location Privacy Preference Recommendations, 11th International Conference on Mobile and Ubiquitous Systems: Computing, Networking and Services, Publisher: ICST
Zhao Y, Liu H, Li H, et al., Semi-supervised Federated Learning for Activity Recognition
Training deep learning models on in-home IoT sensory data is commonly used to recognise human activities. Recently, federated learning systems that use edge devices as clients to support local human activity recognition have emerged as a new paradigm to combine local (individual-level) and global (group-level) models. This approach provides better scalability and generalisability and also offers better privacy compared with the traditional centralised analysis and learning models. The assumption behind federated learning, however, relies on supervised learning on clients. This requires a large volume of labelled data, which is difficult to collect in uncontrolled IoT environments such as remote in-home monitoring. In this paper, we propose an activity recognition system that uses semi-supervised federated learning, wherein clients conduct unsupervised learning on autoencoders with unlabelled local data to learn general representations, and a cloud server conducts supervised learning on an activity classifier with labelled data. Our experimental results show that using a long short-term memory autoencoder and a Softmax classifier, the accuracy of our proposed system is higher than that of both centralised systems and semi-supervised federated learning using data augmentation. The accuracy is also comparable to that of supervised federated learning systems. Meanwhile, we demonstrate that our system can reduce the number of needed labels and the size of local models, and has faster local activity recognition speed than supervised federated learning does.
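A minimal sketch of the training split this abstract describes, using linear stand-ins for the paper's LSTM autoencoder and Softmax classifier and small synthetic data (all shapes, learning rates, client counts, and the labelling rule below are illustrative assumptions, not the paper's setup): clients fit autoencoders on unlabelled local data, the server averages them with FedAvg, and the server alone trains a classifier on labelled data using the global encoder.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_clients = 8, 3, 4  # input dim, code dim, clients (illustrative)

def client_update(params, X, lr=0.01, steps=20):
    """Unsupervised local training: fit a linear autoencoder
    (encoder W: d x k, decoder V: k x d) to unlabelled client data X."""
    W, V = params["W"].copy(), params["V"].copy()
    for _ in range(steps):
        Z = X @ W              # encode
        E = Z @ V - X          # reconstruction error
        W -= lr * X.T @ E @ V.T / len(X)   # gradient of mean squared error
        V -= lr * Z.T @ E / len(X)
    return {"W": W, "V": V}

def fedavg(client_params, counts):
    """Classic FedAvg: parameter average weighted by local sample counts."""
    total = sum(counts)
    return {key: sum(p[key] * (n / total) for p, n in zip(client_params, counts))
            for key in client_params[0]}

def recon_loss(params, X):
    E = X @ params["W"] @ params["V"] - X
    return float(np.mean(E ** 2))

# Unlabelled data stays on the clients; the server never sees it.
clients = [rng.normal(size=(50, d)) for _ in range(n_clients)]
global_params = {"W": rng.normal(scale=0.1, size=(d, k)),
                 "V": rng.normal(scale=0.1, size=(k, d))}

loss_before = np.mean([recon_loss(global_params, X) for X in clients])
for _ in range(10):  # federated rounds
    updates = [client_update(global_params, X) for X in clients]
    global_params = fedavg(updates, [len(X) for X in clients])
loss_after = np.mean([recon_loss(global_params, X) for X in clients])

# Server-side supervised step: a binary logistic stand-in trained on
# global-encoder features of a small labelled set (hypothetical labels).
X_lab = rng.normal(size=(40, d))
y_lab = (X_lab[:, 0] > 0).astype(float)
Z_lab = X_lab @ global_params["W"]            # encode with global encoder
w = np.zeros(k)
for _ in range(200):
    p = 1 / (1 + np.exp(-(Z_lab @ w)))
    w -= 0.1 * Z_lab.T @ (p - y_lab) / len(y_lab)
acc = float(np.mean((Z_lab @ w > 0) == (y_lab == 1)))
```

Note how only the autoencoder parameters cross the network, which is the source of the privacy and label-efficiency claims: clients never share raw data and never need labels.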
Zhao Y, Barnaghi P, Haddadi H, Multimodal Federated Learning
Federated learning is proposed as an alternative to centralized machine learning since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with IoT devices, local data on clients are generated from different modalities such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its accuracy. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities to achieve decent accuracy (e.g., approximately 70% as the best performance), especially when combining contributions from both unimodal clients and multimodal clients.
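One plausible reading of the per-modality aggregation step, sketched under assumptions (the dictionary-of-encoders layout, modality names, weight shapes, and weighting by sample count are illustrative; the paper's multimodal FedAvg may treat unimodal and multimodal clients differently): each modality's encoder is averaged only over the clients that actually hold that modality.

```python
import numpy as np

def multimodal_fedavg(client_params, counts):
    """Average each modality's encoder weights over the clients that
    hold that modality, weighted by their local sample counts."""
    modalities = set().union(*(p.keys() for p in client_params))
    global_params = {}
    for m in modalities:
        holders = [(p[m], n) for p, n in zip(client_params, counts) if m in p]
        total = sum(n for _, n in holders)
        global_params[m] = sum(w * (n / total) for w, n in holders)
    return global_params

rng = np.random.default_rng(1)
# Two unimodal clients and one multimodal client (hypothetical shapes):
clients = [
    {"sensor": rng.normal(size=(4, 2))},
    {"rgb": rng.normal(size=(6, 2))},
    {"sensor": rng.normal(size=(4, 2)), "rgb": rng.normal(size=(6, 2))},
]
counts = [100, 80, 120]
g = multimodal_fedavg(clients, counts)  # one global encoder per modality
```

Under this layout a multimodal client contributes to every modality it holds, which matches the abstract's observation that combining unimodal and multimodal clients helps.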
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.