Publications
- Journal article: Bethlehem RAI, Seidlitz J, White SR, et al., 2022, Publisher Correction: Brain charts for the human lifespan, Nature, Vol: 610, Pages: E6-E6, ISSN: 0028-0836
- Conference paper: Soreq E, Kolanko M, Guruswamy Ravindran KK, et al., 2022, Longitudinal assessment of sleep/wake behaviour in dementia patients living at home, Association of British Neurologists (ABN) Annual Meeting, Publisher: BMJ Publishing Group, ISSN: 0022-3050
- Conference paper: Zhao Y, Barnaghi P, Haddadi H, 2022, Multimodal federated learning on IoT data, 2022 IEEE/ACM Seventh International Conference on Internet-of-Things Design and Implementation (IoTDI), Publisher: IEEE.
  Abstract: Federated learning is proposed as an alternative to centralized machine learning, since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with Internet-of-Things (IoT) devices, local data on clients are generated from different modalities, such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits their scalability. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from the different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities, including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its classification performance. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities, achieving decent F1 scores (the best exceeding 60%), especially when combining contributions from both unimodal and multimodal clients.
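The per-modality aggregation described in this abstract can be illustrated with a minimal sketch. The function name, weighting scheme (sample-count weighting, as in standard FedAvg), and data layout below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch of multimodal FedAvg: the server averages autoencoder
# parameters per modality, weighting each client by its local sample count,
# so unimodal and multimodal clients can both contribute.
import numpy as np

def multimodal_fedavg(client_updates):
    """client_updates: list of (n_samples, {modality: {param_name: ndarray}})."""
    aggregated = {}
    modalities = {m for _, weights in client_updates for m in weights}
    for modality in modalities:
        # only clients that hold this modality participate in its average
        relevant = [(n, w[modality]) for n, w in client_updates if modality in w]
        total = sum(n for n, _ in relevant)
        aggregated[modality] = {
            name: sum(n * params[name] for n, params in relevant) / total
            for name in relevant[0][1]
        }
    return aggregated

# Toy usage: one unimodal (sensor) client and one multimodal (sensor + RGB) client.
updates = [
    (100, {"sensor": {"w": np.ones(2)}}),
    (300, {"sensor": {"w": np.zeros(2)}, "rgb": {"w": np.full(2, 2.0)}}),
]
agg = multimodal_fedavg(updates)
# sensor weights: (100*1 + 300*0) / 400 = 0.25; rgb weights come from one client only
```

The key design point this sketch captures is that modalities held by only some clients are still aggregated, using whichever clients have them.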
- Journal article: Wu Y, Pan Y, Barnaghi P, et al., 2022, Editorial: Big data technologies and applications, Wireless Networks, Vol: 28, Pages: 1163-1167, ISSN: 1022-0038
- Journal article: Wairagkar M, Lima MR, Bazo D, et al., 2022, Emotive response to a hybrid-face robot and translation to consumer social robots, IEEE Internet of Things Journal, Vol: 9, Pages: 3174-3188, ISSN: 2327-4662.
  Abstract: We present the conceptual formulation, design, fabrication, control, and commercial translation of an IoT-enabled social robot, as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise the recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) responses to perceived emotional information, as well as through qualitative interviews. Results demonstrate that core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as N170 and Vertex Positive Potentials in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko ('My Companion'), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those elicited by the hybrid-face robot, validating the design modifications implemented for large-scale distribution. Finally, interviews show above 90% expression recognition rates for our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
Awards
- Finalist: Best Paper, IEEE Transactions on Mechatronics (awarded June 2021); 1 of 5 finalists for Best Paper in the journal
- Winner: Institution of Mechanical Engineers (IMechE) Healthcare Technologies Early Career Award (awarded June 2021): awarded to Maria Lima (UKDRI CR&T PhD candidate)
- Winner: Sony Start-up Acceleration Program (awarded May 2021): spinout company Serg Tech awarded a place (1 of 4 companies in all of Europe) in the Sony Corporation start-up boot camp
- "An Extended Complementary Filter for Full-Body MARG Orientation Estimation" (CR&T authors: S Wilson, R Vaidyanathan)