

Publications

  • Conference paper
    Zhao Y, Barnaghi P, Haddadi H, 2022,

    Multimodal federated learning on IoT data

    , 2022 IEEE/ACM Seventh International Conference on Internet-of-Things Design and Implementation (IoTDI), Publisher: IEEE

    Federated learning is proposed as an alternative to centralized machine learning since its client-server structure provides better privacy protection and scalability in real-world applications. In many applications, such as smart homes with Internet-of-Things (IoT) devices, local data on clients are generated from different modalities such as sensory, visual, and audio data. Existing federated learning systems only work on local data from a single modality, which limits the scalability of the systems. In this paper, we propose a multimodal and semi-supervised federated learning framework that trains autoencoders to extract shared or correlated representations from different local data modalities on clients. In addition, we propose a multimodal FedAvg algorithm to aggregate local autoencoders trained on different data modalities. We use the learned global autoencoder for a downstream classification task with the help of auxiliary labelled data on the server. We empirically evaluate our framework on different modalities including sensory data, depth camera videos, and RGB camera videos. Our experimental results demonstrate that introducing data from multiple modalities into federated learning can improve its classification performance. In addition, we can use labelled data from only one modality for supervised learning on the server and apply the learned model to testing data from other modalities to achieve decent F1 scores (e.g., with the best performance being higher than 60%), especially when combining contributions from both unimodal clients and multimodal clients.
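The multimodal FedAvg aggregation proposed in this paper is not detailed in the abstract. As a point of reference, the standard FedAvg rule it extends (a sample-size-weighted average of client model parameters) can be sketched as follows; the function name, clients, and data are purely illustrative:

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Standard FedAvg: average each parameter array across clients,
    weighting each client by its share of the total training samples."""
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]

# Hypothetical illustration: two clients, each holding two parameter arrays
# (e.g. an encoder weight matrix and a bias vector of a local autoencoder).
w_a = [np.ones((2, 2)), np.zeros(3)]
w_b = [np.full((2, 2), 3.0), np.ones(3)]
avg = fedavg([w_a, w_b], client_sizes=[10, 30])
# Client B holds 75% of the data, so the average is pulled toward its weights.
```

The multimodal variant described above would additionally group clients by data modality before (or while) averaging, but that grouping logic is specific to the paper and is not reproduced here.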

  • Journal article
    Wairagkar M, Lima MR, Bazo D, Craig R, Weissbart H, Etoundi AC, Reichenbach T, Iyenger P, Vaswani S, James C, Barnaghi P, Melhuish C, Vaidyanathan R et al., 2022,

    Emotive response to a hybrid-face robot and translation to consumer social robots

    , IEEE Internet of Things Journal, Vol: 9, Pages: 3174-3188, ISSN: 2327-4662

    We present the conceptual formulation, design, fabrication, control and commercial translation of an IoT-enabled social robot, as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise the recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) responses to perceived emotional information as well as through qualitative interviews. Results demonstrate that core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as N170 and Vertex Positive Potentials in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko (‘My Companion’), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those elicited by the hybrid-face robot, validating the design modifications implemented for large-scale distribution. Finally, interviews show expression recognition rates above 90% for our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.

  • Journal article
    Natarajan N, Vaitheswaran S, Raposo de Lima M, Wairagkar M, Vaidyanathan R et al., 2022,

    Acceptability of social robots and adaptation of hybrid-face robot for dementia care in India: a qualitative study

    , American Journal of Geriatric Psychiatry, Vol: 30, Pages: 240-245, ISSN: 1064-7481

    Objectives: This study aims to understand the acceptability of social robots and the adaptation of the Hybrid-Face Robot for dementia care in India. Methods: We conducted a focus group discussion and in-depth interviews with persons with dementia (PwD), their caregivers, professionals in the field of dementia, and technical experts in robotics to collect qualitative data. Results: This study explored the following themes: acceptability of robots in dementia care in India, adaptation of the Hybrid-Face Robot, and the future of robots in dementia care. Caregivers and PwD were open to the idea of social robot use in dementia care; caregivers perceived it to help with the challenges of caregiving and positively viewed a future with robots. Discussion: This study is the first of its kind to explore the use of social robots in dementia care in India by highlighting user needs and requirements that determine acceptability and guide adaptation.

  • Journal article
    Lima MR, Wairagkar M, Gupta M, Baena FRY, Barnaghi P, Sharp DJ, Vaidyanathan R et al., 2021,

    Conversational affective social robots for ageing and dementia support

    , IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920

    Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.

  • Journal article
    Ahmadi N, Constandinou T, Bouganis C, 2021,

    Inferring entire spiking activity from local field potentials

    , Scientific Reports, Vol: 11, Pages: 1-13, ISSN: 2045-2322

    Extracellular recordings are typically analysed by separating them into two distinct signals: local field potentials (LFPs) and spikes. Previous studies have shown that spikes, in the form of single-unit activity (SUA) or multiunit activity (MUA), can be inferred solely from LFPs with moderately good accuracy. SUA and MUA are typically extracted via threshold-based techniques, which may not be reliable when the recordings exhibit a low signal-to-noise ratio (SNR). Another type of spiking activity, referred to as entire spiking activity (ESA), can be extracted by a threshold-less, fast, and automated technique and has led to better performance in several tasks. However, its relationship with the LFPs has not been investigated. In this study, we aim to address this issue by inferring ESA from LFPs intracortically recorded from the motor cortex area of three monkeys performing different tasks. Results from long-term recording sessions and across subjects revealed that ESA can be inferred from LFPs with good accuracy. On average, the inference performance of ESA was consistently and significantly higher than those of SUA and MUA. In addition, the local motor potential (LMP) was found to be the most predictive feature. The overall results indicate that LFPs contain substantial information about spiking activity, particularly ESA. This could be useful for understanding the LFP-spike relationship and for the development of LFP-based BMIs.
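The abstract above reports that ESA can be inferred from LFPs but does not specify the decoder used. A minimal, purely illustrative sketch of this kind of inference, fitting a linear readout from synthetic LFP features (stand-ins for quantities like the LMP) to a synthetic ESA envelope, might look like this; all data and names are fabricated for the example:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: per-timestep LFP features (e.g. band powers, LMP)
# and an ESA envelope generated as a noisy linear mixture of them.
n_samples, n_features = 500, 4
lfp = rng.normal(size=(n_samples, n_features))
true_w = np.array([0.8, -0.3, 0.5, 0.1])
esa = lfp @ true_w + 0.05 * rng.normal(size=n_samples)

# Ordinary least-squares fit: infer ESA as a linear readout of LFP features.
w, *_ = np.linalg.lstsq(lfp, esa, rcond=None)
pred = lfp @ w

# Correlation between inferred and "true" ESA quantifies inference accuracy.
corr = np.corrcoef(pred, esa)[0, 1]
```

Real decoders for this problem would work on spectro-temporal LFP features and typically use cross-validated, regularised, or nonlinear models; the linear fit here only illustrates the structure of the inference task.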

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.



  • Finalist: Best Paper, IEEE Transactions on Mechatronics; 1 of 5 finalists for Best Paper in Journal (awarded June 2021)


  • Winner: UK Institute of Mechanical Engineers (IMECHE) Healthcare Technologies Early Career Award (awarded June 2021): Awarded to Maria Lima (UKDRI CR&T PhD candidate)

  • Winner: Sony Start-up Acceleration Program (awarded May 2021): Spinout company Serg Tech awarded (1 of 4 companies in all of Europe) a place in Sony corporation start-up boot camp

  • “An Extended Complementary Filter for Full-Body MARG Orientation Estimation” (CR&T authors: S Wilson, R Vaidyanathan)