Imperial College London

Professor Ravi Vaidyanathan

Faculty of Engineering, Department of Mechanical Engineering

Professor in Biomechatronics

Contact

+44 (0)20 7594 7020
r.vaidyanathan

Location

717 City and Guilds Building, South Kensington Campus

Publications

Citation

BibTeX format

@article{Raposo:2021:10.3389/frobt.2021.618866,
author = {Raposo, de Lima M and Wairagkar, M and Natarajan, N and Vaitheswaran, S and Vaidyanathan, R},
doi = {10.3389/frobt.2021.618866},
journal = {Frontiers in Robotics and AI},
title = {Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement},
url = {http://dx.doi.org/10.3389/frobt.2021.618866},
volume = {8},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real-time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates overall enthusiastic and engaging reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to sup
AU - Raposo,de Lima M
AU - Wairagkar,M
AU - Natarajan,N
AU - Vaitheswaran,S
AU - Vaidyanathan,R
DO - 10.3389/frobt.2021.618866
PY - 2021///
SN - 2296-9144
TI - Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement
T2 - Frontiers in Robotics and AI
UR - http://dx.doi.org/10.3389/frobt.2021.618866
UR - http://hdl.handle.net/10044/1/87592
VL - 8
ER -
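
The abstract above mentions digital facial expressions driven by a mathematical affect space mapping, with six recognizable robot emotions. As a rough, hypothetical sketch of that idea only (not the authors' implementation), the Python snippet below picks the nearest prototype emotion for a point in a two-dimensional valence-arousal space; the emotion labels come from the abstract, while the coordinate values are invented for illustration.

import math

# Hypothetical prototype coordinates in a 2D valence-arousal affect space.
# The six emotion labels come from the abstract above; the numeric
# placements are illustrative assumptions, not values from the paper.
PROTOTYPES = {
    "happy":     (0.8,  0.5),
    "surprised": (0.3,  0.9),
    "angry":     (-0.7, 0.7),
    "stern":     (-0.5, 0.2),
    "sad":       (-0.6, -0.5),
    "tired":     (-0.1, -0.8),
}

def nearest_emotion(valence, arousal):
    """Return the prototype emotion closest to the given affect-space point."""
    return min(PROTOTYPES, key=lambda e: math.dist((valence, arousal), PROTOTYPES[e]))

print(nearest_emotion(0.2, 0.8))  # prints "surprised" with these prototypes

A nearest-prototype rule such as this is one simple way to discretize a continuous affect space into the discrete expressions a robot face can display.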