Imperial College London

Professor Ravi Vaidyanathan

Faculty of Engineering, Department of Mechanical Engineering

Professor in Biomechatronics
 
 
 

Contact

 

+44 (0)20 7594 7020
r.vaidyanathan

 
 

Location

 

717, City and Guilds Building, South Kensington Campus


 

Publications


170 results found

Mashrur FR, Rahman KM, Miya MTI, Vaidyanathan R, Anwar SF, Sarker F, Mamun KA et al., 2022, An intelligent neuromarketing system for predicting consumers' future choice from electroencephalography signals, Physiology & Behavior, Vol: 253, ISSN: 0031-9384

Journal article

Jing S, Huang H-Y, Vaidyanathan R, Farina D et al., 2022, Accurate and Robust Locomotion Mode Recognition Using High-Density EMG Recordings from a Single Muscle Group, Annu Int Conf IEEE Eng Med Biol Soc, Vol: 2022, Pages: 686-689

Existing methods for human locomotion mode recognition often rely on using multiple bipolar electrode sensors on multiple muscle groups to accurately identify underlying motor activities. To avoid this complex setup and facilitate the translation of this technology, we introduce a single grid of high-density surface electromyography (HDsEMG) electrodes mounted on a single location (above the rectus femoris) to classify six locomotion modes in human walking. By employing a neural network, the trained model achieved average recognition accuracy of 97.7% with 160ms latency, significantly better than the model trained with one bipolar electrode pair placed on the same muscle (71.4% accuracy). To further exploit the spatial and temporal information of HDsEMG, we applied data augmentation to generate artificial data from simulated displaced electrodes, aiming to counteract the influence of electrode shifts. By employing a convolutional neural network with the enhanced dataset, the updated model was not strongly affected by electrode misplacement (93.9% accuracy) while models trained by bipolar electrode data were significantly disrupted by electrode shifts (29.4% accuracy). Findings suggest HDsEMG could be a valuable resource for mapping gait with fewer sensor locations and greater robustness. Results offer future promise for real-time control of assistive technology such as exoskeletons.
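The electrode-shift data augmentation described in this abstract can be sketched in a few lines. The code below is an illustrative reconstruction, not the authors' implementation: the 8×8 grid geometry, the wrap-around shifting, and the NumPy array layout are all assumptions.

```python
import numpy as np

def shift_augment(grid_emg, max_shift=1):
    """Generate artificial HDsEMG windows from simulated electrode shifts.

    grid_emg: array of shape (rows, cols, samples) holding one window of
    high-density EMG. Each augmented copy rolls the grid by up to
    `max_shift` electrodes in each spatial direction (edges wrap here
    for simplicity; a physical shift would instead drop edge channels).
    """
    augmented = []
    for dr in range(-max_shift, max_shift + 1):
        for dc in range(-max_shift, max_shift + 1):
            if dr == 0 and dc == 0:
                continue  # skip the unshifted original
            augmented.append(np.roll(grid_emg, (dr, dc), axis=(0, 1)))
    return augmented

# Example: an assumed 8x8 electrode grid with a 200-sample window
window = np.random.randn(8, 8, 200)
aug = shift_augment(window, max_shift=1)
print(len(aug))  # 8 shifted copies for a one-electrode maximum shift
```

Training a classifier on the original windows plus such displaced copies is what allows the model to tolerate electrode misplacement at test time.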

Journal article

Hopkins M, Turner S, McGregor A, 2022, Mapping lower-limb prosthesis load distributions using a low-cost pressure measurement system, Frontiers in Medical Technology, Vol: 4, Pages: 1-9, ISSN: 2673-3129

Background: In the UK 55,000 people live with a major limb amputation. The prosthetic socket is problematic for users in relation to comfort and acceptance of the prosthesis, and is associated with the development of cysts and sores.
Objectives: We have developed a prototype low-cost system combining low-profile pressure-sensitive sensors with an inertial measurement unit to assess loading distribution within prosthetic sockets. The objective of this study was to determine the ability of this prototype to assess in-socket loading profiles of a person with an amputation during walking, with a view to understanding socket design and fit.
Methods: The device was evaluated on four transtibial participants of various ages and activity levels. The pressure sensors were embedded in the subjects' sockets and an inertial measurement unit was attached to the posterior side of the socket. Measurements were taken during level walking in a gait lab.
Results: The sensors were able to dynamically collect data, informing loading profiles within the socket which were in line with expected distributions for patellar-tendon-bearing and total-surface-bearing sockets. The patellar-tendon-bearing subject displayed loading predominantly at the patellar tendon, tibial and lateral gastrocnemius regions. The total-surface-bearing subjects indicated even load distribution throughout the socket, except in one participant who presented with a large socket-foot misalignment.
Conclusions: The sensors provided objective data showing the pressure distributions inside the prosthetic socket. The sensors were able to measure the pressure in the socket with sufficient accuracy to distinguish pressure regions that matched expected loading patterns. The information may be useful to aid fitting of complex residual limbs, and for those with reduced sensation in their residual limb, alongside subjective feedback from prosthesis users.

Journal article

Nazneen T, Islam IB, Sajal MSR, Jamal W, Amin MA, Vaidyanathan R, Chau T, Mamun KA et al., 2022, Recent Trends in Non-invasive Neural Recording Based Brain-to-Brain Synchrony Analysis on Multidisciplinary Human Interactions for Understanding Brain Dynamics: A Systematic Review, Frontiers in Computational Neuroscience, Vol: 16

Journal article

Mashrur FR, Rahman KM, Miya MTI, Vaidyanathan R, Anwar SF, Sarker F, Mamun KA et al., 2022, BCI-Based Consumers' Choice Prediction From EEG Signals: An Intelligent Neuromarketing Framework, Frontiers in Human Neuroscience, Vol: 16, ISSN: 1662-5161

Journal article

Wairagkar M, Lima MR, Bazo D, Craig R, Weissbart H, Etoundi AC, Reichenbach T, Iyenger P, Vaswani S, James C, Barnaghi P, Melhuish C, Vaidyanathan R et al., 2022, Emotive response to a hybrid-face robot and translation to consumer social robots, IEEE Internet of Things Journal, Vol: 9, Pages: 3174-3188, ISSN: 2327-4662

We present the conceptual formulation, design, fabrication, control and commercial translation of an IoT-enabled social robot, as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) responses to perceived emotional information as well as through qualitative interviews. Results demonstrate core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as the N170 and Vertex Positive Potential in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko ('My Companion'), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those of the hybrid-face robot, validating design modifications implemented for large-scale distribution. Finally, interviews show above 90% expression recognition rates for our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.

Journal article

Mancero Castillo CS, Vaidyanathan R, Atashzar SF, 2022, Synergistic upper-limb functional muscle connectivity using acoustic mechanomyography, IEEE Transactions on Biomedical Engineering, Vol: 69, Pages: 2569-2580, ISSN: 0018-9294

Functional connectivity is a critical concept in describing synergistic muscle synchronization for the execution of complex motor tasks. Muscle synchronization is typically derived from the decomposition of intermuscular coherence (IMC) at different frequency bands through electromyography (EMG) signal analysis with limited out-of-clinic applications. In this investigation, we introduce muscle network analysis to assess the coordination and functional connectivity of muscles based on mechanomyography (MMG), focused on a targeted group of muscles that are typically active in the conduction of activities of daily living using the upper limb. In this regard, functional muscle networks are evaluated in this paper for ten able-bodied participants and three amputees. MMG activity was acquired from a custom-made wearable MMG armband placed over four superficial muscles around the forearm (i.e., flexor carpi radialis (FCR), brachioradialis (BR), extensor digitorum communis (EDC), and flexor carpi ulnaris (FCU)) while participants performed four different hand gestures. The results of connectivity analysis at multiple frequency bands showed significant topographical differences across gestures for low (< 5Hz) and high (> 12 Hz) frequencies and observable differences between able-bodied and amputee subjects. These findings show evidence that MMG can be used for the analysis of functional muscle connectivity and mapping of synergistic synchronization of upper-limb muscles in complex upper-limb tasks. The new physiological modality further provides key insights into the neural circuitry of motor coordination and offers the concomitant outcomes of demonstrating the feasibility of MMG to map muscle coherence from a neurophysiological perspective as well as providing the mechanistic basis for its translation into human-robot interfaces.
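Intermuscular coherence of the kind decomposed in this study can be estimated with a basic Welch-style spectral procedure. The sketch below is a generic illustration, not the paper's processing pipeline; the sampling rate, window length, and synthetic two-channel "MMG" signals are all assumptions.

```python
import numpy as np

np.random.seed(0)

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via a basic Welch estimator
    (Hann window, 50% overlap). The segment-count normalisation
    cancels in the final ratio, so plain sums suffice."""
    step = nperseg // 2
    win = np.hanning(nperseg)
    Pxx = Pyy = Pxy = 0
    for start in range(0, len(x) - nperseg + 1, step):
        X = np.fft.rfft(win * x[start:start + nperseg])
        Y = np.fft.rfft(win * y[start:start + nperseg])
        Pxx = Pxx + (X * np.conj(X)).real
        Pyy = Pyy + (Y * np.conj(Y)).real
        Pxy = Pxy + X * np.conj(Y)
    f = np.fft.rfftfreq(nperseg, 1 / fs)
    return f, np.abs(Pxy) ** 2 / (Pxx * Pyy)

# Two channels sharing a common low-frequency drive plus independent noise
fs = 1000
t = np.arange(0, 10, 1 / fs)
drive = np.sin(2 * np.pi * 3 * t)
ch1 = drive + 0.5 * np.random.randn(len(t))
ch2 = drive + 0.5 * np.random.randn(len(t))
f, coh = coherence(ch1, ch2, fs)
low = coh[(f > 1) & (f < 5)].mean()     # band containing the shared drive
high = coh[(f > 40) & (f < 60)].mean()  # noise-only band
print(low > high)  # shared low-frequency drive -> higher coherence there
```

Repeating this per channel pair and frequency band gives the adjacency weights from which a functional muscle network can be built.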

Journal article

Natarajan N, Vaitheswaran S, Raposo de Lima M, Wairagkar M, Vaidyanathan R et al., 2022, Acceptability of social robots and adaptation of hybrid-face robot for dementia care in India: a qualitative study, American Journal of Geriatric Psychiatry, Vol: 30, Pages: 240-245, ISSN: 1064-7481

Objectives: This study aims to understand the acceptability of social robots and the adaptation of the Hybrid-Face Robot for dementia care in India.
Methods: We conducted a focus group discussion and in-depth interviews with persons with dementia (PwD), their caregivers, professionals in the field of dementia, and technical experts in robotics to collect qualitative data.
Results: This study explored the following themes: Acceptability of Robots in Dementia Care in India, Adaptation of the Hybrid-Face Robot, and the Future of Robots in Dementia Care. Caregivers and PwD were open to the idea of social robot use in dementia care; caregivers perceived it to help with the challenges of caregiving and positively viewed a future with robots.
Discussion: This study is the first of its kind to explore the use of social robots in dementia care in India by highlighting user needs and requirements that determine acceptability and guiding adaptation.

Journal article

Dev A, Roy N, Islam MK, Biswas C, Ahmed HU, Amin MA, Sarker F, Vaidyanathan R, Mamun KA et al., 2022, Exploration of EEG-Based Depression Biomarkers Identification Techniques and Their Applications: A Systematic Review, IEEE Access, Vol: 10, Pages: 16756-16781, ISSN: 2169-3536

Journal article

Amerini R, Gupta L, Steadman N, Annauth K, Burr C, Wilson S, Barnaghi P, Vaidyanathan R et al., 2021, Fusion models for generalized classification of multi-axial human movement: validation in sport performance, Sensors, Vol: 21, ISSN: 1424-8220

We introduce a set of input models for fusing information from ensembles of wearable sensors supporting human performance and telemedicine. Veracity is demonstrated in action classification related to sport, specifically strikes in boxing and taekwondo. Four input models, formulated to be compatible with a broad range of classifiers, are introduced and two diverse classifiers, dynamic time warping (DTW) and convolutional neural networks (CNNs) are implemented in conjunction with the input models. Seven classification models fusing information at the input-level, output-level, and a combination of both are formulated. Action classification for 18 boxing punches and 24 taekwondo kicks demonstrate our fusion classifiers outperform the best DTW and CNN uni-axial classifiers. Furthermore, although DTW is ostensibly an ideal choice for human movements experiencing non-linear variations, our results demonstrate deep learning fusion classifiers outperform DTW. This is a novel finding given that CNNs are normally designed for multi-dimensional data and do not specifically compensate for non-linear variations within signal classes. The generalized formulation enables subject-specific movement classification in a feature-blind fashion with trivial computational expense for trained CNNs. A commercial boxing system, ‘Corner’, has been produced for real-world mass-market use based on this investigation providing a basis for future telemedicine translation.
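Dynamic time warping, one of the two classifier families compared in this study, reduces to a short dynamic-programming routine. A textbook sketch on toy 1-D "strike" profiles follows; the sequences are invented for illustration and are not the paper's data.

```python
import math

def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences,
    computed with the classic O(len(a)*len(b)) dynamic program."""
    n, m = len(a), len(b)
    D = [[math.inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            D[i][j] = cost + min(D[i - 1][j], D[i][j - 1], D[i - 1][j - 1])
    return D[n][m]

# A time-warped copy of a template stays closer than a different action
template = [0, 1, 2, 3, 2, 1, 0]
warped   = [0, 0, 1, 2, 2, 3, 2, 1, 0]  # same strike, executed more slowly
other    = [3, 2, 1, 0, 1, 2, 3]
print(dtw_distance(template, warped) < dtw_distance(template, other))  # True
```

This non-linear alignment is exactly the property that makes DTW an ostensibly natural baseline for human movement, which the paper's fusion CNNs nevertheless outperform.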

Journal article

Wairagkar M, De Lima MR, Harrison M, Batey P, Daniels S, Barnaghi P, Sharp DJ, Vaidyanathan R et al., 2021, Conversational artificial intelligence and affective social robot for monitoring health and well-being of people with dementia, Alzheimer's & Dementia, Vol: 17 Suppl 11, Pages: e053276-e053276, ISSN: 1552-5260

BACKGROUND: Social robots are anthropomorphised platforms developed to interact with humans, using natural language, offering an accessible and intuitive interface suited to diverse cognitive abilities. Social robots can be used to support people with dementia (PwD) and carers in their homes managing medication, hydration, appointments, and evaluating mood, wellbeing, and potentially cognitive decline. Such robots have potential to reduce care burden and prolong independent living, yet translation into PwD use remains insignificant. METHOD: We have developed two social robots - a conversational robot and a digital social robot for mobile devices - capable of communicating through natural language (powered by Amazon Alexa) and facial expressions, which ask PwD daily questions about their health and wellbeing and also provide digital assistant functionality. We record data comprising PwD's responses to daily questions, and audio speech and text of conversations with Alexa, to automatically monitor their health and wellbeing using machine learning. We followed user-centric development processes by conducting focus groups with 13 carers, 2 PwD and 5 clinicians to iterate the design. We are testing the social robot with 3 PwD in their homes for ten weeks. RESULT: We received positive feedback on the social robot from focus group participants. Ease of use, low maintenance, accessibility, assistance with medication, and support with health and wellbeing were identified as the key opportunities for social robots. Based on responses to a daily questionnaire, our robots generate a report detailing PwD wellbeing that is automatically sent via email to family members or carers. This information is also stored systematically in a database that can help clinicians monitor their patients remotely. We use natural language processing to analyse conversations and identify topics of interest to PwD such that robot behaviour could be adapted. We process speech using signal processing and machine learning…

Journal article

Caulcrick C, Huo W, Franco E, Mohammed S, Hoult W, Vaidyanathan R et al., 2021, Model predictive control for human-centred lower limb robotic assistance, IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 980-991, ISSN: 2576-3202

Loss of mobility and/or balance resulting from neural trauma is a critical public health issue. Robotic exoskeletons hold great potential for rehabilitation and assisted movement. However, the synergy of robot operation with human effort remains a problem. In particular, optimal assist-as-needed (AAN) control remains unresolved given pathological variance among patients. We introduce a model predictive control (MPC) architecture for lower limb exoskeletons that achieves on-the-fly transitions between modes of assistance. The architecture implements a fuzzy logic algorithm (FLA) to map key modes of assistance based on human involvement. Three modes are utilised: passive, for human relaxed and robot dominant; active-assist, for human cooperation with the task; and safety, in the case of human resistance to the robot. Electromyography (EMG) signals are further employed to predict the human torque. EMG output is used by the MPC for trajectory following and by the FLA for decision making. Experimental validation using a 1-DOF knee exoskeleton demonstrates the controller tracking a sinusoidal trajectory with relaxed, assistive, and resistive operational modes. Results demonstrate rapid and appropriate transfers among the assistance modes, and satisfactory AAN performance in each case, offering a new level of human-robot synergy for mobility assist and rehabilitation.
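The mode-switching logic described above can be illustrated with a crisp (non-fuzzy) stand-in for the paper's fuzzy logic algorithm. The thresholds, torque values, and function name below are invented for illustration; the actual FLA uses fuzzy membership functions rather than hard cut-offs.

```python
def assistance_mode(tau_human, tau_ref,
                    effort_threshold=0.05, resist_threshold=0.2):
    """Select an assist-as-needed mode from the estimated human torque.

    Simplified, crisp stand-in for the paper's fuzzy logic algorithm:
      'passive'       -- negligible human effort, robot dominant
      'active-assist' -- human torque cooperates with the reference
      'safety'        -- human torque opposes the reference (resistance)
    Thresholds (in Nm) are illustrative, not taken from the paper.
    """
    if abs(tau_human) < effort_threshold:
        return "passive"
    if tau_human * tau_ref < 0 and abs(tau_human) > resist_threshold:
        return "safety"
    return "active-assist"

print(assistance_mode(0.0, 1.0))   # passive: user relaxed
print(assistance_mode(0.8, 1.0))   # active-assist: user cooperates
print(assistance_mode(-0.8, 1.0))  # safety: user resists the robot
```

In the paper this decision feeds the MPC layer, which then re-weights its cost function for the selected mode.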

Journal article

Huo W, Caulcrick C, Hoult W, Vaidyanathan R et al., 2021, Human joint torque modelling with MMG and EMG during lower limb human-exoskeleton interaction, IEEE Robotics and Automation Letters, Vol: 6, Pages: 7185-7192, ISSN: 2377-3766

Human-robot cooperation is vital for optimising powered assist of lower limb exoskeletons (LLEs). Robotic capacity to intelligently adapt to human force, however, demands a fusion of data from exoskeleton and user state for smooth human-robot synergy. Muscle activity, mapped through electromyography (EMG) or mechanomyography (MMG) is widely acknowledged as usable sensor input that precedes the onset of human joint torque. However, competing and complementary information between such physiological feedback is yet to be exploited, or even assessed, for predictive LLE control. We investigate complementary and competing benefits of EMG and MMG sensing modalities as a means of calculating human torque input for assist-as-needed (AAN) LLE control. Three biomechanically agnostic machine learning approaches, linear regression, polynomial regression, and neural networks, are implemented for joint torque prediction during human-exoskeleton interaction experiments. Results demonstrate MMG predicts human joint torque with slightly lower accuracy than EMG for isometric human-exoskeleton interaction. Performance is comparable for dynamic exercise. Neural network models achieve the best performance for both MMG and EMG (94.8 ± 0.7% with MMG and 97.6 ± 0.8% with EMG (Mean ± SD)) at the expense of training time and implementation complexity. This investigation represents the first MMG human joint torque models for LLEs and their first comparison with EMG. We provide our implementations for future investigations ( https://github.com/cic12/ieee_appx ).
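As a toy illustration of the polynomial-regression model class compared in this study, one can fit a degree-2 map from a muscle-activation envelope to joint torque. The synthetic data, coefficients, and variable names below are invented; the paper's actual features and torque measurements differ.

```python
import numpy as np

np.random.seed(0)
# Synthetic example: joint torque grows nonlinearly with a
# muscle-activation envelope (data are illustrative only)
activation = np.linspace(0, 1, 200)
torque = 30 * activation**2 + 5 * activation + np.random.normal(0, 0.5, 200)

# Degree-2 polynomial regression, one of the paper's three model classes
coeffs = np.polyfit(activation, torque, deg=2)
predicted = np.polyval(coeffs, activation)

# Goodness of fit (coefficient of determination, R^2)
ss_res = np.sum((torque - predicted) ** 2)
ss_tot = np.sum((torque - torque.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(r2 > 0.95)  # True: the quadratic model recovers the relationship
```

The paper's neural network models play the same role as `polyfit` here, trading this simplicity for higher accuracy at the cost of training time.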

Journal article

Lima MR, Wairagkar M, Gupta M, Baena FRY, Barnaghi P, Sharp DJ, Vaidyanathan R et al., 2021, Conversational affective social robots for ageing and dementia support, IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920

Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.

Journal article

Huo W, Moon H, Alouane MA, Bonnet V, Huang J, Amirat Y, Vaidyanathan R, Mohammed S et al., 2021, Impedance modulation control of a lower limb exoskeleton to assist sit-to-stand movements, IEEE Transactions on Robotics, Vol: 38, Pages: 1230-1249, ISSN: 1552-3098

As an important movement of the daily living activities, the sit-to-stand (STS) movement is usually a difficult task facing elderly and dependent people. In this article, a novel impedance modulation strategy of a lower limb exoskeleton is proposed to provide appropriate power and balance assistance during STS movements while preserving the wearer's control priority. The impedance modulation control strategy ensures adaptation of the mechanical impedance of the human-exoskeleton system towards a desired one requiring less wearer effort, while reinforcing the wearer's balance control ability during STS movements. A human joint torque observer is designed to estimate the joint torques developed by the wearer using joint position kinematics instead of electromyography (EMG) or force sensors; a time-varying desired impedance model is proposed according to the wearer's lower limb motion ability. A virtual environmental force is designed for the balance reinforcement control. Stability and robustness of the proposed method are theoretically analyzed. Simulations were implemented to illustrate the characteristics and performance of the proposed approach. Experiments with four healthy subjects were carried out to evaluate the effectiveness of the proposed method and show satisfactory results in terms of appropriate power assist and balance reinforcement.

Journal article

Wattanasiri P, Wilson S, Huo W, Lewis A, Kapatos C, Vaidyanathan R et al., 2021, Adaptive Mechanomyogram Hand Gesture Recognition in Online and Repeatable Environment, Pages: 2315-2321, ISSN: 2161-8070

We introduce a complete architecture for real-time hand gesture recognition for human-computer interface (HCI) and robotic control. The system addresses ease of use, calibration, and robustness issues which have inhibited gesture recognition wearables in the field. Our system is packaged as a generic (non-customized) arm wearable that integrates: 1) a novel mechanomyogram (MMG) sensing suite; 2) an integrated inertial measurement unit (IMU); 3) accompanying data acquisition and transmission hardware; and 4) real-time signal recognition algorithms to run on the receiving peripheral (e.g. computer, robot, etc.). We implement a rapid training routine capable of grasp pattern identification from small samples (20 per gesture) with less than 5-minute calibration time, which yields immediate real-time accuracies of 84% in amputees (3 gestures) and 89% in non-amputees (5 gestures), with the capacity to scale as users become more comfortable (accurate) with generated gestures. In repeated (5-day) usage with regular donning and doffing of the armband, 89%-91% accuracy is achieved with non-amputees using data over the previous days for reparameterization. Findings demonstrate the capacity to adapt to new able-bodied and amputee subjects with a generic armband and small training datasets, adapt as user proficiency increases, and provide consistent prediction for regular long-term use.

Conference paper

Formstone L, Huo W, Wilson S, McGregor A, Bentley P, Vaidyanathan R et al., 2021, Quantification of motor function post-stroke using wearable inertial and mechanomyographic sensors, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 1158-1167, ISSN: 1534-4320

Subjective clinical rating scales represent the gold-standard diagnosis of motor function following stroke; however, in practice they suffer from well-recognised limitations including variance between assessors, low inter-rater reliability and low resolution. Automated systems have been proposed for empirical quantification but have not significantly impacted clinical practice. We address translational challenges in this arena through: (1) implementation of a novel sensor suite fusing inertial measurement and mechanomyography (MMG) to quantify hand and wrist motor function; and (2) introduction of a new range of signal features extracted from the suite to supplement predicted clinical scores. The wearable sensors, signal features, and sensor fusion algorithms have been combined to produce classified ratings from the Fugl-Meyer clinical assessment rating scale. Furthermore, we have designed the system to augment clinical rating with several sensor-derived supplementary features encompassing critical aspects of motor dysfunction (e.g. joint angle, muscle activity, etc.). Performance is validated through a large-scale study on a post-stroke cohort of 64 patients. Fugl-Meyer Assessment tasks were classified with 75% accuracy for gross motor tasks and 62% for hand/wrist motor tasks. Of greater import, supplementary features demonstrated concurrent validity with Fugl-Meyer ratings, evidencing their utility as new measures of motor function suited to automated assessment. Finally, the supplementary features also provide continuous measures of sub-components of motor function, offering the potential to complement low-accuracy but well-validated clinical rating scales when high-quality motor outcome measures are required. We believe this work provides a basis for widespread clinical adoption of inertial-MMG sensor use for post-stroke clinical motor assessment. Index terms: stroke, Fugl-Meyer assessment, automated upper-limb assessment, wearables, machine learning, mechanomyography.

Journal article

Russell F, Takeda Y, Kormushev P, Vaidyanathan R, Ellison P et al., 2021, Stiffness modulation in a humanoid robotic leg and knee, IEEE Robotics and Automation Letters, Vol: 6, Pages: 2563-2570, ISSN: 2377-3766

Stiffness modulation in walking is critical to maintain static/dynamic stability as well as minimize energy consumption and impact damage. However, optimal, or even functional, stiffness parameterization remains unresolved in legged robotics. We introduce an architecture for stiffness control utilizing a bioinspired robotic limb consisting of a condylar knee joint and leg with antagonistic actuation. The joint replicates elastic ligaments of the human knee, providing tuneable compliance for walking. It further locks out at maximum extension, providing stability when standing. Compliance and friction losses between joint surfaces are derived as a function of ligament stiffness and length. Experimental studies validate utility through quantification of: 1) hip perturbation response; 2) payload capacity; and 3) static stiffness of the leg mechanism. Results prove initiation and compliance at lock out can be modulated independently of friction loss by changing ligament elasticity. Furthermore, increasing co-contraction or decreasing joint angle enables increased leg stiffness, which establishes that increased co-contraction is counterbalanced by decreased payload. Findings have direct application in legged robots and transfemoral prosthetic knees, where biorobotic design could reduce energy expense while improving efficiency and stability. Future targeted impact involves increasing power/weight ratios in walking robots and artificial limbs for increased efficiency and precision in walking control.

Journal article

Raposo de Lima M, Wairagkar M, Natarajan N, Vaitheswaran S, Vaidyanathan R et al., 2021, Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement, Frontiers in Robotics and AI, Vol: 8, ISSN: 2296-9144

COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural 'human-like' conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expression) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid-face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI, coupled to the robot's facial representation of emotions, such that the robot adapts its emotional response to users' speech in real time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates an overall enthusiastic and engaged reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to support…

Journal article

Mancero Castillo C, Wilson S, Vaidyanathan R, Atashzar F et al., 2021, Wearable MMG-plus-one armband: evaluation of normal force on mechanomyography (MMG) to enhance human-machine interfacing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 196-205, ISSN: 1534-4320

In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of normal pressure at the sensor location. Utilizing this novel approach, increased MMG signal resolution is enabled by a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with the normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and may allow interfacing with deeper muscles, thus enhancing the discriminative information content of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm and placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through a Neighborhood Component Analysis (NCA) technique to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force level between the MMG sensor and the skin can improve the discriminative power of the classifier, and the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in sockets for prosthetic limb control and HMI.
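The select-then-classify pipeline used here (NCA feature selection followed by a linear SVM) can be approximated on synthetic data with simpler stand-ins: a Fisher-score ranking in place of NCA, and a nearest-centroid rule in place of the SVM. Everything below — the dataset, dimensions, and both substitute algorithms — is illustrative, not the paper's method.

```python
import numpy as np

np.random.seed(1)
# Synthetic two-class "feature" matrix: only the first 5 of 50
# features carry class information, mimicking a small informative
# subspace inside a large spectrotemporal feature set.
n, d, informative = 200, 50, 5
y = np.repeat([0, 1], n // 2)
X = np.random.randn(n, d)
X[y == 1, :informative] += 2.0

def fisher_scores(X, y):
    """Per-feature class separability: squared mean gap over pooled variance."""
    m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
    v0, v1 = X[y == 0].var(0), X[y == 1].var(0)
    return (m0 - m1) ** 2 / (v0 + v1 + 1e-12)

top = np.argsort(fisher_scores(X, y))[::-1][:informative]
print(sorted(map(int, top)))  # should recover the informative features 0..4

# Nearest-centroid classification on the selected subspace
Xs = X[:, top]
c0, c1 = Xs[y == 0].mean(0), Xs[y == 1].mean(0)
pred = (np.linalg.norm(Xs - c1, axis=1) < np.linalg.norm(Xs - c0, axis=1)).astype(int)
print((pred == y).mean() > 0.9)  # True on this easily separable synthetic set
```

The point of the sketch is the structure of the pipeline — rank features by discriminative value, then classify in the reduced subspace — which is what NCA-plus-SVM does with more principled machinery.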

Journal article

Mashrur FR, Miya MTI, Rawnaque FS, Rahman KM, Vaidyanathan R, Anwar SF, Sarker F, Mamun KA et al., 2021, MarketBrain: an EEG-based intelligent consumer preference prediction system, 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Pages: 808-811, ISSN: 1557-170X

Conference paper

Ghosh AK, Balasubramanian S, Devasahayam S, Vaidyanathan R, Cherian A, Prasad J, Nowlan NC et al., 2020, Detection and Analysis of Fetal Movements Using an Acoustic Sensor-based Wearable Monitor, Pages: 512-516

Monitoring of fetal movements (FM) is considered an important part of fetal well-being assessment due to its association with several fetal health conditions, such as fetal distress, fetal growth restriction, and hypoxia. However, the current standard methods of FM quantification, e.g. ultrasonography, MRI, and cardiotocography, are limited to clinical environments. In this paper, we evaluate the performance of a low-cost, acoustic sensor-based wearable FM monitor that can be used by pregnant women at home. For data analysis, we develop a thresholding-based signal processing algorithm that fuses the outputs of all sensors to detect FM automatically. The results demonstrate the promising performance of the system, with sensitivity, specificity, and accuracy of 83.3%, 87.8%, and 87.1%, respectively, relative to maternal sensation of FM. Finally, spectrogram analysis reveals a spike-like morphology of the acoustic signals corresponding to true detected movements in the time-frequency domain, which is expected to be useful for developing more advanced signal processing algorithms to further improve detection accuracy.
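A thresholding-and-fusion detector of the kind described above can be sketched as follows. This is a deliberate simplification under stated assumptions — the per-sensor thresholds, the two-of-three voting rule, and the synthetic burst are all hypothetical, not the paper's algorithm:

```python
import numpy as np

def detect_movements(signals, thresh, min_votes=2):
    """Flag a sample as a movement candidate when at least `min_votes`
    sensors exceed their amplitude threshold (hypothetical fusion rule)."""
    over = np.abs(signals) > thresh[:, None]  # shape: (n_sensors, n_samples)
    return over.sum(axis=0) >= min_votes

# Synthetic example: 3 acoustic channels, one kick-like burst near sample 50.
rng = np.random.default_rng(1)
sig = 0.1 * rng.standard_normal((3, 100))
sig[:, 48:53] += 1.0
events = detect_movements(sig, thresh=np.full(3, 0.5))
print(int(events.sum()), "samples flagged")
```

Requiring agreement across sensors suppresses single-channel artefacts (e.g. a bump against one sensor) while still catching kicks, which couple into several channels at once.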

Conference paper

Gardner M, Mancero Castillo C, Wilson S, Farina D, Burdet E, Khoo BC, Atashzar SF, Vaidyanathan R et al., 2020, A multimodal intention detection sensor suite for shared autonomy of upper-limb robotic prostheses, Sensors, Vol: 20, ISSN: 1424-8220

Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals with impaired motor function. A major unresolved challenge, however, is the excessive cognitive load imposed by the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, restricting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred on a low-cost multimodal sensor suite fusing: (a) mechanomyography (MMG) to estimate intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasping from dynamical features measured during natural motion. A total of 84 motion features were extracted from the sensor suite, and tests were conducted with 10 able-bodied participants and 1 amputee grasping common household objects with a robotic hand. Real-time grasp classification using visual and motion features reached accuracies of 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, a lid, and a box, respectively. The proposed multimodal sensor suite is a novel approach to predicting different grasp strategies and automating task performance with a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems owing to its intuitive control design.
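One common way to combine modalities like these is early (feature-level) fusion: concatenate per-trial features from each sensor and train a single classifier. The sketch below is a toy illustration of that pattern only — the feature dimensions, the classifier, and the rule tying grasp type to the recognised object are all assumptions, not the paper's system:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 240
# Hypothetical per-trial features from each modality:
mmg = rng.standard_normal((n, 8))        # muscle-activation features
imu = rng.standard_normal((n, 6))        # grasp-trajectory features
obj = rng.integers(0, 3, size=(n, 1))    # recognised object id (0/1/2)
y = obj.ravel()                          # toy rule: grasp type follows object

X = np.hstack([mmg, imu, obj])           # early (feature-level) fusion
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"grasp accuracy: {clf.score(X_te, y_te):.2f}")
```

In this toy setup the object identity dominates the prediction, mirroring how visual object recognition can carry much of the grasp-selection burden in a shared-autonomy design.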

Journal article

Ghosh AK, Burniston SF, Krentzel D, Roy A, Sheikh AS, Siddiq T, Trinh PMP, Velazquez MM, Vielle H-T, Nowlan NC, Vaidyanathan R et al., 2020, A novel fetal movement simulator for the performance evaluation of vibration sensors for wearable fetal movement monitors, Sensors, Vol: 20, ISSN: 1424-8220

Fetal movements (FM) are an important factor in the assessment of fetal health. However, there is currently no reliable way to monitor FM outside clinical environs. While extensive research has been carried out on accelerometer-based systems for monitoring FM, the desired detection accuracy is yet to be achieved. A major challenge has been the difficulty of testing and calibrating sensors at the pre-clinical stage: little is known about the features of fetal movements, and clinical trials involving pregnant women can be expensive and ethically stringent. To address these issues, we introduce a novel FM simulator that can be used to test the responses of sensor arrays in a laboratory environment. The design uses a silicone-based membrane with material properties similar to those of a gravid abdomen to mimic the vibrations due to fetal kicks. The simulator incorporates mechanisms to pre-stretch the membrane and to produce kicks similar to those of a fetus. As a case study, we present results from a comparative study of an acoustic sensor, an accelerometer, and a piezoelectric diaphragm as candidate vibration sensors for a wearable FM monitor. We find that the acoustic sensor and the piezoelectric diaphragm are better equipped than the accelerometer to determine the durations, intensities, and locations of kicks, as they respond significantly more strongly to changes in these conditions. Additionally, we demonstrate that the acoustic sensor and the piezoelectric diaphragm can detect weaker fetal movements (threshold wall displacements below 0.5 mm) than the accelerometer (threshold wall displacement of 1.5 mm), with a trade-off of higher-power signal artefacts. Finally, we find that the piezoelectric diaphragm produces better signal-to-noise ratios than the other two sensors in most cases, making it a promising new candidate sensor for wearable FM monitors. We believe that the FM simulator represents a key development towards enabl

Journal article

Sajal MSR, Ehsan MT, Vaidyanathan R, Wang S, Aziz T, Mamun KAA et al., 2020, Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis, Brain Inform, Vol: 7, ISSN: 2198-4018

BACKGROUND: With the growing aged population, the number of people affected by Parkinson's disease (PD) is also rising. Unfortunately, due to insufficient resources and awareness in underdeveloped countries, proper and timely PD detection is highly challenging. Moreover, patients' symptoms are neither uniform nor equally pronounced at the same stage of the illness. This work therefore aims to combine more than one symptom (rest tremor and voice degradation) by collecting data remotely using smartphones, and to detect PD with a cloud-based machine learning system for telemonitoring PD patients in developing countries. METHOD: The proposed system receives rest tremor and vowel phonation data acquired by smartphones with built-in accelerometer and voice recorder sensors. Data are primarily collected from diagnosed PD patients and healthy people to build and optimize machine learning models for high performance. Data from newly suspected PD patients are then collected, and the trained algorithms are evaluated to detect PD. Based on a majority vote from those algorithms, patients flagged as having PD are connected with a nearby neurologist for consultation. Upon receiving patients' feedback after diagnosis by the neurologist, the system may update the model by retraining on the latest data. The system also periodically requests detected patients to upload new data to track their disease progress. RESULT: The highest accuracy in PD detection using offline data was [Formula: see text] from voice data and [Formula: see text] from tremor data when used separately. In both cases, k-nearest neighbors (kNN) gave the highest accuracy over support vector machine (SVM) and naive Bayes (NB). Application of the maximum relevance minimum redundancy (MRMR) feature selection method showed that selecting different feature sets based on the patient's gender could improve detection accuracy. This st
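A majority vote over the three classifier families named above (kNN, SVM, NB) can be sketched with scikit-learn's hard-voting ensemble. The synthetic dataset below is a stand-in for real tremor/voice feature vectors, and the hyperparameters are illustrative assumptions:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Synthetic stand-in for tremor/voice features (PD vs. healthy labels).
X, y = make_classification(n_samples=300, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

vote = VotingClassifier(
    estimators=[("knn", KNeighborsClassifier(n_neighbors=5)),
                ("svm", SVC()),
                ("nb", GaussianNB())],
    voting="hard",  # each model casts one vote; majority label wins
)
vote.fit(X_tr, y_tr)
print(f"held-out accuracy: {vote.score(X_te, y_te):.2f}")
```

Hard voting is attractive for a screening system because a positive flag requires agreement between independently trained models, reducing single-model false alarms before a neurologist referral.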

Journal article

Russell F, Kormushev P, Vaidyanathan R, Ellison P et al., 2020, The impact of ACL laxity on a bicondylar robotic knee and implications in human joint biomechanics, IEEE Transactions on Biomedical Engineering, Vol: 67, Pages: 2817-2827, ISSN: 0018-9294

Objective: Elucidating the role of structural mechanisms in the knee can improve joint surgeries, rehabilitation, and the understanding of biped locomotion. Identification of key features, however, is challenging due to limitations of simulation and in-vivo studies. In particular, the coupling of the patello-femoral and tibio-femoral joints with the ligaments, and its impact on joint mechanics and movement, is not understood. We investigate this coupling experimentally through the design and testing of a robotic sagittal plane model. Methods: We constructed a sagittal plane robot comprising: 1) elastic links representing the cruciate ligaments; 2) a bi-condylar joint; 3) a patella; and 4) actuated hamstrings and quadriceps. Stiffness and geometry were derived from anthropometric data. Squatting tests from 10° to 110° were executed at speeds of 0.1-0.25 Hz over a range of anterior cruciate ligament (ACL) slack lengths. Results: Increasing ACL length compromised joint stability, yet did not affect the quadriceps' mechanical advantage or the force required for squatting. The trend was consistent across varying condyle contact points and ligament force changes. Conclusion: The geometry of the condyles allows the ratio of quadriceps to patella tendon force to compensate for the contact point changes imparted by removal of the ACL; the system thus maintains a constant mechanical advantage. Significance: The investigation uncovers critical features of human knee biomechanics. The findings contribute to the understanding of knee ligament damage, inform procedures for knee surgery and orthopaedic implant design, and support the design of trans-femoral prosthetics and walking robots. The results further demonstrate the utility of robotics as a powerful means of studying human joint biomechanics.

Journal article

Masen MA, Chung A, Dawczyk JU, Dunning Z, Edwards L, Guyott C, Hall TAG, Januszewski RC, Jiang S, Jobanputra RD, Karunaseelan KJ, Kalogeropoulos N, Lima MR, Mancero Castillo CS, Mohammed IK, Murali M, Paszkiewicz FP, Plotczyk M, Pruncu CI, Rodgers E, Russell F, Silversides R, Stoddart JC, Tan Z, Uribe D, Yap KK, Zhou X, Vaidyanathan R et al., 2020, Evaluating lubricant performance to reduce COVID-19 PPE-related skin injury, PLoS One, Vol: 15, Pages: e0239363-e0239363, ISSN: 1932-6203

Background: Healthcare workers around the world are experiencing skin injury due to the extended use of personal protective equipment (PPE) during the COVID-19 pandemic. These injuries result from high shear stresses acting on the skin, caused by friction with the PPE. This study aims to provide a practical lubricating solution for frontline medical staff working shifts of 4+ hours in PPE. Methods: A literature review of skin friction and skin lubrication was conducted to identify products and substances that can reduce friction. We evaluated the lubricating performance of commercially available products in vivo using a custom-built tribometer. Findings: Most lubricants provide a strong initial friction reduction, but only a few products provide lubrication that lasts for four hours. The response of skin to friction is a complex interplay between the lubricating properties and durability of the film deposited on the surface and the response of the skin to the lubricating substance, which includes epidermal absorption, occlusion, and water retention. Interpretation: Talcum powder, a petrolatum-lanolin mixture, and a coconut oil-cocoa butter-beeswax mixture showed excellent long-lasting low friction. Moisturising the skin results in excessive friction, and products aimed at 'moisturising without leaving a greasy feel' should be avoided. Most of the investigated dressings also demonstrated excellent performance.

Journal article

Rawnaque FS, Rahman KM, Anwar SF, Vaidyanathan R, Chau T, Sarker F, Mamun KAA et al., 2020, Technological advancements and opportunities in Neuromarketing: a systematic review, Brain Informatics, Vol: 7, ISSN: 2198-4018

Neuromarketing has become an academic and commercial area of interest, as advancements in neural recording techniques and interpreting algorithms have made it an effective tool for recognizing the unspoken responses of consumers to marketing stimuli. This article presents the first systematic review of technological advancements in the neuromarketing field over the last 5 years. For this purpose, the authors selected and reviewed a total of 57 relevant publications from established databases that contribute directly to the neuromarketing field with basic or empirical research findings. The review finds consumer goods to be the prevalent marketing stimuli, used in both product and promotion forms in the selected publications. A trend of analyzing frontal and prefrontal alpha band signals is observed among the consumer emotion recognition-based experiments, corresponding to the frontal alpha asymmetry theory. Electroencephalography (EEG) is favored by many researchers over functional magnetic resonance imaging (fMRI) in video advertisement-based neuromarketing experiments, apparently due to its lower cost and higher temporal resolution. Physiological response measuring techniques such as eye tracking, skin conductance recording, heart rate monitoring, and facial mapping were also found in these empirical studies, either exclusively or in parallel with brain recordings. Alongside traditional filtering methods, independent component analysis (ICA) was the most common method for artifact removal from neural signals. In consumer response prediction and classification, Artificial Neural Network (ANN), Support Vector Machine (SVM), and Linear Discriminant Analysis (LDA) achieved the highest average accuracy among the machine learning algorithms used in these publications. The authors hope this review will assist future researchers with vital information in the field of neuromarketing for making novel contributions.

Journal article

Morad S, Ulbricht C, Harkin P, Chan J, Parker K, Vaidyanathan R et al., 2020, Surgical Robot Platform with a Novel Concentric Joint for Minimally Invasive Procedures, Journal of Medical Robotics Research, Vol: 5, ISSN: 2424-9068

In this paper, a surgical robot platform with a novel concentric connector joint (CCJ) is presented. The surgical robot is a parallel platform comprising multiple struts, arranged in a geometrically stable array and connected at their end points via CCJs. The CCJ has near-perfect concentricity of rotation around the node point, which enables the tension and compression forces of the struts to be resolved in a structurally efficient manner. Preliminary feasibility tests, modelling, and simulations are presented.

Journal article

Madgwick SOH, Wilson S, Turk R, Burridge J, Kapatos C, Vaidyanathan R et al., 2020, An extended complementary filter (ECF) for full-body MARG orientation estimation, IEEE/ASME Transactions on Mechatronics, Vol: 25, Pages: 2054-2064, ISSN: 1083-4435

Inertial sensing suites now permeate all forms of smart automation, yet a plateau exists in real-world derivation of global orientation: magnetic field fluctuations and inefficient sensor fusion still inhibit deployment. We introduce a new algorithm, an Extended Complementary Filter (ECF), to derive 3D rigid body orientation from inertial sensing suites while addressing these challenges. The ECF combines the computational efficiency of classic complementary filters with improved accuracy compared to popular optimization-based filters. We present a complete formulation of the algorithm, including an extension addressing orientation accuracy in the presence of fluctuating magnetic fields. Performance is tested under a variety of conditions and benchmarked against the commonly used gradient descent algorithm (GDA) for inertial sensor fusion. Results demonstrate improved efficiency, with the ECF achieving convergence 30% faster than standard alternatives. We further demonstrate improved robustness to sources of magnetic interference in pitch and roll, and to fast changes of orientation in the yaw direction. The ECF has been implemented at the core of a wearable rehabilitation system tracking the movement of stroke patients for home telehealth. The ECF and its accompanying magnetic disturbance rejection algorithm enable previously unachievable real-time patient movement feedback in the form of a full virtual human (avatar), even in the presence of magnetic disturbance. The algorithm's efficiency and accuracy have also spawned a commercial product line released by the company x-io. We believe the ECF and accompanying magnetic disturbance routines are key enablers for future widespread use of wearable systems with the capacity for global orientation tracking.
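The baseline idea that the ECF extends — blending a drift-free but noisy accelerometer tilt estimate with a smooth but drifting gyroscope integral — can be illustrated with a classic first-order complementary filter for a single pitch angle. This is a textbook sketch, not the ECF itself; the blending constant and sample rate are arbitrary assumptions:

```python
import math

def complementary_pitch(gyro_y, acc_x, acc_z, dt, alpha=0.98, pitch0=0.0):
    """Classic first-order complementary filter for pitch (radians):
    integrate the gyro for short-term accuracy, and blend in the
    accelerometer's gravity-derived tilt to cancel long-term drift."""
    pitch = pitch0
    out = []
    for gy, ax, az in zip(gyro_y, acc_x, acc_z):
        pitch_acc = math.atan2(-ax, az)                  # tilt from gravity
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * pitch_acc
        out.append(pitch)
    return out

# Static body with a wrong initial estimate (0.3 rad): zero gyro rate,
# gravity along z implies a true pitch of 0, so the estimate decays to it.
angles = complementary_pitch([0.0] * 100, [0.0] * 100, [9.81] * 100,
                             dt=0.01, pitch0=0.3)
print(f"final pitch: {angles[-1]:.3f} rad")
```

Full 3D MARG filters such as the ECF generalise this scalar blend to quaternions and add a magnetometer correction for yaw, which is exactly where the magnetic-disturbance handling described above becomes necessary.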

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
