Vaidyanathan R, 2023, 3D Muscle Networks based on Vibrational Mechanomyography, Journal of Neural Engineering, ISSN: 1741-2552
Raposo de Lima M, Vaidyanathan R, Barnaghi P, 2023, Discovering behavioural patterns using conversational technology for in-home health and well-being monitoring, IEEE Internet of Things Journal, ISSN: 2327-4662
Advancements in conversational AI have created unparalleled opportunities to promote the independence and well-being of older adults, including people living with dementia (PLWD). However, conversational agents have yet to demonstrate a direct impact in supporting target populations at home, particularly with long-term user benefits and clinical utility. We introduce an infrastructure fusing in-home activity data captured by Internet of Things (IoT) technologies with voice interactions using conversational technology (Amazon Alexa). We collect 3103 person-days of voice and environmental data across 14 households with PLWD to identify behavioural patterns. Interactions include an automated well-being questionnaire and 10 topics of interest, identified using topic modelling. Although a significant decrease in conversational technology usage was observed after the novelty phase across the cohort, steady state data acquisition for modelling was sustained. We analyse household activity sequences preceding or following Alexa interactions through pairwise similarity and clustering methods. Our analysis demonstrates the capability to identify individual behavioural patterns, changes in those patterns and the corresponding time periods. We further report that households with PLWD continued using Alexa following clinical events (e.g., hospitalisations), which offers a compelling opportunity for proactive health and well-being data gathering related to medical changes. Results demonstrate the promise of conversational AI in digital health monitoring for ageing and dementia support and offer a basis for tracking health and deterioration as indicated by household activity, which can inform healthcare professionals and relevant stakeholders for timely interventions. Future work will use the bespoke behavioural patterns extracted to create more personalised AI conversations.
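The pairwise-similarity analysis of household activity sequences described above can be illustrated with a minimal sketch: each day is encoded as a string of hypothetical room codes (e.g. K = kitchen, L = lounge, B = bedroom) and days are compared by normalised edit distance. Both the encoding and the similarity measure are illustrative assumptions, not the paper's actual pipeline.

```python
from itertools import combinations

def edit_distance(a, b):
    # classic dynamic-programming Levenshtein distance
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def pairwise_similarity(seqs):
    # normalised similarity in [0, 1]: 1.0 means identical day sequences
    sim = {}
    for (i, a), (j, b) in combinations(enumerate(seqs), 2):
        d = edit_distance(a, b)
        sim[(i, j)] = 1 - d / max(len(a), len(b), 1)
    return sim

days = ["KLBK", "KLBK", "KBLK", "BBBB"]  # hypothetical room codes per day
sim = pairwise_similarity(days)
```

A similarity matrix like `sim` can then feed any standard clustering method to surface recurring daily routines and departures from them.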
Su T, Calvo RA, Jouaiti M, et al., 2023, Assessing a sleep interviewing chatbot to improve subjective and objective sleep: protocol for an observational feasibility study, JMIR Research Protocols, Vol: 12, Pages: 1-10, ISSN: 1929-0748
BACKGROUND: Sleep disorders are common among the aging population and people with neurodegenerative diseases. Sleep disorders have a strong bidirectional relationship with neurodegenerative diseases, where they accelerate and worsen one another. Although one-to-one individual cognitive behavioral interventions (conducted in-person or on the internet) have shown promise for significant improvements in sleep efficiency among adults, many may experience difficulties accessing interventions with sleep specialists, psychiatrists, or psychologists. Therefore, delivering sleep intervention through an automated chatbot platform may be an effective strategy to increase the accessibility and reach of sleep disorder intervention among the aging population and people with neurodegenerative diseases. OBJECTIVE: This work aims to (1) determine the feasibility and usability of an automated chatbot (named MotivSleep) that conducts sleep interviews to encourage the aging population to report behaviors that may affect their sleep, followed by providing personalized recommendations for better sleep based on participants' self-reported behaviors; (2) assess the self-reported sleep assessment changes before, during, and after using our automated sleep disturbance intervention chatbot; (3) assess the changes in objective sleep assessment recorded by a sleep tracking device before, during, and after using the automated chatbot MotivSleep. METHODS: We will recruit 30 older adult participants from West London for this pilot study. Each participant will have a sleep analyzer installed under their mattress. This contactless sleep monitoring device passively records movements, heart rate, and breathing rate while participants are in bed. In addition, each participant will use our proposed chatbot MotivSleep, accessible on WhatsApp, to describe their sleep and behaviors related to their sleep and receive personalized recommendations for better sleep tailored to their specific reasons for disrup
Martineau T, He S, Vaidyanathan R, et al., 2023, Hyper-parameter tuning and feature extraction for asynchronous action detection from sub-thalamic nucleus local field potentials, Frontiers in Human Neuroscience, Vol: 17, ISSN: 1662-5161
INTRODUCTION: Decoding brain states from subcortical local field potentials (LFPs) indicative of activities such as voluntary movement, tremor, or sleep stages, holds significant potential in treating neurodegenerative disorders and offers new paradigms in brain-computer interface (BCI). Identified states can serve as control signals in coupled human-machine systems, e.g., to regulate deep brain stimulation (DBS) therapy or control prosthetic limbs. However, the behavior, performance, and efficiency of LFP decoders depend on an array of design and calibration settings encapsulated into a single set of hyper-parameters. Although methods exist to tune hyper-parameters automatically, decoders are typically found through exhaustive trial-and-error, manual search, and intuitive experience. METHODS: This study introduces a Bayesian optimization (BO) approach to hyper-parameter tuning, applicable through feature extraction, channel selection, classification, and stage transition stages of the entire decoding pipeline. The optimization method is compared with five real-time feature extraction methods paired with four classifiers to decode voluntary movement asynchronously based on LFPs recorded with DBS electrodes implanted in the subthalamic nucleus of Parkinson's disease patients. RESULTS: Detection performance, measured as the geometric mean between classifier specificity and sensitivity, is automatically optimized. BO demonstrates improved decoding performance from initial parameter setting across all methods. The best decoders achieve a maximum performance of 0.74 ± 0.06 (mean ± SD across all participants) sensitivity-specificity geometric mean. In addition, parameter relevance is determined using the BO surrogate models. DISCUSSION: Hyper-parameters tend to be sub-optimally fixed across different users rather than individually adjusted or even specifically set for a decoding task. The relevance of each parameter to the optimization problem and comparison
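The detection metric used above, the geometric mean of sensitivity and specificity, can be computed directly from confusion-matrix counts; the counts below are made-up numbers for illustration only.

```python
import math

def detection_score(tp, fn, tn, fp):
    """Geometric mean of classifier sensitivity and specificity."""
    sensitivity = tp / (tp + fn)  # fraction of true movements detected
    specificity = tn / (tn + fp)  # fraction of rest windows left undisturbed
    return math.sqrt(sensitivity * specificity)

# hypothetical decoder: 80/100 movements caught, 70/100 rest windows correct
score = detection_score(tp=80, fn=20, tn=70, fp=30)  # sqrt(0.8 * 0.7) ≈ 0.748
```

Unlike raw accuracy, this score collapses to zero if the decoder ignores either class, which is why it suits asynchronous detection with imbalanced movement/rest data.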
Lima MR, Wairagkar M, Gupta M, et al., 2022, Conversational affective social robots for ageing and dementia support, IEEE Transactions on Cognitive and Developmental Systems, Vol: 14, Pages: 1378-1397, ISSN: 2379-8920
Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.
Jing S, Huang H-Y, Vaidyanathan R, et al., 2022, Accurate and robust locomotion mode recognition using high-density EMG recordings from a single muscle group, Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Vol: 2022, Pages: 686-689
Existing methods for human locomotion mode recognition often rely on using multiple bipolar electrode sensors on multiple muscle groups to accurately identify underlying motor activities. To avoid this complex setup and facilitate the translation of this technology, we introduce a single grid of high-density surface electromyography (HDsEMG) electrodes mounted on a single location (above the rectus femoris) to classify six locomotion modes in human walking. By employing a neural network, the trained model achieved average recognition accuracy of 97.7% with 160ms latency, significantly better than the model trained with one bipolar electrode pair placed on the same muscle (71.4% accuracy). To further exploit the spatial and temporal information of HDsEMG, we applied data augmentation to generate artificial data from simulated displaced electrodes, aiming to counteract the influence of electrode shifts. By employing a convolutional neural network with the enhanced dataset, the updated model was not strongly affected by electrode misplacement (93.9% accuracy) while models trained by bipolar electrode data were significantly disrupted by electrode shifts (29.4% accuracy). Findings suggest HDsEMG could be a valuable resource for mapping gait with fewer sensor locations and greater robustness. Results offer future promise for real-time control of assistive technology such as exoskeletons.
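The electrode-shift augmentation idea above, generating artificial training frames from simulated displaced electrodes, can be sketched as a spatial shift of the HD-sEMG channel grid; zero-filling the vacated channels is an assumption for this sketch, not necessarily the authors' documented scheme.

```python
import numpy as np

def simulate_electrode_shift(grid_frame, rows=0, cols=0):
    """Approximate an electrode-grid displacement by shifting the channel
    grid by whole electrode pitches; vacated channels are zero-filled."""
    shifted = np.roll(grid_frame, shift=(rows, cols), axis=(0, 1))
    if rows > 0:
        shifted[:rows, :] = 0
    elif rows < 0:
        shifted[rows:, :] = 0
    if cols > 0:
        shifted[:, :cols] = 0
    elif cols < 0:
        shifted[:, cols:] = 0
    return shifted

frame = np.arange(16, dtype=float).reshape(4, 4)     # toy 4x4 electrode grid
augmented = simulate_electrode_shift(frame, rows=1)  # grid slid down one row
```

Training a CNN on both original and shifted frames is what makes the model tolerant to the electrode misplacement reported above.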
Hopkins M, Turner S, McGregor A, 2022, Mapping lower-limb prosthesis load distributions using a low-cost pressure measurement system, Frontiers in Medical Technology, Vol: 4, Pages: 1-9, ISSN: 2673-3129
Background: In the UK 55,000 people live with a major limb amputation. The prosthetic socket is problematic for users in relation to comfort and acceptance of the prosthesis, and is associated with the development of cysts and sores. Objectives: We have developed a prototype low-cost system combining low-profile pressure-sensitive sensors with an inertial measurement unit to assess loading distribution within prosthetic sockets. The objective of this study was to determine the ability of this prototype to assess in-socket loading profiles of a person with an amputation during walking, with a view to understanding socket design and fit. Methods: The device was evaluated on four transtibial participants of various ages and activity levels. The pressure sensors were embedded in the subjects' sockets and an inertial measurement unit was attached to the posterior side of the socket. Measurements were taken during level walking in a gait lab. Results: The sensors were able to dynamically collect data, informing loading profiles within the socket which were in line with expected distributions for patellar-tendon-bearing and total-surface-bearing sockets. The patellar-tendon-bearing subject displayed loading predominantly at the patellar tendon, tibial and lateral gastrocnemius regions. The total-surface-bearing subjects indicated even load distribution throughout the socket except in one participant who presented with a large socket-foot misalignment. Conclusions: The sensors provided objective data showing the pressure distributions inside the prosthetic socket. The sensors were able to measure the pressure in the socket with sufficient accuracy to distinguish pressure regions that matched expected loading patterns. The information may be useful to aid fitting of complex residual limbs and for those with reduced sensation in their residual limb, alongside the subjective feedback from prosthesis users.
Nazneen T, Islam IB, Sajal MSR, et al., 2022, Recent trends in non-invasive neural recording based brain-to-brain synchrony analysis on multidisciplinary human interactions for understanding brain dynamics: a systematic review, Frontiers in Computational Neuroscience, Vol: 16, Pages: 1-19, ISSN: 1662-5188
The study of brain-to-brain synchrony has a burgeoning application in brain-computer interface (BCI) research, offering valuable insights into the neural underpinnings of interacting human brains using numerous neural recording technologies. The area allows exploring the commonality of brain dynamics by evaluating the neural synchronization among a group of people performing a specified task. The growing number of publications on brain-to-brain synchrony inspired the authors to conduct a systematic review using the PRISMA protocol so that future researchers can get a comprehensive understanding of the paradigms, methodologies, translational algorithms, and challenges in the area of brain-to-brain synchrony research. This review went through a systematic search with a specified search string and selected articles based on pre-specified eligibility criteria. The findings from the review revealed that most of the articles have followed the social psychology paradigm, while 36% of the selected studies have an application in cognitive neuroscience. The most applied approach to determine neural connectivity is a coherence measure utilizing phase-locking value (PLV) in the EEG studies, followed by wavelet transform coherence (WTC) in all of the fNIRS studies. While most of the experiments have control experiments as part of their setup, a small number implemented algorithmic control, and only one study had an interventional or stimulus-induced control experiment to limit spurious synchronization. Hence, to the best of the authors' knowledge, this systematic review critically evaluates the scope and technological advances of brain-to-brain synchrony to allow this discipline to produce more effective research outcomes in the future.
Mashrur FR, Rahman KM, Miya MTI, et al., 2022, An intelligent neuromarketing system for predicting consumers' future choice from electroencephalography signals, Physiology and Behavior, Vol: 253, Pages: 1-9, ISSN: 0031-9384
Neuromarketing utilizes Brain-Computer Interface (BCI) technologies to provide insight into consumers' responses to marketing stimuli. In order to achieve such insight, marketers spend about $400 billion annually on marketing, promotion, and advertisement using traditional marketing research tools. In addition, these tools, like personal depth interviews, surveys, focus group discussions, etc., are expensive and frequently criticized for failing to extract actual consumer preferences. Neuromarketing, on the other hand, promises to overcome such constraints. In this work, an EEG-based neuromarketing framework is employed for predicting consumer future choice (affective attitude) while they view e-commerce products. After preprocessing, three types of features, namely, time, frequency, and time-frequency domain features, are extracted. Then, wrapper-based Support Vector Machine-Recursive Feature Elimination (SVM-RFE) along with correlation bias reduction is used for feature selection. Lastly, we use SVM for categorizing positive affective attitude and negative affective attitude. Experiments show that the frontal cortex achieves the best accuracy of 98.67 ± 2.98, 98 ± 3.22, and 98.67 ± 3.52 for 5-fold, 10-fold, and leave-one-subject-out (LOSO) cross-validation respectively. In addition, among all the channels, Fz achieves the best accuracy of 90 ± 7.81, 90.67 ± 9.53, and 92.67 ± 7.03 for 5-fold, 10-fold, and LOSO respectively. Subsequently, this work opens the door for implementing such a neuromarketing framework using consumer-grade devices in a real-life setting for marketers. As a result, it is evident that EEG-based neuromarketing technologies can assist brands and enterprises in forecasting future consumer preferences accurately. Hence, it will pave the way for the creation of an intelligent marketing assistive system for neuromarketing applications in the future.
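As a rough illustration of the wrapper-based SVM-RFE step (without the correlation-bias-reduction refinement used in the paper), scikit-learn's RFE with a linear SVM can prune synthetic EEG-style features; the data and labels below are invented for the example.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.feature_selection import RFE

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))           # 100 trials x 10 EEG features (synthetic)
y = (X[:, 0] + X[:, 3] > 0).astype(int)  # labels driven by features 0 and 3 only

# wrapper-based SVM-RFE: rank features by linear-SVM weights, prune iteratively
selector = RFE(SVC(kernel="linear"), n_features_to_select=2).fit(X, y)
kept = np.flatnonzero(selector.support_)  # indices of surviving features
```

Because elimination is driven by the classifier's own weights, RFE is a wrapper method, in contrast to filter methods that score features independently of the model.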
Mashrur FR, Rahman KM, Miya MTI, et al., 2022, BCI-Based consumers' choice prediction from EEG signals: an intelligent neuromarketing framework, Frontiers in Human Neuroscience, Vol: 16, Pages: 1-13, ISSN: 1662-5161
Neuromarketing relies on Brain-Computer Interface (BCI) technology to gain insight into how customers react to marketing stimuli. Marketers spend about $750 billion annually on traditional marketing campaigns. They use traditional marketing research procedures such as personal depth interviews, surveys, focused group discussions, and so on, which are frequently criticized for failing to extract true consumer preferences. On the other hand, neuromarketing promises to overcome such constraints. This work proposes a machine learning framework for predicting consumers' purchase intention (PI) and affective attitude (AA) by analyzing EEG signals. EEG signals are collected from 20 healthy participants while administering three advertising stimuli settings: product, endorsement, and promotion. After preprocessing, features are extracted in three domains (time, frequency, and time-frequency). Then, after selecting features using the wrapper-based Recursive Feature Elimination method, a Support Vector Machine is used for categorizing positive and negative AA and PI. The experimental results show that the proposed framework achieves accuracies of 84% and 87% for PI and AA respectively, ensuring the simulation of real-life results. In addition, AA and PI signals show N200 and N400 components when people tend to make decisions after visualizing a static advertisement. Moreover, negative AA signals show more dispersion than positive AA signals. Furthermore, this work paves the way for implementing such a neuromarketing framework using consumer-grade EEG devices in a real-life setting. Therefore, it is evident that BCI-based neuromarketing technology can help brands and businesses accurately forecast future consumer preferences.
Wairagkar M, Lima MR, Bazo D, et al., 2022, Emotive response to a hybrid-face robot and translation to consumer social robots, IEEE Internet of Things Journal, Vol: 9, Pages: 3174-3188, ISSN: 2327-4662
We present the conceptual formulation, design, fabrication, control and commercial translation of an IoT-enabled social robot as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) responses to perceived emotional information as well as through qualitative interviews. Results demonstrate core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as N170 and Vertex Positive Potentials in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko (‘My Companion’), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those of the hybrid-face robot, validating design modifications implemented for large-scale distribution. Finally, interviews show above 90% expression recognition rates for our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.
Mancero Castillo CS, Vaidyanathan R, Atashzar SF, 2022, Synergistic upper-limb functional muscle connectivity using acoustic mechanomyography, IEEE Transactions on Biomedical Engineering, Vol: 69, Pages: 2569-2580, ISSN: 0018-9294
Functional connectivity is a critical concept in describing synergistic muscle synchronization for the execution of complex motor tasks. Muscle synchronization is typically derived from the decomposition of intermuscular coherence (IMC) at different frequency bands through electromyography (EMG) signal analysis with limited out-of-clinic applications. In this investigation, we introduce muscle network analysis to assess the coordination and functional connectivity of muscles based on mechanomyography (MMG), focused on a targeted group of muscles that are typically active in the conduction of activities of daily living using the upper limb. In this regard, functional muscle networks are evaluated in this paper for ten able-bodied participants and three amputees. MMG activity was acquired from a custom-made wearable MMG armband placed over four superficial muscles around the forearm (i.e., flexor carpi radialis (FCR), brachioradialis (BR), extensor digitorum communis (EDC), and flexor carpi ulnaris (FCU)) while participants performed four different hand gestures. The results of connectivity analysis at multiple frequency bands showed significant topographical differences across gestures for low (< 5Hz) and high (> 12 Hz) frequencies and observable differences between able-bodied and amputee subjects. These findings show evidence that MMG can be used for the analysis of functional muscle connectivity and mapping of synergistic synchronization of upper-limb muscles in complex upper-limb tasks. The new physiological modality further provides key insights into the neural circuitry of motor coordination and offers the concomitant outcomes of demonstrating the feasibility of MMG to map muscle coherence from a neurophysiological perspective as well as providing the mechanistic basis for its translation into human-robot interfaces.
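Intermuscular coherence of the kind decomposed above is typically estimated as magnitude-squared coherence between two channels. A toy version with a shared 10 Hz drive (synthetic signals, not MMG recordings; the sampling rate is an assumption) looks like:

```python
import numpy as np
from scipy.signal import coherence

fs = 1000                                     # Hz, assumed sampling rate
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 10 * t)           # common low-frequency drive
ch1 = shared + 0.5 * rng.normal(size=t.size)  # two noisy "muscle" channels
ch2 = shared + 0.5 * rng.normal(size=t.size)

# Welch-averaged magnitude-squared coherence; 1 kHz with 1000-point segments
# puts the 10 Hz drive exactly on a frequency bin
f, Cxy = coherence(ch1, ch2, fs=fs, nperseg=1000)
peak = Cxy[np.argmin(np.abs(f - 10.0))]       # high where channels synchronise
```

Repeating this for every muscle pair and thresholding per frequency band yields the adjacency matrix of a functional muscle network like those analysed above.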
Natarajan N, Vaitheswaran S, Raposo de Lima M, et al., 2022, Acceptability of social robots and adaptation of hybrid-face robot for dementia care in India: a qualitative study, American Journal of Geriatric Psychiatry, Vol: 30, Pages: 240-245, ISSN: 1064-7481
Objectives: This study aims to understand the acceptability of social robots and the adaptation of the Hybrid-Face Robot for dementia care in India. Methods: We conducted a focus group discussion and in-depth interviews with persons with dementia (PwD), their caregivers, professionals in the field of dementia, and technical experts in robotics to collect qualitative data. Results: This study explored the following themes: Acceptability of Robots in Dementia Care in India, Adaptation of the Hybrid-Face Robot, and the Future of Robots in Dementia Care. Caregivers and PwD were open to the idea of social robot use in dementia care; caregivers perceived it to help with the challenges of caregiving and positively viewed a future with robots. Discussion: This study is the first of its kind to explore the use of social robots in dementia care in India by highlighting user needs and requirements that determine acceptability and guiding adaptation.
Dev A, Roy N, Islam MK, et al., 2022, Exploration of EEG-based depression biomarkers identification techniques and their applications: a systematic review, IEEE Access, Vol: 10, Pages: 16756-16781, ISSN: 2169-3536
Paszkiewicz FP, Wilson S, Oddsson M, et al., 2022, Microphone mechanomyography sensors for movement analysis and identification, 7th IEEE International Conference on Advanced Robotics and Mechatronics, Publisher: IEEE, Pages: 118-125
Amerini R, Gupta L, Steadman N, et al., 2021, Fusion models for generalized classification of multi-axial human movement: validation in sport performance, Sensors, Vol: 21, ISSN: 1424-8220
We introduce a set of input models for fusing information from ensembles of wearable sensors supporting human performance and telemedicine. Veracity is demonstrated in action classification related to sport, specifically strikes in boxing and taekwondo. Four input models, formulated to be compatible with a broad range of classifiers, are introduced and two diverse classifiers, dynamic time warping (DTW) and convolutional neural networks (CNNs) are implemented in conjunction with the input models. Seven classification models fusing information at the input-level, output-level, and a combination of both are formulated. Action classification for 18 boxing punches and 24 taekwondo kicks demonstrate our fusion classifiers outperform the best DTW and CNN uni-axial classifiers. Furthermore, although DTW is ostensibly an ideal choice for human movements experiencing non-linear variations, our results demonstrate deep learning fusion classifiers outperform DTW. This is a novel finding given that CNNs are normally designed for multi-dimensional data and do not specifically compensate for non-linear variations within signal classes. The generalized formulation enables subject-specific movement classification in a feature-blind fashion with trivial computational expense for trained CNNs. A commercial boxing system, ‘Corner’, has been produced for real-world mass-market use based on this investigation providing a basis for future telemedicine translation.
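Since dynamic time warping features centrally in the comparison above, a bare-bones DTW distance (absolute-difference cost, full warping window) helps fix ideas; the punch-like profiles are invented for the example.

```python
import math

def dtw_distance(a, b):
    """Plain dynamic-time-warping distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = [[math.inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            # extend the cheapest of the three admissible warping steps
            cost[i][j] = d + min(cost[i - 1][j], cost[i][j - 1], cost[i - 1][j - 1])
    return cost[n][m]

# a time-stretched copy of a strike-like profile stays close under DTW
template = [0, 1, 3, 1, 0]
stretched = [0, 0, 1, 1, 3, 3, 1, 1, 0, 0]
```

Because the warping path absorbs the duplicated samples, `dtw_distance(template, stretched)` is zero even though the sequences differ in length, which is exactly the non-linear time variation the abstract refers to.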
Wairagkar M, De Lima MR, Harrison M, et al., 2021, Conversational artificial intelligence and affective social robot for monitoring health and well-being of people with dementia, Alzheimer's & Dementia, Vol: 17 Suppl 11, Pages: e053276-e053276, ISSN: 1552-5260
BACKGROUND: Social robots are anthropomorphised platforms developed to interact with humans using natural language, offering an accessible and intuitive interface suited to diverse cognitive abilities. Social robots can be used to support people with dementia (PwD) and carers in their homes by managing medication, hydration and appointments, and evaluating mood, wellbeing, and potentially cognitive decline. Such robots have potential to reduce care burden and prolong independent living, yet translation into use by PwD remains insignificant. METHOD: We have developed two social robots - a conversational robot and a digital social robot for mobile devices - capable of communicating through natural language (powered by Amazon Alexa) and facial expressions, that ask PwD daily questions about their health and wellbeing and also provide digital assistant functionality. We record data comprising PwD's responses to daily questions, and audio speech and text of conversations with Alexa, to automatically monitor their health and wellbeing using machine learning. We followed user-centric development processes by conducting focus groups with 13 carers, 2 PwD and 5 clinicians to iterate the design. We are testing the social robot with 3 PwD in their homes for ten weeks. RESULT: We received positive feedback on the social robot from focus group participants. Ease of use, low maintenance, accessibility, assistance with medication, and support with health and wellbeing were identified as the key opportunities for social robots. Based on responses to a daily questionnaire, our robots generate a report detailing PwD wellbeing that is automatically sent via email to family members or carers. This information is also stored systematically in a database that can help clinicians monitor their patients remotely. We use natural language processing to analyse conversations and identify topics of interest to PwD such that robot behaviour could be adapted. We process speech using signal processing and machine lear
Caulcrick C, Huo W, Franco E, et al., 2021, Model predictive control for human-centred lower limb robotic assistance, IEEE Transactions on Medical Robotics and Bionics, Vol: 3, Pages: 980-991, ISSN: 2576-3202
Loss of mobility and/or balance resulting from neural trauma is a critical public health issue. Robotic exoskeletons hold great potential for rehabilitation and assisted movement. However, the synergy of robot operation with human effort remains a problem. In particular, optimal assist-as-needed (AAN) control remains unresolved given pathological variance among patients. We introduce a model predictive control (MPC) architecture for lower limb exoskeletons that achieves on-the-fly transitions between modes of assistance. The architecture implements a fuzzy logic algorithm (FLA) to map key modes of assistance based on human involvement. Three modes are utilised: passive, for human relaxed and robot dominant; active-assist, for human cooperation with the task; and safety, in the case of human resistance to the robot. Electromyography (EMG) signals are further employed to predict the human torque. EMG output is used by the MPC for trajectory following and by the FLA for decision making. Experimental validation using a 1-DOF knee exoskeleton demonstrates the controller tracking a sinusoidal trajectory with relaxed, assistive, and resistive operational modes. Results demonstrate rapid and appropriate transfers among the assistance modes, and satisfactory AAN performance in each case, offering a new level of human-robot synergy for mobility assist and rehabilitation.
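The fuzzy-logic mapping from human involvement to assistance mode can be caricatured with triangular membership functions over a single cooperation index; the index, the breakpoints, and the mode names below are illustrative stand-ins for the paper's tuned FLA, not its actual rules.

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def assistance_mode(cooperation):
    """Pick the dominant mode from fuzzy memberships over a cooperation
    index in [-1, 1], where negative means the human resists the robot.
    Breakpoints are illustrative, not the paper's calibrated values."""
    memberships = {
        "safety": tri(cooperation, -1.5, -1.0, 0.0),        # human resists
        "passive": tri(cooperation, -0.5, 0.0, 0.5),        # human relaxed
        "active-assist": tri(cooperation, 0.0, 1.0, 1.5),   # human cooperates
    }
    return max(memberships, key=memberships.get)
```

Because neighbouring memberships overlap, the index can drift across a boundary without abrupt flips, which is the "on-the-fly transition" behaviour the controller above aims for.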
Huo W, Caulcrick C, Hoult W, et al., 2021, Human joint torque modelling with MMG and EMG during lower limb human-exoskeleton interaction, IEEE Robotics and Automation Letters, Vol: 6, Pages: 7185-7192, ISSN: 2377-3766
Human-robot cooperation is vital for optimising powered assist of lower limb exoskeletons (LLEs). Robotic capacity to intelligently adapt to human force, however, demands a fusion of data from exoskeleton and user state for smooth human-robot synergy. Muscle activity, mapped through electromyography (EMG) or mechanomyography (MMG) is widely acknowledged as usable sensor input that precedes the onset of human joint torque. However, competing and complementary information between such physiological feedback is yet to be exploited, or even assessed, for predictive LLE control. We investigate complementary and competing benefits of EMG and MMG sensing modalities as a means of calculating human torque input for assist-as-needed (AAN) LLE control. Three biomechanically agnostic machine learning approaches, linear regression, polynomial regression, and neural networks, are implemented for joint torque prediction during human-exoskeleton interaction experiments. Results demonstrate MMG predicts human joint torque with slightly lower accuracy than EMG for isometric human-exoskeleton interaction. Performance is comparable for dynamic exercise. Neural network models achieve the best performance for both MMG and EMG (94.8 ± 0.7% with MMG and 97.6 ± 0.8% with EMG (Mean ± SD)) at the expense of training time and implementation complexity. This investigation represents the first MMG human joint torque models for LLEs and their first comparison with EMG. We provide our implementations for future investigations ( https://github.com/cic12/ieee_appx ).
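Of the three model families compared above, polynomial regression is the quickest to sketch; the fit below uses synthetic "MMG envelope to joint torque" data, so the relationship and the numbers carry no physiological meaning.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
activity = rng.uniform(0, 1, size=(200, 1))   # envelope of one MMG channel (made up)
torque = 3.0 * activity[:, 0] ** 2 + 0.05 * rng.normal(size=200)  # synthetic Nm

# degree-2 polynomial regression from muscle activity to joint torque
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(activity, torque)
r2 = model.score(activity, torque)            # coefficient of determination
```

In practice such a model would be fit per subject and per joint, with the predicted torque feeding the assist-as-needed controller as the human input term.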
Huo W, Moon H, Alouane MA, et al., 2021, Impedance modulation control of a lower limb exoskeleton to assist sit-to-stand movements, IEEE Transactions on Robotics, Vol: 38, Pages: 1230-1249, ISSN: 1552-3098
As an important movement of the activities of daily living, the sit-to-stand (STS) movement is often a difficult task facing elderly and dependent people. In this article, a novel impedance modulation strategy for a lower limb exoskeleton is proposed to provide appropriate power and balance assistance during STS movements while preserving the wearer's control priority. The impedance modulation control strategy ensures adaptation of the mechanical impedance of the human-exoskeleton system towards a desired one requiring less wearer effort while reinforcing the wearer's balance control ability during STS movements. A human joint torque observer is designed to estimate the joint torques developed by the wearer using joint position kinematics instead of electromyography (EMG) or force sensors; a time-varying desired impedance model is proposed according to the wearer's lower limb motion ability. A virtual environmental force is designed for the balance reinforcement control. Stability and robustness of the proposed method are theoretically analyzed. Simulations were implemented to illustrate the characteristics and performance of the proposed approach. Experiments with four healthy subjects were carried out to evaluate the effectiveness of the proposed method and show satisfactory results in terms of appropriate power assist and balance reinforcement.
Formstone L, Huo W, Wilson S, et al., 2021, Quantification of motor function post-stroke using wearable inertial and mechanomyographic sensors, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 1158-1167, ISSN: 1534-4320
Subjective clinical rating scales represent the gold-standard diagnosis of motor function following stroke; however, in practice they suffer from well-recognised limitations including variance between assessors, low inter-rater reliability and low resolution. Automated systems have been proposed for empirical quantification but have not significantly impacted clinical practice. We address translational challenges in this arena through: (1) implementation of a novel sensor suite fusing inertial measurement and mechanomyography (MMG) to quantify hand and wrist motor function; and (2) introduction of a new range of signal features extracted from the suite to supplement predicted clinical scores. The wearable sensors, signal features, and sensor fusion algorithms have been combined to produce classified ratings from the Fugl-Meyer clinical assessment rating scale. Furthermore, we have designed the system to augment clinical rating with several sensor-derived supplementary features encompassing critical aspects of motor dysfunction (e.g. joint angle, muscle activity, etc.). Performance is validated through a large-scale study on a post-stroke cohort of 64 patients. Fugl-Meyer Assessment tasks were classified with 75% accuracy for gross motor tasks and 62% for hand/wrist motor tasks. Of greater import, supplementary features demonstrated concurrent validity with Fugl-Meyer ratings, evidencing their utility as new measures of motor function suited to automated assessment. Finally, the supplementary features also provide continuous measures of sub-components of motor function, offering the potential to complement low accuracy but well-validated clinical rating scales when high-quality motor outcome measures are required. We believe this work provides a basis for widespread clinical adoption of inertial-MMG sensor use for post-stroke clinical motor assessment. Index Terms: Stroke, Fugl-Meyer assessment, automated upper-limb assessment, wearables, machine learning, mechanomyography
Russell F, Takeda Y, Kormushev P, et al., 2021, Stiffness modulation in a humanoid robotic leg and knee, IEEE Robotics and Automation Letters, Vol: 6, Pages: 2563-2570, ISSN: 2377-3766
Stiffness modulation in walking is critical to maintain static/dynamic stability as well as minimize energy consumption and impact damage. However, optimal, or even functional, stiffness parameterization remains unresolved in legged robotics. We introduce an architecture for stiffness control utilizing a bioinspired robotic limb consisting of a condylar knee joint and leg with antagonistic actuation. The joint replicates elastic ligaments of the human knee providing tuneable compliance for walking. It further locks out at maximum extension, providing stability when standing. Compliance and friction losses between joint surfaces are derived as a function of ligament stiffness and length. Experimental studies validate utility through quantification of: 1) hip perturbation response; 2) payload capacity; and 3) static stiffness of the leg mechanism. Results prove initiation and compliance at lock out can be modulated independently of friction loss by changing ligament elasticity. Furthermore, increasing co-contraction or decreasing joint angle enables increased leg stiffness, which establishes that co-contraction is counterbalanced by decreased payload. Findings have direct application in legged robots and transfemoral prosthetic knees, where biorobotic design could reduce energy expense while improving efficiency and stability. Future targeted impact involves increasing power/weight ratios in walking robots and artificial limbs for increased efficiency and precision in walking control.
Raposo de Lima M, Wairagkar M, Natarajan N, et al., 2021, Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement, Frontiers in Robotics and AI, Vol: 8, ISSN: 2296-9144
COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting stark gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real-time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates overall enthusiastic and engaging reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to sup
Wattanasiri P, Wilson S, Huo W, et al., 2021, Adaptive Mechanomyogram Hand Gesture Recognition in Online and Repeatable Environment, 17th IEEE International Conference on Automation Science and Engineering (CASE), Publisher: IEEE, Pages: 2315-2321, ISSN: 2161-8070
Mancero Castillo C, Wilson S, Vaidyanathan R, et al., 2021, Wearable MMG-plus-one armband: evaluation of normal force on mechanomyography (MMG) to enhance human-machine interfacing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 196-205, ISSN: 1534-4320
In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of normal pressure at the sensor location. Utilizing this novel approach, increased MMG signal resolution is enabled by a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with the normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and hypothetically allows interfacing with deeper muscles, thus enhancing the discriminative information content of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm and placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through a Neighborhood Component Analysis (NCA) technique to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force level between the MMG sensor and the skin can improve the discriminative power of the classifier, and the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in sockets for prosthetic limb control and HMI.
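A minimal sketch of the NCA-plus-linear-SVM pipeline described above, using scikit-learn's `NeighborhoodComponentsAnalysis` as a dimensionality-reduction stand-in for the feature selection step. The synthetic data and every dimension here (60 features, 4 gesture classes) are assumptions, not the study's 852-feature MMG space.

```python
from sklearn.datasets import make_classification
from sklearn.neighbors import NeighborhoodComponentsAnalysis
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for spectrotemporal MMG features across gesture classes.
X, y = make_classification(n_samples=300, n_features=60, n_informative=10,
                           n_classes=4, n_clusters_per_class=1, random_state=0)

# Learn a low-dimensional discriminative subspace with NCA, then classify
# gestures with a linear SVM, mirroring the pipeline in the abstract.
clf = make_pipeline(
    NeighborhoodComponentsAnalysis(n_components=8, random_state=0),
    LinearSVC(dual=False),
)
acc = cross_val_score(clf, X, y, cv=3).mean()
print(round(acc, 3))
```

Note that scikit-learn's NCA learns a linear projection rather than selecting individual features; a feature-ranking variant of NCA, as the abstract implies, would keep a subset of the original 852 features instead.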
Mashrur FR, Miya MTI, Rawnaque FS, et al., 2021, MarketBrain: An EEG based intelligent consumer preference prediction system, 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Pages: 808-811, ISSN: 1557-170X
Ghosh AK, Balasubramanian S, Devasahayam S, et al., 2020, Detection and Analysis of Fetal Movements Using an Acoustic Sensor-based Wearable Monitor, Pages: 512-516
Monitoring of fetal movements (FM) is considered an important part of fetal well-being assessment due to its association with several fetal health conditions, e.g. fetal distress, fetal growth restriction, hypoxia, etc. However, the current standard methods of FM quantification, e.g. ultrasonography, MRI, and cardiotocography, are limited to clinical environments. In this paper, we evaluate the performance of a cheap, wearable, acoustic sensor-based FM monitor that can be used by pregnant women at home. For data analysis, we develop a thresholding-based signal processing algorithm that fuses outputs from all the sensors to detect FM automatically. Obtained results demonstrate the promising performance of the system with a sensitivity, specificity, and accuracy of 83.3%, 87.8%, and 87.1%, respectively, relative to the maternal sensation of FM. Finally, a spike-like morphology of acoustic signals corresponding to true detected movements is found in the time-frequency domain through spectrogram analysis, which is expected to be useful for developing a more advanced signal processing algorithm to further improve detection accuracy.
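The sensor-fusing thresholding idea can be sketched as follows. The 3-sigma per-channel threshold and the OR-fusion rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def detect_fm(signals, k=3.0):
    """Threshold each sensor channel at k standard deviations above its mean
    and fuse detections across channels with a logical OR. Both k and the
    fusion rule are illustrative assumptions."""
    signals = np.asarray(signals, dtype=float)
    thresholds = (signals.mean(axis=1, keepdims=True)
                  + k * signals.std(axis=1, keepdims=True))
    per_sensor = signals > thresholds   # boolean detections per channel
    return per_sensor.any(axis=0)       # OR-fusion across sensors

rng = np.random.default_rng(1)
x = rng.normal(0, 1, size=(3, 1000))    # 3 acoustic channels, noise only
x[1, 500] += 10.0                       # inject one spike-like "kick"
events = detect_fm(x)
print(bool(events[500]))                # the injected kick is detected
```

A real implementation would additionally band-pass filter each channel and merge adjacent supra-threshold samples into movement episodes before scoring against maternal sensation.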
Gardner M, Mancero Castillo C, Wilson S, et al., 2020, A multimodal intention detection sensor suite for shared autonomy of upper-limb robotic prostheses, Sensors, Vol: 20, ISSN: 1424-8220
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load necessary for the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multi-modal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied and 1 amputee participants for grasping common household objects with a robotic hand. Real-time grasp classification using visual and motion features reached accuracies of 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, lid, and box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance using a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems due to the intuitive control design.
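A hedged sketch of the multimodal fusion idea: concatenate per-modality feature vectors (early fusion) and train a single grasp classifier. The feature dimensions, the random-forest classifier, and the synthetic data are all assumptions, not the study's 84-feature pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 240
# Hypothetical per-trial features for the three modalities in the abstract.
mmg = rng.normal(size=(n, 8))       # muscle-activation features
vision = rng.normal(size=(n, 4))    # object-recognition scores
imu = rng.normal(size=(n, 6))       # reach-trajectory features
labels = rng.integers(0, 3, n)      # grasp classes: bottle / lid / box
# Make vision weakly informative about the label so the example can learn.
vision[np.arange(n), labels] += 2.0

X = np.hstack([mmg, vision, imu])   # early fusion: concatenate modalities
Xtr, Xte, ytr, yte = train_test_split(X, labels, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
acc = clf.score(Xte, yte)
print(round(acc, 2))
```

Late fusion (one classifier per modality, combined by vote or confidence) is a common alternative when modalities arrive at different rates, as camera frames and MMG windows typically do.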
Ghosh AK, Burniston SF, Krentzel D, et al., 2020, A novel fetal movement simulator for the performance evaluation of vibration sensors for wearable fetal movement monitors, Sensors, Vol: 20, ISSN: 1424-8220
Fetal movements (FM) are an important factor in the assessment of fetal health. However, there is currently no reliable way to monitor FM outside clinical environs. While extensive research has been carried out using accelerometer-based systems to monitor FM, the desired accuracy of detection is yet to be achieved. A major challenge has been the difficulty of testing and calibrating sensors at the pre-clinical stage. Little is known about fetal movement features, and clinical trials involving pregnant women can be expensive and ethically stringent. To address these issues, we introduce a novel FM simulator, which can be used to test responses of sensor arrays in a laboratory environment. The design uses a silicon-based membrane with material properties similar to that of a gravid abdomen to mimic the vibrations due to fetal kicks. The simulator incorporates mechanisms to pre-stretch the membrane and to produce kicks similar to that of a fetus. As a case study, we present results from a comparative study of an acoustic sensor, an accelerometer, and a piezoelectric diaphragm as candidate vibration sensors for a wearable FM monitor. We find that the acoustic sensor and the piezoelectric diaphragm are better equipped than the accelerometer to determine durations, intensities, and locations of kicks, as they have a significantly greater response to changes in these conditions than the accelerometer. Additionally, we demonstrate that the acoustic sensor and the piezoelectric diaphragm can detect weaker fetal movements (threshold wall displacements are less than 0.5 mm) compared to the accelerometer (threshold wall displacement is 1.5 mm) with a trade-off of higher power signal artefacts. Finally, we find that the piezoelectric diaphragm produces better signal-to-noise ratios compared to the other two sensors in most of the cases, making it a promising new candidate sensor for wearable FM monitors. We believe that the FM simulator represents a key development towards enabl
Sajal MSR, Ehsan MT, Vaidyanathan R, et al., 2020, Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis, Brain Informatics, Vol: 7, ISSN: 2198-4018
BACKGROUND: With the growing number of the aged population, the number of people affected by Parkinson's disease (PD) is also mounting. Unfortunately, due to insufficient resources and awareness in underdeveloped countries, proper and timely PD detection is highly challenging. Besides, PD patients' symptoms are neither all the same, nor do they all become pronounced at the same stage of the illness. Therefore, this work aims to combine more than one symptom (rest tremor and voice degradation) by collecting data remotely using smartphones and to detect PD with the help of a cloud-based machine learning system for telemonitoring PD patients in developing countries. METHOD: This proposed system receives rest tremor and vowel phonation data acquired by smartphones with built-in accelerometer and voice recorder sensors. The data are primarily collected from diagnosed PD patients and healthy people for building and optimizing machine learning models that exhibit higher performance. After that, data from newly suspected PD patients are collected, and the trained algorithms are evaluated to detect PD. Based on the majority vote from those algorithms, PD-detected patients are connected with a nearby neurologist for consultation. Upon receiving patients' feedback after being diagnosed by the neurologist, the system may update the model by retraining using the latest data. Also, the system periodically requests the detected patients to upload new data to track their disease progress. RESULT: The highest accuracy in PD detection using offline data was [Formula: see text] from voice data and [Formula: see text] from tremor data when used separately. In both cases, k-nearest neighbors (kNN) gave the highest accuracy over support vector machine (SVM) and naive Bayes (NB). The application of the maximum relevance minimum redundancy (MRMR) feature selection method showed that by selecting different feature sets based on the patient's gender, we could improve the detection accuracy. This st
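The majority-vote step over kNN, SVM, and naive Bayes can be sketched with scikit-learn's `VotingClassifier` on synthetic stand-in features; the data, dimensions, and default hyperparameters below are assumptions, not the study's tuned models.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

# Synthetic stand-in for tremor/voice feature vectors (PD vs healthy).
X, y = make_classification(n_samples=400, n_features=20, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=0)

# Hard majority vote over the three classifiers named in the abstract.
vote = VotingClassifier([('knn', KNeighborsClassifier()),
                         ('svm', SVC()),
                         ('nb', GaussianNB())], voting='hard')
vote.fit(Xtr, ytr)
acc = vote.score(Xte, yte)
print(round(acc, 2))
```

In the described system, a patient flagged PD-positive by the majority vote would then be referred to a neurologist, and confirmed diagnoses fed back as new training labels.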
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.