Huo W, Caulcrick C, Hoult W, et al., 2021, Human joint torque modelling with MMG and EMG during lower limb human-exoskeleton interaction, IEEE Robotics and Automation Letters, Vol: 6, Pages: 7185-7192, ISSN: 2377-3766
Human-robot cooperation is vital for optimising powered assist of lower limb exoskeletons (LLEs). Robotic capacity to intelligently adapt to human force, however, demands a fusion of data from exoskeleton and user state for smooth human-robot synergy. Muscle activity, mapped through electromyography (EMG) or mechanomyography (MMG) is widely acknowledged as usable sensor input that precedes the onset of human joint torque. However, competing and complementary information between such physiological feedback is yet to be exploited, or even assessed, for predictive LLE control. We investigate complementary and competing benefits of EMG and MMG sensing modalities as a means of calculating human torque input for assist-as-needed (AAN) LLE control. Three biomechanically agnostic machine learning approaches, linear regression, polynomial regression, and neural networks, are implemented for joint torque prediction during human-exoskeleton interaction experiments. Results demonstrate MMG predicts human joint torque with slightly lower accuracy than EMG for isometric human-exoskeleton interaction. Performance is comparable for dynamic exercise. Neural network models achieve the best performance for both MMG and EMG (94.8 ± 0.7% with MMG and 97.6 ± 0.8% with EMG (Mean ± SD)) at the expense of training time and implementation complexity. This investigation represents the first MMG human joint torque models for LLEs and their first comparison with EMG. We provide our implementations for future investigations ( https://github.com/cic12/ieee_appx ).
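Of the three biomechanically agnostic model classes compared above, polynomial regression is the simplest to illustrate. The following is a hedged sketch, not the paper's implementation: the muscle-activity feature, the torque relationship, and the degree-2 model are all invented for illustration, fit by ordinary least squares on synthetic noiseless data.

```python
import numpy as np

# Illustrative sketch only: fit a degree-2 polynomial map from a normalised
# muscle-activity feature (e.g. an EMG/MMG envelope) to joint torque.
# Data and coefficients are synthetic, not from the paper.
rng = np.random.default_rng(0)
activation = rng.uniform(0.0, 1.0, 200)            # hypothetical muscle activity
torque = 5.0 * activation + 2.0 * activation**2    # hypothetical ground truth

# Ordinary least squares on a polynomial design matrix [1, x, x^2].
X = np.column_stack([np.ones_like(activation), activation, activation**2])
coeffs, *_ = np.linalg.lstsq(X, torque, rcond=None)

predicted = X @ coeffs
rmse = float(np.sqrt(np.mean((predicted - torque) ** 2)))
```

On noiseless synthetic data the fit is essentially exact; a neural network model, as the abstract notes, trades this simplicity for higher accuracy on real dynamic data.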
Lima MR, Wairagkar M, Gupta M, et al., 2021, Conversational affective social robots for ageing and dementia support, IEEE Transactions on Cognitive and Developmental Systems, ISSN: 2379-8920
Socially assistive robots (SAR) hold significant potential to assist older adults and people with dementia in human engagement and clinical contexts by supporting mental health and independence at home. While SAR research has recently experienced prolific growth, long-term trust, clinical translation and patient benefit remain immature. Affective human-robot interactions are unresolved and the deployment of robots with conversational abilities is fundamental for robustness and human-robot engagement. In this paper, we review the state of the art within the past two decades, design trends, and current applications of conversational affective SAR for ageing and dementia support. A horizon scanning of AI voice technology for healthcare, including ubiquitous smart speakers, is further introduced to address current gaps inhibiting home use. We discuss the role of user-centred approaches in the design of voice systems, including the capacity to handle communication breakdowns for effective use by target populations. We summarise the state of development in interactions using speech and natural language processing, which forms a baseline for longitudinal health monitoring and cognitive assessment. Drawing from this foundation, we identify open challenges and propose future directions to advance conversational affective social robots for: 1) user engagement, 2) deployment in real-world settings, and 3) clinical translation.
Wattanasiri P, Wilson S, Huo W, et al., 2021, Adaptive Mechanomyogram Hand Gesture Recognition in Online and Repeatable Environment, Pages: 2315-2321, ISSN: 2161-8070
We introduce a complete architecture for real-time hand gesture recognition for human-computer interface (HCI) and robotic control. The system addresses ease of use, calibration, and robustness issues which have inhibited gesture recognition wearables in the field. Our system is packaged as a generic (non-customized) arm wearable that integrates: 1) a novel mechanomyogram (MMG) sensing suite; 2) an integrated inertial measurement unit (IMU); 3) accompanying data acquisition and transmission hardware; and 4) real-time signal recognition algorithms to run on the receiving peripheral (e.g. computer, robot, etc.). We implement a rapid training routine capable of grasp pattern identification from small samples (20 per gesture) with less than 5-minute calibration time, which yields immediate real-time accuracies of 84% in amputees (3 gestures) and 89% in non-amputees (5 gestures), with the capacity to scale as users become more comfortable (accurate) with generated gestures. In repeated (5-day) usage with regular donning and doffing of the armband, 89%-91% accuracy is achieved with non-amputees using data over the previous days for reparameterization. Findings demonstrate the capacity to adapt to new able-bodied and amputee subjects with a generic armband and small training datasets, adapt as user proficiency increases, and provide consistent prediction for regular long-term use.
Wairagkar M, Lima MR, Bazo D, et al., 2021, Emotive response to a hybrid-face robot and translation to consumer social robots, IEEE Internet of Things Journal, ISSN: 2327-4662
We present the conceptual formulation, design, fabrication, control and commercial translation of an IoT enabled social robot as mapped through validation of human emotional response to its affective interactions. The robot design centres on a humanoid hybrid-face that integrates a rigid faceplate with a digital display to simplify conveyance of complex facial movements while providing the impression of three-dimensional depth. We map the emotions of the robot to specific facial feature parameters, characterise recognisability of archetypical facial expressions, and introduce pupil dilation as an additional degree of freedom for emotion conveyance. Human interaction experiments demonstrate the ability to effectively convey emotion from the hybrid-robot face to humans. Conveyance is quantified by studying neurophysiological electroencephalography (EEG) response to perceived emotional information as well as through qualitative interviews. Results demonstrate core hybrid-face robotic expressions can be discriminated by humans (80%+ recognition) and invoke face-sensitive neurophysiological event-related potentials such as N170 and Vertex Positive Potentials in EEG. The hybrid-face robot concept has been modified, implemented, and released by Emotix Inc in the commercial IoT robotic platform Miko (‘My Companion’), an affective robot currently in use for human-robot interaction with children. We demonstrate that human EEG responses to Miko emotions are comparable to those of the hybrid-face robot, validating design modifications implemented for large-scale distribution. Finally, interviews show above 90% expression recognition rates in our commercial robot. We conclude that simplified hybrid-face abstraction conveys emotions effectively and enhances human-robot interaction.
Caulcrick C, Huo W, Franco E, et al., 2021, Model Predictive Control for Human-Centred Lower Limb Robotic Assistance, IEEE Robotics and Automation Letters, ISSN: 2377-3766
Loss of mobility or balance resulting from neural trauma is a critical consideration in public health. Robotic exoskeletons hold great potential for rehabilitation and assisted movement, yet optimal assist-as-needed (AAN) control remains unresolved given pathological variance among patients. We introduce a model predictive control (MPC) architecture for lower limb exoskeletons centred around a fuzzy logic algorithm (FLA) identifying modes of assistance based on human involvement. Assistance modes are: 1) passive for human relaxed and robot dominant, 2) active-assist for human cooperation with the task, and 3) safety in the case of human resistance to the robot. Human torque is estimated from electromyography (EMG) signals prior to joint motions, enabling advanced prediction of torque by the MPC and selection of assistance mode by the FLA. The controller is demonstrated in hardware with three subjects on a 1-DOF knee exoskeleton tracking a sinusoidal trajectory with human relaxed, assistive, and resistive. Experimental results show quick and appropriate transfers among the assistance modes and satisfactory assistive performance in each mode. Results illustrate an objective approach to lower limb robotic assistance through on-the-fly transition between modes of movement, providing a new level of human-robot synergy for mobility assist and rehabilitation.
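The three-mode selection logic described in the abstract can be caricatured with a crisp (non-fuzzy) rule. This is a loose sketch only: the function name, thresholds, and torque-sign convention are invented here, and the paper's FLA uses fuzzy membership functions rather than hard cutoffs.

```python
# Hypothetical mode selector inspired by the abstract's three assistance modes.
# A negative product of human and desired torque is read as resistance.
def assistance_mode(human_torque, desired_torque):
    if human_torque * desired_torque < 0:           # human opposes the motion
        return "safety"
    if abs(human_torque) < 0.1 * abs(desired_torque):  # human relaxed
        return "passive"
    return "active-assist"                           # human cooperating

modes = [assistance_mode(h, 5.0) for h in (0.0, 2.0, -2.0)]
```

A fuzzy version would replace the two hard thresholds with overlapping membership functions so transitions between modes are smooth rather than abrupt.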
Huo W, Moon H, Alouane MA, et al., 2021, Impedance modulation control of a lower limb exoskeleton to assist sit-to-stand movements, IEEE Transactions on Robotics, ISSN: 1552-3098
As an important activity of daily living, the sit-to-stand (STS) movement is often a difficult task facing elderly and dependent people. In this article, a novel impedance modulation strategy of a lower limb exoskeleton is proposed to provide appropriate power and balance assistance during STS movements while preserving the wearer's control priority. The impedance modulation control strategy ensures adaptation of the mechanical impedance of the human-exoskeleton system towards a desired one requiring less wearer's effort while reinforcing the wearer's balance control ability during STS movements. A human joint torque observer is designed to estimate the joint torques developed by the wearer using joint position kinematics instead of electromyography (EMG) or force sensors; a time-varying desired impedance model is proposed according to the wearer's lower limb motion ability. A virtual environmental force is designed for the balance reinforcement control. Stability and robustness of the proposed method are theoretically analyzed. Simulations were implemented to illustrate the characteristics and performance of the proposed approach. Experiments with four healthy subjects were carried out to evaluate the effectiveness of the proposed method and show satisfactory results in terms of appropriate power assist and balance reinforcement.
Formstone L, Huo W, Wilson S, et al., 2021, Quantification of motor function post-stroke using wearable inertial and mechanomyographic sensors, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 1158-1167, ISSN: 1534-4320
Subjective clinical rating scales represent the gold-standard diagnosis of motor function following stroke; however, in practice they suffer from well-recognised limitations including variance between assessors, low inter-rater reliability and low resolution. Automated systems have been proposed for empirical quantification but have yet to significantly impact clinical practice. We address translational challenges in this arena through: (1) implementation of a novel sensor suite fusing inertial measurement and mechanomyography (MMG) to quantify hand and wrist motor function; and (2) introduction of a new range of signal features extracted from the suite to supplement predicted clinical scores. The wearable sensors, signal features, and sensor fusion algorithms have been combined to produce classified ratings from the Fugl-Meyer clinical assessment rating scale. Furthermore, we have designed the system to augment clinical rating with several sensor-derived supplementary features encompassing critical aspects of motor dysfunction (e.g. joint angle, muscle activity, etc.). Performance is validated through a large-scale study on a post-stroke cohort of 64 patients. Fugl-Meyer Assessment tasks were classified with 75% accuracy for gross motor tasks and 62% for hand/wrist motor tasks. Of greater import, supplementary features demonstrated concurrent validity with Fugl-Meyer ratings, evidencing their utility as new measures of motor function suited to automated assessment. Finally, the supplementary features also provide continuous measures of sub-components of motor function, offering the potential to complement low accuracy but well-validated clinical rating scales when high-quality motor outcome measures are required. We believe this work provides a basis for widespread clinical adoption of inertial-MMG sensor use for post-stroke clinical motor assessment. Index Terms: stroke, Fugl-Meyer assessment, automated upper-limb assessment, wearables, machine learning, mechanomyography
Natarajan N, Vaitheswaran S, Raposo de Lima M, et al., 2021, Acceptability of social robots and adaptation of hybrid-face robot for dementia care in India: a qualitative study, American Journal of Geriatric Psychiatry, ISSN: 1064-7481
Objectives: This study aims to understand the acceptability of social robots and the adaptation of the Hybrid-Face Robot for dementia care in India. Methods: We conducted a focus group discussion and in-depth interviews with persons with dementia (PwD), their caregivers, professionals in the field of dementia, and technical experts in robotics to collect qualitative data. Results: This study explored the following themes: Acceptability of Robots in Dementia Care in India, Adaptation of Hybrid-Face Robot, and Future of Robots in Dementia Care. Caregivers and PwD were open to the idea of social robot use in dementia care; caregivers perceived it to help with the challenges of caregiving and positively viewed a future with robots. Discussion: This study is the first of its kind to explore the use of social robots in dementia care in India by highlighting user needs and requirements that determine acceptability and guiding adaptation.
Russell F, Takeda Y, Kormushev P, et al., 2021, Stiffness modulation in a humanoid robotic leg and knee, IEEE Robotics and Automation Letters, Vol: 6, Pages: 2563-2570, ISSN: 2377-3766
Stiffness modulation in walking is critical to maintain static/dynamic stability as well as minimize energy consumption and impact damage. However, optimal, or even functional, stiffness parameterization remains unresolved in legged robotics. We introduce an architecture for stiffness control utilizing a bioinspired robotic limb consisting of a condylar knee joint and leg with antagonistic actuation. The joint replicates elastic ligaments of the human knee providing tuneable compliance for walking. It further locks out at maximum extension, providing stability when standing. Compliance and friction losses between joint surfaces are derived as a function of ligament stiffness and length. Experimental studies validate utility through quantification of: 1) hip perturbation response; 2) payload capacity; and 3) static stiffness of the leg mechanism. Results prove initiation and compliance at lock out can be modulated independently of friction loss by changing ligament elasticity. Furthermore, increasing co-contraction or decreasing joint angle enables increased leg stiffness, which establishes that co-contraction is counterbalanced by decreased payload. Findings have direct application in legged robots and transfemoral prosthetic knees, where biorobotic design could reduce energy expense while improving efficiency and stability. Future targeted impact involves increasing power/weight ratios in walking robots and artificial limbs for increased efficiency and precision in walking control.
Raposo de Lima M, Wairagkar M, Natarajan N, et al., 2021, Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement, Frontiers in Robotics and AI, Vol: 8, ISSN: 2296-9144
COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed to improve engagement in human-robot interaction and ultimately facilitate mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real-time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates overall enthusiastic and engaging reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to sup
Mancero Castillo C, Wilson S, Vaidyanathan R, et al., 2021, Wearable MMG-plus-one armband: evaluation of normal force on mechanomyography (MMG) to enhance human-machine interfacing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 196-205, ISSN: 1534-4320
In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of normal pressure at the sensor location. Utilizing this novel approach, increased MMG signal resolution is enabled by a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with the normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and hypothetically allows interfacing with deeper muscles, thus enhancing the discriminative information context of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm and placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through a Neighborhood Component Analysis (NCA) technique to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force level between the MMG sensor and the skin can improve the discriminative power of the classifier, and the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in sockets for prosthetic limb control and HMI.
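The feature-selection-then-classify pipeline described above can be sketched on toy data. This is only an analogy, not the paper's method: NCA is replaced by a simple between/within-class variance ranking and the SVM by a nearest-centroid rule, and all data here is synthetic.

```python
import numpy as np

# Toy two-gesture dataset: feature 0 is informative, feature 1 is pure noise.
rng = np.random.default_rng(1)
n = 100
X0 = np.column_stack([rng.normal(0.0, 0.2, n), rng.normal(0.0, 1.0, n)])
X1 = np.column_stack([rng.normal(1.0, 0.2, n), rng.normal(0.0, 1.0, n)])
X = np.vstack([X0, X1])
y = np.array([0] * n + [1] * n)

# Rank features by between-class separation over within-class spread
# (a crude stand-in for NCA's learned feature weighting).
m0, m1 = X[y == 0].mean(0), X[y == 1].mean(0)
score = (m0 - m1) ** 2 / (X[y == 0].var(0) + X[y == 1].var(0))
best = int(np.argmax(score))

# Nearest-centroid classification on the selected feature
# (a crude stand-in for the linear SVM).
pred = (np.abs(X[:, best] - m1[best]) < np.abs(X[:, best] - m0[best])).astype(int)
accuracy = float((pred == y).mean())
```

The informative feature is selected and the toy classes separate almost perfectly; the point of the real pipeline is that higher sensor-skin normal force enlarges exactly this kind of class separation.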
Ghosh AK, Balasubramanian S, Devasahayam S, et al., 2020, Detection and Analysis of Fetal Movements Using an Acoustic Sensor-based Wearable Monitor, Pages: 512-516
Monitoring of fetal movements (FM) is considered an important part of fetal well-being assessment due to its association with several fetal health conditions, e.g. fetal distress, fetal growth restriction, hypoxia, etc. However, the current standard methods of FM quantification, e.g. ultrasonography, MRI, and cardiotocography, are limited to their use in clinical environments. In this paper, we evaluate the performance of an acoustic sensor-based, cheap, wearable FM monitor that can be used by pregnant women at home. For data analysis, we develop a thresholding-based signal processing algorithm that fuses outputs from all the sensors to detect FM automatically. Obtained results demonstrate the promising performance of the system with a sensitivity, specificity, and accuracy of 83.3%, 87.8%, and 87.1%, respectively, relative to the maternal sensation of FM. Finally, a spike-like morphology of acoustic signals corresponding to true detected movements is found in the time-frequency domain through spectrogram analysis, which is expected to be useful for developing a more advanced signal processing algorithm to further improve the accuracy of detection.
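The threshold-and-fuse idea in the abstract can be illustrated with a minimal sketch. Everything here is invented for illustration: per-channel thresholds, OR-style fusion, and the signal values are placeholders, not the paper's calibrated algorithm.

```python
# Hypothetical multi-sensor threshold detector: each channel flags samples
# above its own threshold; a movement is declared when any channel fires.
def detect_movements(channels, thresholds):
    n = len(channels[0])
    return [any(abs(ch[i]) > th for ch, th in zip(channels, thresholds))
            for i in range(n)]

# Placeholder signals from two sensors over four samples.
acoustic = [0.1, 0.9, 0.2, 0.05]
piezo = [0.0, 0.8, 0.6, 0.1]
flags = detect_movements([acoustic, piezo], [0.5, 0.5])  # [False, True, True, False]
```

A real detector would additionally enforce minimum event durations and refractory windows before comparing detections against maternal sensation.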
Gardner M, Mancero Castillo C, Wilson S, et al., 2020, A multimodal intention detection sensor suite for shared autonomy of upper-limb robotic prostheses, Sensors, Vol: 20, ISSN: 1424-8220
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load necessary for the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multi-modal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted on 10 able-bodied and 1 amputee participants for grasping common household objects with a robotic hand. Real-time grasp classification accuracy using visual and motion features reached 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, lid, and box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance using a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems due to the intuitive control design.
Ghosh AK, Burniston SF, Krentzel D, et al., 2020, A novel fetal movement simulator for the performance evaluation of vibration Sensors for wearable fetal movement monitors, Sensors, Vol: 20, ISSN: 1424-8220
Fetal movements (FM) are an important factor in the assessment of fetal health. However, there is currently no reliable way to monitor FM outside clinical environs. While extensive research has been carried out using accelerometer-based systems to monitor FM, the desired accuracy of detection is yet to be achieved. A major challenge has been the difficulty of testing and calibrating sensors at the pre-clinical stage. Little is known about fetal movement features, and clinical trials involving pregnant women can be expensive and ethically stringent. To address these issues, we introduce a novel FM simulator, which can be used to test responses of sensor arrays in a laboratory environment. The design uses a silicon-based membrane with material properties similar to that of a gravid abdomen to mimic the vibrations due to fetal kicks. The simulator incorporates mechanisms to pre-stretch the membrane and to produce kicks similar to that of a fetus. As a case study, we present results from a comparative study of an acoustic sensor, an accelerometer, and a piezoelectric diaphragm as candidate vibration sensors for a wearable FM monitor. We find that the acoustic sensor and the piezoelectric diaphragm are better equipped than the accelerometer to determine durations, intensities, and locations of kicks, as they have a significantly greater response to changes in these conditions than the accelerometer. Additionally, we demonstrate that the acoustic sensor and the piezoelectric diaphragm can detect weaker fetal movements (threshold wall displacements are less than 0.5 mm) compared to the accelerometer (threshold wall displacement is 1.5 mm) with a trade-off of higher power signal artefacts. Finally, we find that the piezoelectric diaphragm produces better signal-to-noise ratios compared to the other two sensors in most of the cases, making it a promising new candidate sensor for wearable FM monitors. We believe that the FM simulator represents a key development towards enabl
Sajal MSR, Ehsan MT, Vaidyanathan R, et al., 2020, Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis, Brain Informatics, Vol: 7, ISSN: 2198-4018
BACKGROUND: With the growing number of the aged population, the number of Parkinson's disease (PD) affected people is also mounting. Unfortunately, due to insufficient resources and awareness in underdeveloped countries, proper and timely PD detection is highly challenged. Besides, all PD patients' symptoms are neither the same, nor do they all become pronounced at the same stage of the illness. Therefore, this work aims to combine more than one symptom (rest tremor and voice degradation) by collecting data remotely using smartphones and to detect PD with the help of a cloud-based machine learning system for telemonitoring PD patients in developing countries. METHOD: This proposed system receives rest tremor and vowel phonation data acquired by smartphones with built-in accelerometer and voice recorder sensors. The data are primarily collected from diagnosed PD patients and healthy people for building and optimizing machine learning models that exhibit higher performance. After that, data from newly suspected PD patients are collected, and the trained algorithms are evaluated to detect PD. Based on the majority vote from those algorithms, PD-detected patients are connected with a nearby neurologist for consultation. Upon receiving patients' feedback after being diagnosed by the neurologist, the system may update the model by retraining using the latest data. Also, the system requests the detected patients periodically to upload new data to track their disease progress. RESULT: The highest accuracy in PD detection using offline data was [Formula: see text] from voice data and [Formula: see text] from tremor data when used separately. In both cases, k-nearest neighbors (kNN) gave the highest accuracy over support vector machine (SVM) and naive Bayes (NB). The application of the maximum relevance minimum redundancy (MRMR) feature selection method showed that by selecting different feature sets based on the patient's gender, we could improve the detection accuracy. This st
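The majority-vote decision described above (each trained model casts a label, the most common one wins) reduces to a few lines. The function name and labels here are placeholders, not from the paper.

```python
from collections import Counter

# Hypothetical combiner for per-model predictions (e.g. from kNN, SVM, NB):
# the final call is the most frequent label among the individual classifiers.
def majority_vote(predictions):
    """Return the most common label among per-model predictions."""
    return Counter(predictions).most_common(1)[0][0]

decision = majority_vote(["PD", "PD", "healthy"])  # two of three models agree
```

With an odd number of voters a tie is impossible for binary labels, which is one reason three classifiers is a convenient ensemble size.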
Russell F, Kormushev P, Vaidyanathan R, et al., 2020, The impact of ACL laxity on a bicondylar robotic knee and implications in human joint biomechanics, IEEE Transactions on Biomedical Engineering, Vol: 67, Pages: 2817-2827, ISSN: 0018-9294
Objective: Elucidating the role of structural mechanisms in the knee can improve joint surgeries, rehabilitation, and understanding of biped locomotion. Identification of key features, however, is challenging due to limitations in simulation and in-vivo studies. In particular the coupling of the patello-femoral and tibio-femoral joints with ligaments and its impact on joint mechanics and movement is not understood. We investigate this coupling experimentally through the design and testing of a robotic sagittal plane model. Methods: We constructed a sagittal plane robot comprised of: 1) elastic links representing cruciate ligaments; 2) a bi-condylar joint; 3) a patella; and 4) actuator hamstrings and quadriceps. Stiffness and geometry were derived from anthropometric data. 10° - 110° squatting tests were executed at speeds of 0.1 - 0.25Hz over a range of anterior cruciate ligament (ACL) slack lengths. Results: Increasing ACL length compromised joint stability, yet did not impact quadriceps mechanical advantage and force required for squat. The trend was consistent through varying condyle contact point and ligament force changes. Conclusion: The geometry of the condyles allows the ratio of quadriceps to patella tendon force to compensate for contact point changes imparted by the removal of the ACL. Thus the system maintains a constant mechanical advantage. Significance: The investigation uncovers critical features of human knee biomechanics. Findings contribute to understanding of knee ligament damage, inform procedures for knee surgery and orthopaedic implant design, and support design of trans-femoral prosthetics and walking robots. Results further demonstrate the utility of robotics as a powerful means of studying human joint biomechanics.
Masen MA, Chung A, Dawczyk JU, et al., 2020, Evaluating lubricant performance to reduce COVID-19 PPE-related skin injury, PLoS One, Vol: 15, Pages: e0239363-e0239363, ISSN: 1932-6203
Background: Healthcare workers around the world are experiencing skin injury due to the extended use of personal protective equipment (PPE) during the COVID-19 pandemic. These injuries are the result of high shear stresses acting on the skin, caused by friction with the PPE. This study aims to provide a practical lubricating solution for frontline medical staff working 4+ hour shifts wearing PPE. Methods: A literature review into skin friction and skin lubrication was conducted to identify products and substances that can reduce friction. We evaluated the lubricating performance of commercially available products in vivo using a custom-built tribometer. Findings: Most lubricants provide a strong initial friction reduction, but only a few products provide lubrication that lasts for four hours. The response of skin to friction is a complex interplay between the lubricating properties and durability of the film deposited on the surface and the response of skin to the lubricating substance, which include epidermal absorption, occlusion, and water retention. Interpretation: Talcum powder, a petrolatum-lanolin mixture, and a coconut oil-cocoa butter-beeswax mixture showed excellent long-lasting low friction. Moisturising the skin results in excessive friction, and the use of products that are aimed at ‘moisturising without leaving a non-greasy feel’ should be avoided. Most investigated dressings also demonstrate excellent performance.
Rawnaque FS, Rahman KM, Anwar SF, et al., 2020, Technological advancements and opportunities in Neuromarketing: a systematic review, Brain Informatics, Vol: 7, ISSN: 2198-4018
Neuromarketing has become an academic and commercial area of interest, as the advancements in neural recording techniques and interpreting algorithms have made it an effective tool for recognizing the unspoken response of consumers to marketing stimuli. This article presents the first systematic review of technological advancements in the Neuromarketing field over the last 5 years. For this purpose, the authors selected and reviewed a total of 57 relevant publications from established databases which directly contribute to the Neuromarketing field with basic or empirical research findings. This review finds consumer goods to be the prevalent marketing stimulus, used in both product and promotion forms in the selected literature. A trend of analyzing frontal and prefrontal alpha band signals is observed among the consumer emotion recognition-based experiments, which corresponds to frontal alpha asymmetry theory. The use of electroencephalogram (EEG) is found favorable by many researchers over functional magnetic resonance imaging (fMRI) in video advertisement-based Neuromarketing experiments, apparently due to its low cost and high time resolution advantages. Physiological response measuring techniques such as eye tracking, skin conductance recording, heart rate monitoring, and facial mapping have also been found in these empirical studies, exclusively or in parallel with brain recordings. Alongside traditional filtering methods, independent component analysis (ICA) was found to be the most common method for artifact removal from neural signals. In consumer response prediction and classification, Artificial Neural Network (ANN), Support Vector Machine (SVM) and Linear Discriminant Analysis (LDA) performed with the highest average accuracy among the machine learning algorithms used in these studies. The authors hope this review will assist future researchers with vital information in the field of Neuromarketing for making novel contributions.
Morad S, Ulbricht C, Harkin P, et al., 2020, Surgical Robot Platform with a Novel Concentric Joint for Minimally Invasive Procedures, Journal of Medical Robotics Research, Vol: 05, Pages: 2050001-2050001, ISSN: 2424-905X
In this paper, a surgical robot platform with a novel concentric connector joint (CCJ) is presented. The surgical robot is a parallel robot platform comprised of multiple struts, arranged in a geometrically stable array and connected at their end points via the CCJ. The CCJ joints have near-perfect concentricity of rotation around the node point, which enables the tension and compression forces of the struts to be resolved in a structurally efficient manner. Preliminary feasibility tests, modeling and simulations are presented.
Madgwick SOH, Wilson S, Turk R, et al., 2020, An extended complementary filter (ECF) for full-body MARG orientation estimation, IEEE/ASME Transactions on Mechatronics, Vol: 25, Pages: 2054-2064, ISSN: 1083-4435
Inertial sensing suites now permeate all forms of smart automation, yet a plateau exists in real-world derivation of global orientation. Magnetic field fluctuations and inefficient sensor fusion still inhibit deployment. We introduce a new algorithm, an Extended Complementary Filter (ECF), to derive 3D rigid body orientation from inertial sensing suites, addressing these challenges. The ECF combines the computational efficiency of classic complementary filters with improved accuracy compared to popular optimization filters. We present a complete formulation of the algorithm, including an extension to address the challenge of orientation accuracy in the presence of fluctuating magnetic fields. Performance is tested under a variety of conditions and benchmarked against the commonly used gradient descent (GDA) inertial sensor fusion algorithm. Results demonstrate improved efficiency, with the ECF achieving convergence 30% faster than standard alternatives. We further demonstrate improved robustness to sources of magnetic interference in pitch and roll and to fast changes of orientation in the yaw direction. The ECF has been implemented at the core of a wearable rehabilitation system tracking the movement of stroke patients for home telehealth. The ECF and accompanying magnetic disturbance rejection algorithm enable previously unachievable real-time patient movement feedback in the form of a full virtual human (avatar), even in the presence of magnetic disturbance. Algorithm efficiency and accuracy have also spawned an entire commercial product line released by the company x-io. We believe the ECF and accompanying magnetic disturbance routines are key enablers for future widespread use of wearable systems with the capacity for global orientation tracking.
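The full ECF formulation is given in the paper above; as a hedged illustration of the underlying principle only, a minimal first-order complementary filter (not the authors' ECF) blends high-frequency gyroscope integration with a low-frequency absolute reference such as accelerometer tilt. All parameter values below are assumptions for the sketch:

```python
import numpy as np

def complementary_filter(gyro_rate, accel_angle, dt=0.01, alpha=0.98):
    """Generic first-order complementary filter (illustrative; not the ECF).

    gyro_rate:   angular rate samples (rad/s), trusted at high frequency
    accel_angle: tilt angles from the accelerometer (rad), trusted at low frequency
    alpha:       blend factor; values near 1 favour the gyroscope
    """
    angle = accel_angle[0]              # initialise from the absolute reference
    estimates = []
    for w, a in zip(gyro_rate, accel_angle):
        # integrate the gyro, then pull the estimate toward the accelerometer tilt
        angle = alpha * (angle + w * dt) + (1.0 - alpha) * a
        estimates.append(angle)
    return np.array(estimates)

# Constant true tilt of 0.5 rad; a biased gyro alone would drift without bound,
# but the accelerometer reference keeps the fused estimate bounded.
n = 2000
gyro = np.full(n, 0.05)                 # pure bias (rad/s)
accel = np.full(n, 0.5)                 # noiseless tilt reference (rad)
est = complementary_filter(gyro, accel)
```

Pure integration of the biased gyro would accumulate 1 rad of error over this run; the fused estimate stays within a few hundredths of a radian of the true tilt.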
Lai J, Nowlan NC, Vaidyanathan R, et al., 2020, The use of actograph in the assessment of fetal well-being, Journal of Maternal-Fetal and Neonatal Medicine, Vol: 33, Pages: 2116-2121, ISSN: 1476-4954
PURPOSE: Third trimester maternal perception of fetal movements is often used to assess fetal well-being. However, its true clinical value is unknown, primarily because of the variability in subjective quantification. The actograph, a technology available on most cardiotocograph machines, quantifies movements, but has never previously been investigated in relation to fetal health and existing monitoring devices. The objective of this study was to quantify actograph output in healthy third trimester pregnancies and investigate this in relation to other methods of assessing fetal well-being. METHODS: Forty-two women between 24 and 34 weeks of gestation underwent ultrasound scan followed by a computerized cardiotocograph (CTG). Post capture analysis of the actograph recording was performed and expressed as a percentage of activity over time. The actograph output results were analyzed in relation to Doppler, ultrasound and CTG findings expressed as z-score normalized for gestation. RESULTS: There was a significant association between actograph output recording and estimated fetal weight Z-score (R = 0.546, p ≤ .005). This activity was not related to estimated fetal weight. Increased actograph activity was negatively correlated with umbilical artery pulsatility index Z-score (R = -0.306, p = .049) and middle cerebral artery pulsatility index Z-score (R = -0.390, p = .011). CONCLUSION: Fetal movements assessed by the actograph are associated both with fetal size in relation to gestation and fetoplacental Doppler parameters. It is not the case that larger babies move more, however, as the relationship with actograph output related only to estimated fetal weight z-score. These findings suggest a plausible link between the frequency of fetal movements and established markers of fetal health.
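The gestation-normalized z-scores and correlations reported in this abstract can be sketched minimally as follows; the reference-chart mean and SD (`ref_mean`, `ref_sd`) are hypothetical placeholders, not values from the study:

```python
import numpy as np

def zscore_for_gestation(value, ref_mean, ref_sd):
    """Normalise a fetal measurement against gestation-specific reference values.

    ref_mean / ref_sd are hypothetical reference-chart values for the scan's
    gestational age; published growth charts would supply these in practice.
    """
    return (value - ref_mean) / ref_sd

def pearson_r(x, y):
    """Pearson correlation coefficient, of the kind used to relate actograph
    output (percentage activity over time) to the z-scored parameters."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))
```

For example, an estimated fetal weight of 2000 g against a hypothetical gestation-specific mean of 1800 g (SD 100 g) yields a z-score of 2.0.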
Huo W, Angeles P, Tai YF, et al., 2020, A heterogeneous sensing suite for multisymptom quantification of Parkinson’s disease, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 28, Pages: 1397-1406, ISSN: 1534-4320
Parkinson’s disease (PD) is the second most common neurodegenerative disease, affecting millions worldwide. Bespoke subject-specific treatment (medication or deep brain stimulation (DBS)) is critical for management, yet depends on precise assessment of the cardinal PD symptoms: bradykinesia, rigidity and tremor. Clinician diagnosis is the basis of treatment, yet it allows only a cross-sectional assessment of symptoms, which can vary on an hourly basis and is liable to inter- and intra-rater subjectivity across human examiners. Automated symptomatic assessment has attracted significant interest to optimise treatment regimens between clinician visits; however, no wearable has the capacity to simultaneously assess all three cardinal symptoms. Challenges in the measurement of rigidity, mapping muscle activity out-of-clinic and sensor fusion have inhibited translation. In this study, we address all three through a novel wearable sensor system and learning algorithms. The sensor system is composed of a force sensor, two inertial measurement units (IMUs) and four custom mechanomyography (MMG) sensors. The system was tested in its capacity to predict Unified Parkinson’s Disease Rating Scale (UPDRS) scores based on quantitative assessment of bradykinesia, rigidity and tremor in PD patients. Twenty-three PD patients were tested with the sensor system in parallel with exams conducted by treating clinicians, and 10 healthy subjects were recruited as a control group. Results show the system accurately predicts UPDRS scores for all symptoms (85.4% match on average with physician assessment) and discriminates between healthy subjects and PD patients (96.6% on average). MMG features can also be used for remote monitoring of severity and fluctuations in PD symptoms out-of-clinic. This closed-loop feedback system enables individually tailored and regularly updated treatment, facilitating better outcomes for a very large patient population.
Meagher C, Franco E, Turk R, et al., 2020, New advances in mechanomyography sensor technology and signal processing: validity and intrarater reliability of recordings from muscle, Journal of Rehabilitation and Assistive Technologies Engineering, Vol: 7, ISSN: 2055-6683
Introduction: The Mechanical Muscle Activity with Real-time Kinematics project aims to develop a device incorporating wearable sensors for arm rehabilitation following stroke. These will record kinematic activity using inertial measurement units and mechanical muscle activity. The gold standard for measuring muscle activity is electromyography; however, mechanomyography offers an appropriate alternative for our home-based rehabilitation device. We have filed a patent for a new laboratory-tested device that combines an inertial measurement unit with mechanomyography. We report on the validity and reliability of the mechanomyography against electromyography sensors. Methods: In 18 healthy adults (27–82 years), mechanomyography and electromyography recordings were taken from the forearm flexor and extensor muscles during voluntary contractions. Isometric contractions were performed at different percentages of maximal force to examine the validity of mechanomyography. Root-mean-square of mechanomyography and electromyography was measured during 1 s epochs of isometric flexion and extension. Dynamic contractions were recorded during a tracking task on two days, one week apart, to examine reliability of muscle onset timing. Results: Reliability of mechanomyography onset was high (intraclass correlation coefficient = 0.78) and was comparable with electromyography (intraclass correlation coefficient = 0.79). The correlation between force and mechanomyography was high (R2 = 0.94). Conclusion: The mechanomyography device records valid and reliable signals of mechanical muscle activity on different days.
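The 1 s epoch RMS analysis described in the Methods can be sketched as follows; the sampling rate and the omitted preprocessing (filtering, rectification) are assumptions, not details from the study:

```python
import numpy as np

def rms_epochs(signal, fs, epoch_s=1.0):
    """Root-mean-square of a muscle-activity signal over fixed-length epochs.

    A sketch of the analysis step only: RMS over 1 s windows. The study's
    exact preprocessing chain is not reproduced here.
    """
    samples = int(fs * epoch_s)
    n_epochs = len(signal) // samples
    trimmed = np.asarray(signal[: n_epochs * samples], float)
    windows = trimmed.reshape(n_epochs, samples)   # one row per epoch
    return np.sqrt((windows ** 2).mean(axis=1))

# Example: a 2 s, 10 Hz sinusoid of amplitude sqrt(2) sampled at 1 kHz
# has an RMS of exactly 1.0 in each 1 s epoch.
fs = 1000
sig = np.sqrt(2.0) * np.sin(2 * np.pi * 10 * np.arange(2 * fs) / fs)
epoch_rms = rms_epochs(sig, fs)
```

A sinusoid's RMS equals its amplitude divided by sqrt(2), which gives a quick sanity check when validating a sensor pipeline of this kind.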
Hopkins M, Vaidyanathan R, McGregor AH, 2020, Examination of the performance characteristics of velostat as an in-socket pressure sensor, IEEE Sensors Journal, Vol: 20, Pages: 6992-7000, ISSN: 1530-437X
Velostat is a low-cost, low-profile electrical bagging material with piezoresistive properties, making it an attractive option for in-socket pressure sensing. The focus of this research was to explore the suitability of a Velostat-based system for providing real-time socket pressure profiles. The prototype system performance was explored through a series of bench tests to determine properties including accuracy, repeatability and hysteresis responses, and through participant testing with a single subject. The fabricated sensors demonstrated mean accuracy errors of 110 kPa, with significant cyclical and thermal drift effects of up to 0.00715 V/cycle and up to a 67% difference in voltage range, respectively. Despite these errors, the system was able to capture data within a prosthetic socket, aligning with expected contact and loading patterns for the socket and amputation type. Distinct pressure maps were obtained for standing and walking tasks, displaying loading patterns indicative of posture and gait phase. The system demonstrated utility for assessing contact and movement patterns within a prosthetic socket, potentially useful for improving socket fit, in a low-cost, low-profile and adaptable format. However, Velostat requires significant improvement in its electrical properties before proving suitable for accurate pressure measurement tools in lower limb prosthetics.
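As a hedged sketch of how a piezoresistive material like Velostat is typically read out (the abstract does not specify the circuit used), a simple voltage-divider conversion might look like this, with `v_in` and `r_fixed` as hypothetical values:

```python
def velostat_resistance(v_out, v_in=5.0, r_fixed=10_000.0):
    """Infer a piezoresistive sensor's resistance from a voltage-divider reading.

    Hypothetical circuit (not from the paper): a fixed resistor r_fixed in
    series with the sensor, supplied by v_in, with v_out measured across the
    sensor. Resistance then maps to pressure via a calibration curve obtained
    from bench tests such as those described above.
    """
    if not 0.0 < v_out < v_in:
        raise ValueError("v_out must lie strictly between 0 and v_in")
    # Divider equation: v_out = v_in * r_sensor / (r_fixed + r_sensor)
    return r_fixed * v_out / (v_in - v_out)
```

With these example component values, a 2.5 V reading corresponds to a sensor resistance equal to the fixed resistor (10 kΩ); as pressure increases and resistance drops, the reading falls toward 0 V.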
Purnomo D, Richter F, Bonner M, et al., 2020, Role of optimisation method on kinetic inverse modelling of biomass pyrolysis at the microscale, Fuel: the science and technology of fuel and energy, Vol: 262, ISSN: 0016-2361
Biomass pyrolysis is important to biofuel production and fire safety. Inverse modelling is an increasingly used technique to find values for the kinetic parameters that control pyrolysis. The quality of kinetic inverse modelling depends on, in order of importance, the quality of the experimental data, the kinetic model, and the optimisation method used. Unlike the former two components, the optimisation method chosen, i.e. the combination of algorithm and objective function, is rarely discussed in the literature. This work compares the accuracy and efficiency of five commonly used advanced algorithms (Genetic Algorithm, AMALGAM, Shuffled Complex Evolution, Cuckoo Search, and Multi-Start Nonlinear Program) and a simple algorithm (a Random Search) in finding the kinetic parameters for cellulose and wood pyrolysis at the microscale. These algorithms are combined with seven objective functions comprising concentrated and dispersed functions. The results show that for cellulose (simple chemistry) the use of an advanced optimisation algorithm is unnecessary, since a simple algorithm achieves similarly high accuracy with higher efficiency. However, for wood (complex chemistry) a combination of an advanced algorithm and a concentrated function greatly improves accuracy. Among the 25 possible combinations we investigated, Shuffled Complex Evolution with a mean square error objective function performed best, with 0.91% error in mass loss rate and 0.88 × 10^13 CPU time. These findings can guide the selection of the best optimisation method to use in inverse modelling of kinetic parameters, ensuring both accuracy and efficiency.
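The Random Search baseline can be sketched on a toy first-order kinetic model; the model, parameter bounds and trial count below are illustrative assumptions, not the paper's cellulose or wood reaction schemes:

```python
import numpy as np

rng = np.random.default_rng(0)

def mass_loss_rate(k, t, m0=1.0):
    """First-order decay, a toy stand-in for a pyrolysis kinetic model:
    dm/dt magnitude for m(t) = m0 * exp(-k t)."""
    return k * m0 * np.exp(-k * t)

# Synthetic "experimental" mass-loss-rate data with a known rate constant,
# playing the role of the microscale (e.g. TGA) measurements.
t = np.linspace(0.0, 5.0, 100)
k_true = 0.8
observed = mass_loss_rate(k_true, t)

def random_search(n_trials=2000, k_lo=0.01, k_hi=5.0):
    """Simple Random Search minimising mean square error: the baseline
    algorithm the paper compares against advanced optimisers."""
    best_k, best_mse = None, np.inf
    for _ in range(n_trials):
        k = rng.uniform(k_lo, k_hi)                       # propose a candidate
        mse = np.mean((mass_loss_rate(k, t) - observed) ** 2)
        if mse < best_mse:                                # keep the best so far
            best_k, best_mse = k, mse
    return best_k, best_mse

k_hat, mse = random_search()
```

For this one-parameter, noise-free problem the Random Search recovers the rate constant closely, mirroring the paper's finding that simple chemistry does not require an advanced optimiser.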
Martineau T, He S, Vaidyanathan R, et al., 2020, Optimizing Time-Frequency Feature Extraction and Channel Selection through Gradient Backpropagation to Improve Action Decoding based on Subthalamic Local Field Potentials, 42nd Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Publisher: IEEE, Pages: 3023-3026, ISSN: 1557-170X
Farzana W, Sarker F, Vaidyanathan R, et al., 2020, Communication Support Utilizing AAC for Verbally Challenged Children in Developing Countries During COVID-19 Pandemic, Pages: 39-50, ISSN: 1865-0929
Functional communication is indispensable for child development at all times, but during the COVID-19 pandemic, non-verbal children have become more anxious as social distancing and self-quarantine abruptly disrupt daily routines and professional support. These verbally challenged children require the support of Augmentative and Alternative Communication (AAC) to communicate. Therefore, during COVID-19, assistance must be provided remotely to these users by an AAC team involving caregivers, teachers and Speech Language Therapists (SLTs) to ensure collaborative learning and the development of non-verbal children's communication skills. However, most advanced AAC systems, such as Speech Generating Devices (SGD) and Picture Exchange Communication System (PECS) based mobile applications (Android & iOS), are designed for the context of developed countries and are less accessible in developing countries. Therefore, in this study, we focus on presenting feasible short-term strategies and prospective challenges, and, as a long-term strategy, a cloud-based framework entitled “Bolte Chai+”, an intelligent integrated collaborative learning platform for non-verbal children, parents, caregivers, teachers and SLTs. The intelligent analytics within the platform monitor each child's overall progress by tracking activity in the mobile application, and in turn support parents and the AAC team in concentrating on the child's individual abilities. We believe the proposed framework and strategies will empower non-verbal children and assist researchers and policy makers in establishing a definitive solution for implementing AAC as communication support in developing countries during the COVID-19 pandemic.
Sajal MSR, Ehsan MT, Vaidyanathan R, et al., 2020, UPDRS Label Assignment by Analyzing Accelerometer Sensor Data Collected from Conventional Smartphones, Pages: 173-182, ISBN: 9783030592769
The study of the characteristics of hand tremors in patients suffering from Parkinson’s disease (PD) offers an effective way to detect and assess the stage of the disease’s progression. During semi-quantitative evaluation, neurologists assign PD patients a score of 0–4 on the Unified Parkinson’s Disease Rating Scale (UPDRS) based on the intensity and prevalence of these tremors. This score can be bolstered by other modes of assessment, such as gait analysis, to increase the reliability of PD detection. With the availability of conventional smartphones with a built-in accelerometer sensor, it is possible to acquire 3-axis tremor and gait data easily and analyze them with a trained algorithm. Thus, we can remotely examine PD patients in their homes and connect them to trained neurologists if required. The objective of this study was to investigate the usability of smartphones for assessing motor impairments (i.e. tremors and gait) that can be analyzed from accelerometer sensor data. We obtained 98.5% detection accuracy and 91% UPDRS labeling accuracy for 52 PD patients and 20 healthy subjects. The result of this study indicates great promise for developing a remote system to detect, monitor, and support treatment of PD patients over long distances. It will be a tremendous help for the older population in developing countries where access to a trained neurologist is very limited. Also, in a pandemic situation like COVID-19, patients in developed countries can benefit from such a home-oriented PD detection and monitoring system.
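One simple accelerometer feature of the kind such systems compute is the dominant frequency within a tremor band; this sketch is illustrative only and does not reproduce the study's actual feature set or trained classifier:

```python
import numpy as np

def dominant_tremor_freq(accel, fs, band=(3.0, 8.0)):
    """Dominant frequency (Hz) of an accelerometer trace inside a tremor band.

    The 3-8 Hz band is an assumption covering typical parkinsonian rest
    tremor; UPDRS labelling in the study uses a trained algorithm, not this
    single feature.
    """
    accel = np.asarray(accel, float) - np.mean(accel)   # remove gravity/DC offset
    spectrum = np.abs(np.fft.rfft(accel))
    freqs = np.fft.rfftfreq(len(accel), d=1.0 / fs)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return freqs[mask][np.argmax(spectrum[mask])]

# Synthetic 5 Hz "tremor" plus a slow 1 Hz movement, sampled at 100 Hz
fs = 100.0
t = np.arange(0, 10, 1 / fs)
sig = 0.3 * np.sin(2 * np.pi * 5.0 * t) + 0.05 * np.sin(2 * np.pi * 1.0 * t)
peak = dominant_tremor_freq(sig, fs)
```

Restricting the search to the tremor band keeps slow voluntary movement (the 1 Hz component here) from masking the tremor peak.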
Castillo CSM, Atashzar SF, Vaidyanathan R, 2020, 3D-Mechanomyography: Accessing Deeper Muscle Information Non-Invasively for Human-Machine Interfacing, IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Publisher: IEEE, Pages: 1458-1463, ISSN: 2159-6255
Fadhil A, Kanneganti R, Gupta L, et al., 2019, Fusion of enhanced and synthetic vision system images for runway and horizon detection, Sensors, Vol: 19, Pages: 1-17, ISSN: 1424-8220
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting aircraft runways and horizons, as well as enhancing awareness of the surrounding terrain, is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step that aligns EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
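DWT sub-band fusion of two registered images can be sketched with a self-contained one-level Haar transform; the fusion rule below (mean of approximations, max-magnitude details) is a common generic choice and not necessarily one of the paper's four rules:

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar DWT (minimal stand-in for a wavelet library).
    Returns approximation (LL) and detail (LH, HL, HH) sub-bands."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row differences
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def haar_idwt2(LL, LH, HL, HH):
    """Inverse of haar_dwt2 (perfect reconstruction)."""
    a = np.zeros((LL.shape[0], LL.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    img = np.zeros((a.shape[0] * 2, a.shape[1]))
    img[0::2, :], img[1::2, :] = a + d, a - d
    return img

def fuse(evs, svs):
    """Fuse two registered images in the wavelet domain: average the
    approximation sub-bands, keep the larger-magnitude detail coefficients."""
    c1, c2 = haar_dwt2(evs), haar_dwt2(svs)
    fused = [(c1[0] + c2[0]) / 2.0]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return haar_idwt2(*fused)
```

The max-magnitude detail rule tends to preserve the sharper edges (e.g. runway outlines) from whichever sensor resolves them better, while averaged approximations blend the overall scene brightness.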
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.