Russell F, Takeda Y, Kormushev P, et al., 2021, Stiffness modulation in a humanoid robotic leg and knee, IEEE Robotics and Automation Letters, Vol: 6, Pages: 2563-2570, ISSN: 2377-3766
Stiffness modulation in walking is critical to maintain static/dynamic stability as well as to minimize energy consumption and impact damage. However, optimal, or even functional, stiffness parameterization remains unresolved in legged robotics. We introduce an architecture for stiffness control utilizing a bioinspired robotic limb consisting of a condylar knee joint and a leg with antagonistic actuation. The joint replicates the elastic ligaments of the human knee, providing tuneable compliance for walking. It further locks out at maximum extension, providing stability when standing. Compliance and friction losses between joint surfaces are derived as a function of ligament stiffness and length. Experimental studies validate utility through quantification of: 1) hip perturbation response; 2) payload capacity; and 3) static stiffness of the leg mechanism. Results prove that initiation and compliance at lock-out can be modulated independently of friction loss by changing ligament elasticity. Furthermore, increasing co-contraction or decreasing joint angle increases leg stiffness, although increased co-contraction is counterbalanced by decreased payload capacity. Findings have direct application in legged robots and transfemoral prosthetic knees, where biorobotic design could reduce energy expense while improving efficiency and stability. Future targeted impact involves increasing power/weight ratios in walking robots and artificial limbs for increased efficiency and precision in walking control.
Raposo de Lima M, Wairagkar M, Natarajan N, et al., 2021, Robotic telemedicine for mental health: a multimodal approach to improve human-robot engagement, Frontiers in Robotics and AI, Vol: 8, ISSN: 2296-9144
COVID-19 has severely impacted mental health in vulnerable demographics, in particular older adults, who face unprecedented isolation. Consequences, while globally severe, are acutely pronounced in low- and middle-income countries (LMICs) confronting pronounced gaps in resources and clinician accessibility. Social robots are well-recognized for their potential to support mental health, yet user compliance (i.e., trust) demands seamless affective human-robot interactions; natural ‘human-like’ conversations are required in simple, inexpensive, deployable platforms. We present the design, development, and pilot testing of a multimodal robotic framework fusing verbal (contextual speech) and nonverbal (facial expressions) social cues, aimed at improving engagement in human-robot interaction and ultimately facilitating mental health telemedicine during and beyond the COVID-19 pandemic. We report the design optimization of a hybrid face robot, which combines digital facial expressions based on mathematical affect space mapping with static 3D facial features. We further introduce a contextual virtual assistant with integrated cloud-based AI coupled to the robot’s facial representation of emotions, such that the robot adapts its emotional response to users’ speech in real time. Experiments with healthy participants demonstrate emotion recognition exceeding 90% for happy, tired, sad, angry, surprised, and stern/disgusted robotic emotions. When separated, stern and disgusted are occasionally transposed (70%+ accuracy overall) but are easily distinguishable from other emotions. A qualitative user experience analysis indicates an overall enthusiastic and engaged reception to human-robot multimodal interaction with the new framework. The robot has been modified to enable clinical telemedicine for cognitive engagement with older adults and people with dementia (PwD) in LMICs. The mechanically simple and low-cost social robot has been deployed in pilot tests to sup
Mancero Castillo C, Wilson S, Vaidyanathan R, et al., 2021, Wearable MMG-plus-one armband: evaluation of normal force on mechanomyography (MMG) to enhance human-machine interfacing, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 29, Pages: 196-205, ISSN: 1534-4320
In this paper, we introduce a new mode of mechanomyography (MMG) signal capture for enhancing the performance of human-machine interfaces (HMIs) through modulation of normal pressure at the sensor location. Utilizing this novel approach, increased MMG signal resolution is enabled by a tunable degree of freedom normal to the sensor-skin contact area. We detail the mechatronic design, experimental validation, and user study of an armband with embedded acoustic sensors demonstrating this capacity. The design is motivated by the nonlinear viscoelasticity of the tissue, which increases with normal surface pressure. This, in theory, results in higher conductivity of mechanical waves and hypothetically allows interfacing with deeper muscles, thus enhancing the discriminative information content of the signal space. Ten subjects (seven able-bodied and three trans-radial amputees) participated in a study consisting of the classification of hand gestures through MMG while increasing levels of contact force were administered. Four MMG channels were positioned around the forearm and placed over the flexor carpi radialis, brachioradialis, extensor digitorum communis, and flexor carpi ulnaris muscles. A total of 852 spectrotemporal features were extracted (213 features per channel) and passed through a Neighborhood Component Analysis (NCA) technique to select the most informative neurophysiological subspace of the features for classification. A linear support vector machine (SVM) then classified the intended motion of the user. The results indicate that increasing the normal force level between the MMG sensor and the skin can improve the discriminative power of the classifier, and that the corresponding pattern can be user-specific. These results have significant implications for embedding MMG sensors in sockets for prosthetic limb control and HMI.
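The feature-selection-plus-linear-classifier pipeline described in this abstract can be sketched in a few lines. As a simplified, hypothetical stand-in for NCA, this example ranks features by a Fisher-style discriminability score on synthetic two-class data; it is an illustration of the idea, not the paper's implementation:

```python
def fisher_score(feature_vals, labels):
    """Score one feature by between-class vs within-class spread
    (a simplified stand-in for the NCA selection used in the paper)."""
    groups = {}
    for x, y in zip(feature_vals, labels):
        groups.setdefault(y, []).append(x)
    means = {y: sum(v) / len(v) for y, v in groups.items()}
    grand = sum(feature_vals) / len(feature_vals)
    between = sum(len(v) * (means[y] - grand) ** 2 for y, v in groups.items())
    within = sum((x - means[y]) ** 2 for y, v in groups.items() for x in v)
    return between / within if within else 0.0

def select_top_features(X, y, k):
    """Return the indices of the k most discriminative feature columns."""
    n_features = len(X[0])
    scores = [fisher_score([row[j] for row in X], y) for j in range(n_features)]
    return sorted(range(n_features), key=lambda j: -scores[j])[:k]
```

On a toy dataset where column 0 cleanly separates the classes and column 1 is noise, `select_top_features(X, y, 1)` returns `[0]`; the selected subspace would then feed a linear classifier such as an SVM.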
Gardner M, Mancero Castillo C, Wilson S, et al., 2020, A multimodal intention detection sensor suite for shared autonomy of upper-limb robotic prostheses, Sensors, Vol: 20, ISSN: 1424-8220
Neurorobotic augmentation (e.g., robotic assist) is now in regular use to support individuals suffering from impaired motor functions. A major unresolved challenge, however, is the excessive cognitive load necessary for the human–machine interface (HMI). Grasp control remains one of the most challenging HMI tasks, demanding simultaneous, agile, and precise control of multiple degrees-of-freedom (DoFs) while following a specific timing pattern in the joint and human–robot task spaces. Most commercially available systems use either an indirect mode-switching configuration or a limited sequential control strategy, limiting activation to one DoF at a time. To address this challenge, we introduce a shared autonomy framework centred around a low-cost multi-modal sensor suite fusing: (a) mechanomyography (MMG) to estimate the intended muscle activation, (b) camera-based visual information for integrated autonomous object recognition, and (c) inertial measurement to enhance intention prediction based on the grasping trajectory. The complete system predicts user intent for grasp based on measured dynamical features during natural motions. A total of 84 motion features were extracted from the sensor suite, and tests were conducted with 10 able-bodied participants and 1 amputee participant grasping common household objects with a robotic hand. Real-time grasp classification accuracy using visual and motion features reached 100%, 82.5%, and 88.9% across all participants for detecting and executing grasping actions for a bottle, lid, and box, respectively. The proposed multimodal sensor suite is a novel approach for predicting different grasp strategies and automating task performance using a commercial upper-limb prosthetic device. The system also shows potential to improve the usability of modern neurorobotic systems due to the intuitive control design.
Ghosh AK, Burniston SF, Krentzel D, et al., 2020, A novel fetal movement simulator for the performance evaluation of vibration sensors for wearable fetal movement monitors, Sensors, Vol: 20, ISSN: 1424-8220
Fetal movements (FM) are an important factor in the assessment of fetal health. However, there is currently no reliable way to monitor FM outside clinical environs. While extensive research has been carried out using accelerometer-based systems to monitor FM, the desired accuracy of detection is yet to be achieved. A major challenge has been the difficulty of testing and calibrating sensors at the pre-clinical stage. Little is known about fetal movement features, and clinical trials involving pregnant women can be expensive and ethically stringent. To address these issues, we introduce a novel FM simulator, which can be used to test the responses of sensor arrays in a laboratory environment. The design uses a silicone-based membrane with material properties similar to those of a gravid abdomen to mimic the vibrations due to fetal kicks. The simulator incorporates mechanisms to pre-stretch the membrane and to produce kicks similar to those of a fetus. As a case study, we present results from a comparative study of an acoustic sensor, an accelerometer, and a piezoelectric diaphragm as candidate vibration sensors for a wearable FM monitor. We find that the acoustic sensor and the piezoelectric diaphragm are better equipped than the accelerometer to determine durations, intensities, and locations of kicks, as they have a significantly greater response to changes in these conditions than the accelerometer. Additionally, we demonstrate that the acoustic sensor and the piezoelectric diaphragm can detect weaker fetal movements (threshold wall displacements are less than 0.5 mm) compared to the accelerometer (threshold wall displacement is 1.5 mm), with a trade-off of higher power signal artefacts. Finally, we find that the piezoelectric diaphragm produces better signal-to-noise ratios than the other two sensors in most cases, making it a promising new candidate sensor for wearable FM monitors. We believe that the FM simulator represents a key development towards enabl
Sajal MSR, Ehsan MT, Vaidyanathan R, et al., 2020, Telemonitoring Parkinson's disease using machine learning by combining tremor and voice analysis., Brain Inform, Vol: 7, ISSN: 2198-4018
BACKGROUND: With the growing number of the aged population, the number of people affected by Parkinson's disease (PD) is also mounting. Unfortunately, due to insufficient resources and awareness in underdeveloped countries, proper and timely PD detection is highly challenging. Besides, PD patients' symptoms are neither the same, nor do they all become pronounced at the same stage of the illness. Therefore, this work aims to combine more than one symptom (rest tremor and voice degradation) by collecting data remotely using smartphones and to detect PD with the help of a cloud-based machine learning system for telemonitoring PD patients in developing countries. METHOD: The proposed system receives rest tremor and vowel phonation data acquired by smartphones with built-in accelerometer and voice recorder sensors. The data are primarily collected from diagnosed PD patients and healthy people for building and optimizing machine learning models that exhibit higher performance. After that, data from newly suspected PD patients are collected, and the trained algorithms are evaluated to detect PD. Based on the majority vote from those algorithms, PD-detected patients are connected with a nearby neurologist for consultation. Upon receiving patients' feedback after being diagnosed by the neurologist, the system may update the model by retraining using the latest data. The system also requests detected patients periodically to upload new data to track their disease progress. RESULT: The highest accuracy in PD detection using offline data was [Formula: see text] from voice data and [Formula: see text] from tremor data when used separately. In both cases, k-nearest neighbors (kNN) gave the highest accuracy over support vector machine (SVM) and naive Bayes (NB). The application of the maximum relevance minimum redundancy (MRMR) feature selection method showed that by selecting different feature sets based on the patient's gender, we could improve the detection accuracy. This st
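The decision stage described in this abstract (kNN classifiers per modality, combined by majority vote) can be illustrated with a minimal sketch. The toy feature points and the 'PD'/'healthy' labels below are illustrative placeholders, not the paper's trained models or data:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classic k-nearest-neighbour vote over Euclidean distance."""
    dists = sorted((math.dist(x, xi), yi) for xi, yi in zip(train_X, train_y))
    return Counter(y for _, y in dists[:k]).most_common(1)[0][0]

def combined_decision(votes):
    """Majority vote across per-modality detectors (e.g. voice, tremor);
    ties are resolved in favour of referral to a neurologist ('PD')."""
    counts = Counter(votes)
    return "PD" if counts["PD"] >= counts["healthy"] else "healthy"
```

For example, `combined_decision([voice_vote, tremor_vote])` would flag a patient for referral whenever either modality (or both) votes 'PD', matching the referral-oriented tie-breaking described above.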
Russell F, Kormushev P, Vaidyanathan R, et al., 2020, The impact of ACL laxity on a bicondylar robotic knee and implications in human joint biomechanics, IEEE Transactions on Biomedical Engineering, Vol: 67, Pages: 2817-2827, ISSN: 0018-9294
Objective: Elucidating the role of structural mechanisms in the knee can improve joint surgeries, rehabilitation, and understanding of biped locomotion. Identification of key features, however, is challenging due to limitations in simulation and in-vivo studies. In particular, the coupling of the patello-femoral and tibio-femoral joints with ligaments, and its impact on joint mechanics and movement, is not understood. We investigate this coupling experimentally through the design and testing of a robotic sagittal plane model. Methods: We constructed a sagittal plane robot comprising: 1) elastic links representing cruciate ligaments; 2) a bi-condylar joint; 3) a patella; and 4) hamstring and quadriceps actuators. Stiffness and geometry were derived from anthropometric data. 10°–110° squatting tests were executed at speeds of 0.1–0.25 Hz over a range of anterior cruciate ligament (ACL) slack lengths. Results: Increasing ACL length compromised joint stability, yet did not impact quadriceps mechanical advantage or the force required for squat. The trend was consistent through varying condyle contact point and ligament force changes. Conclusion: The geometry of the condyles allows the ratio of quadriceps to patella tendon force to compensate for contact point changes imparted by the removal of the ACL. Thus the system maintains a constant mechanical advantage. Significance: The investigation uncovers critical features of human knee biomechanics. Findings contribute to the understanding of knee ligament damage, inform procedures for knee surgery and orthopaedic implant design, and support the design of trans-femoral prosthetics and walking robots. Results further demonstrate the utility of robotics as a powerful means of studying human joint biomechanics.
Masen MA, Chung A, Dawczyk JU, et al., 2020, Evaluating lubricant performance to reduce COVID-19 PPE-related skin injury, PLoS One, Vol: 15, Pages: e0239363-e0239363, ISSN: 1932-6203
Background: Healthcare workers around the world are experiencing skin injury due to the extended use of personal protective equipment (PPE) during the COVID-19 pandemic. These injuries are the result of high shear stresses acting on the skin, caused by friction with the PPE. This study aims to provide a practical lubricating solution for frontline medical staff working 4+ hour shifts wearing PPE. Methods: A literature review into skin friction and skin lubrication was conducted to identify products and substances that can reduce friction. We evaluated the lubricating performance of commercially available products in vivo using a custom-built tribometer. Findings: Most lubricants provide a strong initial friction reduction, but only a few products provide lubrication that lasts for four hours. The response of skin to friction is a complex interplay between the lubricating properties and durability of the film deposited on the surface and the response of the skin to the lubricating substance, which includes epidermal absorption, occlusion, and water retention. Interpretation: Talcum powder, a petrolatum-lanolin mixture, and a coconut oil-cocoa butter-beeswax mixture showed excellent long-lasting low friction. Moisturising the skin results in excessive friction, and the use of products aimed at ‘moisturising without leaving a greasy feel’ should be avoided. Most investigated dressings also demonstrated excellent performance.
Rawnaque FS, Rahman KM, Anwar SF, et al., 2020, Technological advancements and opportunities in Neuromarketing: a systematic review, Brain Informatics, Vol: 7, ISSN: 2198-4018
Neuromarketing has become an academic and commercial area of interest, as advancements in neural recording techniques and interpretation algorithms have made it an effective tool for recognizing the unspoken responses of consumers to marketing stimuli. This article presents the first systematic review of technological advancements in the Neuromarketing field over the last 5 years. For this purpose, the authors selected and reviewed a total of 57 relevant publications from valid databases that directly contribute to the Neuromarketing field with basic or empirical research findings. The review finds consumer goods to be the prevalent marketing stimuli, used in both product and promotion forms in the selected studies. A trend of analyzing frontal and prefrontal alpha band signals is observed among the consumer emotion recognition-based experiments, corresponding to frontal alpha asymmetry theory. Electroencephalography (EEG) is favored by many researchers over functional magnetic resonance imaging (fMRI) in video advertisement-based Neuromarketing experiments, apparently due to its low cost and high temporal resolution. Physiological response measuring techniques such as eye tracking, skin conductance recording, heart rate monitoring, and facial mapping are also found in these empirical studies, either exclusively or in parallel with brain recordings. Alongside traditional filtering methods, independent component analysis (ICA) was the most common method for artifact removal from neural signals. In consumer response prediction and classification, Artificial Neural Networks (ANN), Support Vector Machines (SVM), and Linear Discriminant Analysis (LDA) performed with the highest average accuracy among the machine learning algorithms used in these studies. The authors hope this review will assist future researchers with vital information in the field of Neuromarketing for making novel contributions.
Madgwick SOH, Wilson S, Turk R, et al., 2020, An extended complementary filter (ECF) for full-body MARG orientation estimation, IEEE/ASME Transactions on Mechatronics, Vol: 25, Pages: 2054-2064, ISSN: 1083-4435
Inertial sensing suites now permeate all forms of smart automation, yet a plateau exists in real-world derivation of global orientation. Magnetic field fluctuations and inefficient sensor fusion still inhibit deployment. We introduce a new algorithm, an Extended Complementary Filter (ECF), to derive 3D rigid body orientation from inertial sensing suites addressing these challenges. The ECF combines the computational efficiency of classic complementary filters with improved accuracy compared to popular optimization-based filters. We present a complete formulation of the algorithm, including an extension to address the challenge of orientation accuracy in the presence of fluctuating magnetic fields. Performance is tested under a variety of conditions and benchmarked against the commonly used gradient descent algorithm (GDA) for inertial sensor fusion. Results demonstrate improved efficiency, with the ECF achieving convergence 30% faster than standard alternatives. We further demonstrate improved robustness to sources of magnetic interference in pitch and roll and to fast changes of orientation in the yaw direction. The ECF has been implemented at the core of a wearable rehabilitation system tracking the movement of stroke patients for home telehealth. The ECF and accompanying magnetic disturbance rejection algorithm enable previously unachievable real-time patient movement feedback in the form of a full virtual human (avatar), even in the presence of magnetic disturbance. Algorithm efficiency and accuracy have also spawned an entire commercial product line released by the company x-io. We believe the ECF and accompanying magnetic disturbance routines are key enablers for future widespread use of wearable systems with the capacity for global orientation tracking.
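The complementary-filter principle underlying the ECF can be illustrated with a minimal 1-D sketch: integrate the gyroscope for high-frequency accuracy and blend in the accelerometer tilt angle to cancel drift. The published algorithm operates on quaternions over full MARG data, so this is only an analogue of the idea, and the 0.98 gain is an assumed illustrative value:

```python
def complementary_filter(gyro_rates, accel_angles, dt, gain=0.98):
    """Fuse gyroscope rates (smooth but drifting) with accelerometer
    tilt angles (noisy but drift-free). A 1-D analogue of the ECF idea;
    the published algorithm operates on quaternions for 3-D MARG data."""
    angle = accel_angles[0]
    estimates = [angle]
    for rate, acc in zip(gyro_rates, accel_angles[1:]):
        predicted = angle + rate * dt                 # high-frequency: integrate gyro
        angle = gain * predicted + (1 - gain) * acc   # low-frequency: accel correction
        estimates.append(angle)
    return estimates
```

With a stationary sensor (zero gyro rate, constant accelerometer angle) the estimate stays locked to the true angle; a biased gyro alone would drift linearly, which the accelerometer term continuously corrects.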
Huo W, Angeles P, Tai YF, et al., 2020, A heterogeneous sensing suite for multisymptom quantification of Parkinson’s disease, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 28, Pages: 1397-1406, ISSN: 1534-4320
Parkinson’s disease (PD) is the second most common neurodegenerative disease, affecting millions worldwide. Bespoke subject-specific treatment (medication or deep brain stimulation (DBS)) is critical for management, yet depends on precise assessment of the cardinal PD symptoms: bradykinesia, rigidity, and tremor. Clinician diagnosis is the basis of treatment, yet it allows only a cross-sectional assessment of symptoms, which can vary on an hourly basis, and is liable to inter- and intra-rater subjectivity across human examiners. Automated symptomatic assessment has attracted significant interest for optimising treatment regimens between clinician visits; however, no wearable has the capacity to simultaneously assess all three cardinal symptoms. Challenges in the measurement of rigidity, mapping muscle activity out-of-clinic, and sensor fusion have inhibited translation. In this study, we address all three through a novel wearable sensor system and learning algorithms. The sensor system is composed of a force sensor, two inertial measurement units (IMUs), and four custom mechanomyography (MMG) sensors. The system was tested in its capacity to predict Unified Parkinson’s Disease Rating Scale (UPDRS) scores based on quantitative assessment of bradykinesia, rigidity, and tremor in PD patients. 23 PD patients were tested with the sensor system in parallel with exams conducted by treating clinicians, and 10 healthy subjects were recruited as a control group. Results prove the system accurately predicts UPDRS scores for all symptoms (85.4% match on average with physician assessment) and discriminates between healthy subjects and PD patients (96.6% accuracy on average). MMG features can also be used for remote monitoring of severity and fluctuations in PD symptoms out-of-clinic. This closed-loop feedback system enables individually tailored and regularly updated treatment, facilitating better outcomes for a very large patient population.
Lai J, Nowlan NC, Vaidyanathan R, et al., 2020, The use of actograph in the assessment of fetal well-being, Journal of Maternal-Fetal and Neonatal Medicine, Vol: 33, Pages: 2116-2121, ISSN: 1476-4954
PURPOSE: Third trimester maternal perception of fetal movements is often used to assess fetal well-being. However, its true clinical value is unknown, primarily because of the variability in subjective quantification. The actograph, a technology available on most cardiotocograph machines, quantifies movements, but has never previously been investigated in relation to fetal health and existing monitoring devices. The objective of this study was to quantify actograph output in healthy third trimester pregnancies and investigate this in relation to other methods of assessing fetal well-being. METHODS: Forty-two women between 24 and 34 weeks of gestation underwent an ultrasound scan followed by a computerized cardiotocograph (CTG). Post-capture analysis of the actograph recording was performed and expressed as a percentage of activity over time. The actograph output results were analyzed in relation to Doppler, ultrasound, and CTG findings expressed as z-scores normalized for gestation. RESULTS: There was a significant association between actograph output and estimated fetal weight Z-score (R = 0.546, p ≤ .005). This activity was not related to absolute estimated fetal weight. Increased actograph activity was negatively correlated with umbilical artery pulsatility index Z-score (R = -0.306, p = .049) and middle cerebral artery pulsatility index Z-score (R = -0.390, p = .011). CONCLUSION: Fetal movements assessed by the actograph are associated both with fetal size in relation to gestation and with fetoplacental Doppler parameters. It is not the case that larger babies move more, however, as actograph output was related only to estimated fetal weight z-score, not absolute weight. These findings suggest a plausible link between the frequency of fetal movements and established markers of fetal health.
Meagher C, Franco E, Turk R, et al., 2020, New advances in mechanomyography sensor technology and signal processing: validity and intrarater reliability of recordings from muscle, Journal of Rehabilitation and Assistive Technologies Engineering, Vol: 7, ISSN: 2055-6683
Introduction: The Mechanical Muscle Activity with Real-time Kinematics project aims to develop a device incorporating wearable sensors for arm rehabilitation following stroke. These will record kinematic activity using inertial measurement units and mechanical muscle activity. The gold standard for measuring muscle activity is electromyography; however, mechanomyography offers an appropriate alternative for our home-based rehabilitation device. We have filed a patent for a new laboratory-tested device that combines an inertial measurement unit with mechanomyography. We report on the validity and reliability of the mechanomyography against electromyography sensors. Methods: In 18 healthy adults (27–82 years), mechanomyography and electromyography recordings were taken from the forearm flexor and extensor muscles during voluntary contractions. Isometric contractions were performed at different percentages of maximal force to examine the validity of mechanomyography. The root-mean-square of mechanomyography and electromyography was measured during 1 s epochs of isometric flexion and extension. Dynamic contractions were recorded during a tracking task on two days, one week apart, to examine the reliability of muscle onset timing. Results: Reliability of mechanomyography onset was high (intraclass correlation coefficient = 0.78) and was comparable with electromyography (intraclass correlation coefficient = 0.79). The correlation between force and mechanomyography was high (R2 = 0.94). Conclusion: The mechanomyography device records valid and reliable signals of mechanical muscle activity on different days.
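The root-mean-square amplitude over 1 s epochs used in the methods above is straightforward to sketch; the sampling rate and signal values below are illustrative assumptions, not the study's recordings:

```python
import math

def rms_epochs(signal, fs, epoch_s=1.0):
    """Root-mean-square amplitude of a muscle signal over consecutive
    non-overlapping epochs of epoch_s seconds (fs = sampling rate in Hz)."""
    n = int(fs * epoch_s)
    out = []
    for start in range(0, len(signal) - n + 1, n):
        window = signal[start:start + n]
        out.append(math.sqrt(sum(x * x for x in window) / n))
    return out
```

For instance, a four-sample signal at fs = 2 Hz yields two 1 s epochs, each summarised by a single RMS value; the same per-epoch measure applies to both MMG and EMG channels.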
Hopkins M, Vaidyanathan R, McGregor AH, 2020, Examination of the performance characteristics of velostat as an in-socket pressure sensor, IEEE Sensors Journal, Vol: 20, Pages: 6992-7000, ISSN: 1530-437X
Velostat is a low-cost, low-profile electrical packaging material with piezoresistive properties, making it an attractive option for in-socket pressure sensing. The focus of this research was to explore the suitability of a Velostat-based system for providing real-time socket pressure profiles. The prototype system performance was explored through a series of bench tests to determine properties including accuracy, repeatability, and hysteresis responses, and through participant testing with a single subject. The fabricated sensors demonstrated mean accuracy errors of 110 kPa and significant cyclical and thermal drift effects, of up to 0.00715 V/cycle and leading to up to a 67% difference in voltage range, respectively. Despite these errors, the system was able to capture data within a prosthetic socket, aligning with expected contact and loading patterns for the socket and amputation type. Distinct pressure maps were obtained for standing and walking tasks, displaying loading patterns indicative of posture and gait phase. The system demonstrated utility for assessing contact and movement patterns within a prosthetic socket, potentially useful for improving socket fit, in a low-cost, low-profile, and adaptable format. However, Velostat requires significant improvement in its electrical properties before proving suitable for accurate pressure measurement tools in lower limb prosthetics.
Purnomo D, Richter F, Bonner M, et al., 2020, Role of optimisation method on kinetic inverse modelling of biomass pyrolysis at the microscale, Fuel: the science and technology of fuel and energy, Vol: 262, ISSN: 0016-2361
Biomass pyrolysis is important to biofuel production and fire safety. Inverse modelling is an increasingly used technique to find values for the kinetic parameters that control pyrolysis. The quality of kinetic inverse modelling depends on, in order of importance, the quality of the experimental data, the kinetic model, and the optimisation method used. Unlike the two former components, the optimisation method chosen, i.e. the combination of algorithm and objective function, is rarely discussed in the literature. This work compares the accuracy and efficiency of five commonly used advanced algorithms (Genetic Algorithm, AMALGAM, Shuffled Complex Evolution, Cuckoo Search, and Multi-Start Nonlinear Program) and a simple algorithm (Random Search) in finding the kinetic parameters for cellulose and wood pyrolysis at the microscale. These algorithms are combined with seven objective functions comprising concentrated and dispersed functions. The results show that for cellulose (simple chemistry) the use of an advanced optimisation algorithm is unnecessary, since a simple algorithm achieves similarly high accuracy with higher efficiency. However, for wood (complex chemistry) a combination of an advanced algorithm and a concentrated function greatly improves accuracy. Among the 25 possible combinations we investigated, Shuffled Complex Evolution with a mean square error objective function performed best, with 0.91% error in mass loss rate and 0.88 × 10¹³ CPU time. These findings can guide the selection of the best optimisation method to use in inverse modelling of kinetic parameters, ensuring both accuracy and efficiency.
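The simple Random Search baseline discussed above can be sketched for a toy inverse-modelling problem; the single-step exponential mass-loss model, parameter range, and trial count are illustrative assumptions, not the paper's kinetic scheme:

```python
import math
import random

def mass_loss(k, times):
    """Toy single-step first-order model: remaining mass = exp(-k * t)."""
    return [math.exp(-k * t) for t in times]

def random_search(times, observed, k_range=(0.0, 2.0), n_trials=5000, seed=0):
    """Invert the model by random search, minimising the mean square error
    between predicted and observed mass -- the 'simple algorithm' baseline."""
    rng = random.Random(seed)
    best_k, best_mse = None, float("inf")
    for _ in range(n_trials):
        k = rng.uniform(*k_range)
        pred = mass_loss(k, times)
        mse = sum((p - o) ** 2 for p, o in zip(pred, observed)) / len(times)
        if mse < best_mse:
            best_k, best_mse = k, mse
    return best_k, best_mse

times = [0.0, 0.5, 1.0, 1.5, 2.0]
observed = mass_loss(0.7, times)   # synthetic "experiment" with true k = 0.7
k_fit, err = random_search(times, observed)
```

For a one-parameter model like this, random search recovers the rate constant closely; the paper's point is that such a baseline suffices for simple chemistry (cellulose), while multi-parameter wood kinetics benefit from advanced algorithms such as Shuffled Complex Evolution.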
Sajal MSR, Ehsan MT, Vaidyanathan R, et al., 2020, UPDRS Label Assignment by Analyzing Accelerometer Sensor Data Collected from Conventional Smartphones, Pages: 173-182, ISBN: 9783030592769
The study of the characteristics of hand tremors in patients suffering from Parkinson’s disease (PD) offers an effective way to detect and assess the stage of the disease’s progression. During semi-quantitative evaluation, neurologists label PD patients with one of the (0–4) Unified Parkinson’s Disease Rating Scale (UPDRS) scores based on the intensity and prevalence of these tremors. This score can be bolstered by other modes of assessment, such as gait analysis, to increase the reliability of PD detection. With the availability of conventional smartphones with a built-in accelerometer sensor, it is possible to acquire 3-axis tremor and gait data very easily and analyze them with a trained algorithm. Thus we can remotely examine PD patients from their homes and connect them to trained neurologists if required. The objective of this study was to investigate the usability of smartphones for assessing motor impairments (i.e. tremors and gait) that can be analyzed from accelerometer sensor data. We obtained 98.5% detection accuracy and 91% UPDRS labeling accuracy for 52 PD patients and 20 healthy subjects. The results of this study indicate great promise for developing a remote system to detect, monitor, and support PD patients over long distances. It will be a tremendous help for the older population in developing countries where access to a trained neurologist is very limited. Also, in a pandemic situation like COVID-19, patients in developed countries can benefit from such a home-oriented PD detection and monitoring system.
Castillo CSM, Atashzar SF, Vaidyanathan R, 2020, 3D-Mechanomyography: Accessing Deeper Muscle Information Non-Invasively for Human-Machine Interfacing, IEEE/ASME International Conference on Advanced Intelligent Mechatronics (AIM), Publisher: IEEE, Pages: 1458-1463, ISSN: 2159-6255
Farzana W, Sarker F, Vaidyanathan R, et al., 2020, Communication Support Utilizing AAC for Verbally Challenged Children in Developing Countries During COVID-19 Pandemic, Pages: 39-50, ISSN: 1865-0929
Functional communication is indispensable for child development at all times, but during COVID-19, non-verbal children have become more anxious under social distancing and self-quarantine due to the sudden disruption of daily routines and professional support. These verbally challenged children require Augmentative and Alternative Communication (AAC) support to communicate. During COVID-19, assistance must therefore be provided remotely to these users by an AAC team involving caregivers, teachers, and speech and language therapists (SLTs) to ensure collaborative learning and the development of non-verbal children's communication skills. However, most advanced AAC tools, such as Speech Generating Devices (SGDs) and Picture Exchange Communication System (PECS)-based mobile applications (Android and iOS), are designed for the circumstances of developed countries and are less accessible in developing countries. In this study, we therefore present feasible short-term strategies and prospective challenges, and, as a long-term strategy, a cloud-based framework entitled “Bolte Chai+”: an intelligent integrated collaborative learning platform for non-verbal children, parents, caregivers, teachers, and SLTs. Intelligent analytics within the platform monitor each child's overall progress by tracking activity in the mobile application, in turn supporting parents and the AAC team in concentrating on the child's individual abilities. We believe the proposed framework and strategies will empower non-verbal children and assist researchers and policy makers in establishing a definitive solution for AAC-based communication support in developing countries during the COVID-19 pandemic.
Martineau T, He S, Vaidyanathan R, et al., 2020, Optimizing Time-Frequency Feature Extraction and Channel Selection through Gradient Backpropagation to Improve Action Decoding based on Subthalamic Local Field Potentials, 42nd Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Publisher: IEEE, Pages: 3023-3026, ISSN: 1557-170X
Fadhil A, Kanneganti R, Gupta L, et al., 2019, Fusion of enhanced and synthetic vision system images for runway and horizon detection, Sensors, Vol: 19, Pages: 1-17, ISSN: 1424-8220
Networked operation of unmanned air vehicles (UAVs) demands fusion of information from disparate sources for accurate flight control. In this investigation, a novel sensor fusion architecture for detecting runways and horizons, as well as enhancing awareness of surrounding terrain, is introduced based on fusion of enhanced vision system (EVS) and synthetic vision system (SVS) images. EVS and SVS image fusion has yet to be implemented in real-world situations due to signal misalignment. We address this through a registration step that aligns EVS and SVS images. Four fusion rules combining discrete wavelet transform (DWT) sub-bands are formulated, implemented, and evaluated. The resulting procedure is tested on real EVS-SVS image pairs and on pairs containing simulated turbulence. Evaluations reveal that runways and horizons can be detected accurately even in poor visibility. Furthermore, it is demonstrated that different aspects of EVS and SVS images can be emphasized by using different DWT fusion rules. The procedure is autonomous throughout landing, irrespective of weather. The fusion architecture developed in this study holds promise for incorporation into manned heads-up displays (HUDs) and UAV remote displays to assist pilots landing aircraft in poor lighting and varying weather. The algorithm also provides a basis for rule selection in other signal fusion applications.
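As a minimal 1-D sketch of DWT sub-band fusion (not the paper's four rules, which are not reproduced here), the following assumes a single-level Haar transform and one common fusion heuristic: average the approximation bands (smooth content) and keep the larger-magnitude detail coefficients (edges such as runway outlines).

```python
import numpy as np

SQRT2 = np.sqrt(2.0)

def haar_dwt(x):
    """Single-level Haar DWT: approximation and detail sub-bands."""
    a = (x[0::2] + x[1::2]) / SQRT2
    d = (x[0::2] - x[1::2]) / SQRT2
    return a, d

def haar_idwt(a, d):
    """Inverse single-level Haar DWT (perfect reconstruction)."""
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / SQRT2
    x[1::2] = (a - d) / SQRT2
    return x

def fuse(evs, svs):
    """Fuse two aligned signals in the wavelet domain: average the
    approximation bands, keep the stronger detail coefficients."""
    a1, d1 = haar_dwt(evs)
    a2, d2 = haar_dwt(svs)
    a = 0.5 * (a1 + a2)
    d = np.where(np.abs(d1) >= np.abs(d2), d1, d2)
    return haar_idwt(a, d)

# sanity check: fusing a signal with itself returns it unchanged
x = np.array([1.0, 4.0, 2.0, 8.0])
print(fuse(x, x))   # → [1. 4. 2. 8.]
```

Extending this to 2-D images (rows then columns) and to the multiple rules evaluated in the paper follows the same sub-band pattern.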
Wilson S, Eberle H, Hayashi Y, et al., 2019, Formulation of a new gradient descent MARG orientation algorithm: case study on robot teleoperation, Mechanical Systems and Signal Processing, Vol: 130, Pages: 183-200, ISSN: 0888-3270
We introduce a novel magnetic angular rate gravity (MARG) sensor fusion algorithm for inertial measurement. The new algorithm improves on the popular gradient descent (‘Madgwick’) algorithm, increasing accuracy and robustness while preserving computational efficiency. Analytic and experimental results demonstrate faster convergence for multiple variations of the algorithm through changing magnetic inclination. Furthermore, decoupling of magnetic field variance from roll and pitch estimation is proven for enhanced robustness. The algorithm is validated in a human-machine interface (HMI) case study involving hardware implementation for wearable robot teleoperation, both in Virtual Reality (VR) and in real-time on a 14 degree-of-freedom (DoF) humanoid robot. The experiment fuses inertial (movement) and mechanomyography (MMG) muscle sensing to control robot arm movement and grasp simultaneously, demonstrating algorithm efficacy and the capacity to interface with other physiological sensors. To our knowledge, this is the first such formulation and the first fusion of inertial measurement and MMG in HMI. We believe the new algorithm holds the potential to impact a very wide range of inertial measurement applications where full orientation estimation is necessary. The physiological sensor synthesis and hardware interface further provide a foundation for robotic teleoperation systems with the robustness necessary for use in the field.
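To make the gradient descent family concrete, here is the accelerometer-only correction step of a Madgwick-style filter, the baseline this paper improves upon (not the paper's new algorithm); the step size and iteration count are illustrative.

```python
import numpy as np

def grad_step(q, acc, mu=0.05):
    """One gradient-descent step pulling quaternion q = [w, x, y, z]
    into agreement with a normalised accelerometer (gravity) reading.
    This is the accelerometer objective of a Madgwick-style filter."""
    w, x, y, z = q
    ax, ay, az = acc / np.linalg.norm(acc)
    # error between gravity predicted from q and the measured direction
    f = np.array([
        2.0 * (x * z - w * y) - ax,
        2.0 * (w * x + y * z) - ay,
        2.0 * (0.5 - x * x - y * y) - az,
    ])
    # Jacobian of the predicted gravity direction w.r.t. q
    J = np.array([
        [-2.0 * y, 2.0 * z, -2.0 * w, 2.0 * x],
        [ 2.0 * x, 2.0 * w,  2.0 * z, 2.0 * y],
        [ 0.0,    -4.0 * x, -4.0 * y, 0.0],
    ])
    q = q - mu * (J.T @ f)            # descend the error surface
    return q / np.linalg.norm(q)      # stay on the unit sphere

# demo: converge toward gravity tilted 0.2 rad about the y-axis
q = np.array([1.0, 0.0, 0.0, 0.0])
acc = np.array([-np.sin(0.2), 0.0, np.cos(0.2)])
for _ in range(500):
    q = grad_step(q, acc)
print(q)   # y component approaches sin(0.1)
```

A full MARG filter adds the analogous magnetometer objective and blends this correction with gyroscope integration at each sample.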
Natarajan N, Vaitheswaran S, De Lima MR, et al., 2019, P4-630: Use of a hybrid face robot in dementia care: understanding feasibility in India, Alzheimer's & Dementia, Vol: 15, Pages: P1569-P1569, ISSN: 1552-5260
Woodward R, Stokes M, Shefelbine S, et al., 2019, Segmenting mechanomyography measures of muscle activity phases using inertial data, Scientific Reports, Vol: 9, ISSN: 2045-2322
Electromyography (EMG) is the standard technology for monitoring muscle activity in laboratory environments, either using surface electrodes or fine wire electrodes inserted into the muscle. Due to limitations such as cost, complexity, and technical factors, including skin impedance with surface EMG and the invasive nature of fine wire electrodes, EMG is impractical for use outside of a laboratory environment. Mechanomyography (MMG) is an alternative to EMG, which shows promise in pervasive applications. The present study used an exerting squat-based task to induce muscle fatigue. MMG and EMG amplitude and frequency were compared before, during, and after the squatting task. Combining MMG with inertial measurement unit (IMU) data enabled segmentation of muscle activity at specific points: entering, holding, and exiting the squat. Results show MMG measures of muscle activity were similar to EMG in timing, duration, and magnitude during the fatigue task. The size, cost, unobtrusive nature, and usability of the MMG/IMU technology used, paired with the similar results compared to EMG, suggest that such a system could be suitable in uncontrolled natural environments such as within the home.
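A minimal sketch of the inertial segmentation idea (threshold and labels are illustrative assumptions, not the study's values): label each sample as moving or holding from the gyroscope magnitude, then collapse the labels into contiguous phases that delimit MMG analysis windows.

```python
import numpy as np

def segment_phases(gyro_mag, thresh=0.5):
    """Label each sample 'move' or 'hold' from gyro magnitude, then
    collapse the labels into (label, start, end) segments."""
    moving = gyro_mag > thresh
    segments = []
    start = 0
    for i in range(1, len(moving)):
        if moving[i] != moving[i - 1]:          # phase boundary
            segments.append(("move" if moving[i - 1] else "hold", start, i))
            start = i
    segments.append(("move" if moving[-1] else "hold", start, len(moving)))
    return segments

# toy trace: still, entering the squat, holding, exiting
sig = np.array([0.1, 0.9, 1.1, 0.2, 0.1, 0.8])
print(segment_phases(sig))
```

Each returned segment would then index into the concurrently recorded MMG signal for per-phase amplitude and frequency analysis.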
Formstone L, Pucek M, Wilson S, et al., 2019, Myographic Information Enables Hand Function Classification in Automated Fugl-Meyer Assessment, 9th IEEE/EMBS International Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 239-242, ISSN: 1948-3546
Harkin P, Vaidyanathan R, Morad S, 2019, Concentric joint connectors for form-changing space frames, 7th International Conference on Structural Engineering, Mechanics and Computation (SEMC), Publisher: CRC PRESS-BALKEMA, Pages: 977-982
Russell F, Vaidyanathan R, Ellison P, 2018, A kinematic model for the design of a bicondylar mechanical knee, 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), Publisher: IEEE, Pages: 750-755
In this paper we present a design methodology for a bicondylar joint that mimics many of the physical mechanisms of the human knee. We replicate the elastic ligaments and the sliding and rolling joint surfaces. As a result, the centre of rotation and the moment arm of the quadriceps change as a function of flexion angle in a similar way to the human knee. This yields a larger moment arm in the mid-range of motion, where it is most needed for high-load tasks, and a smaller moment arm at the extremes, reducing the required actuator displacement. This is anticipated to improve the performance-to-weight ratio of legged devices for tasks such as stair ascent and sit-to-stand. In the design process, ligament attachment positions, the femur profile, and ligament lengths were taken from cadaver studies. This information was then used as input to a simplified kinematic computer model to design a valid profile for the tibial condyle. A physical model was then tested on a custom-built squatting robot. Although the ligament lengths deviated from the designed values, the robot moment arm still matched the model to within 6.1% on average, showing that the simplified model is an effective design tool for this type of joint. It is anticipated that this design, when employed in walking robots, prostheses, or exoskeletons, will improve the high-load task capability of these devices. In this paper we have outlined and validated a design method to begin to achieve this goal.
Caulcrick C, Russell F, Wilson S, et al., 2018, Unilateral Inertial and Muscle Activity Sensor Fusion for Gait Cycle Progress Estimation, Pages: 1151-1156, ISSN: 2155-1774
This paper introduces a method that uses feedforward neural networks (FNNs) to estimate gait cycle progress from inertial and muscle activity sensors attached to one side of the lower body. Three-axis inertial measurement unit (IMU) readings from accelerometers and gyroscopes located above the outer ankle and knee were fused with mechanomyogram (MMG) sensor readings from the major muscle groups of the left leg. Validation was against ground truth gathered concurrently with VICON motion capture. Performance was characterised by rms error (Erms) and max error (Emax), averaged across four cross-validated trials, and enhanced by adjusting the number of sliding-window frames and hidden-layer neurons. The final configuration estimated gait cycle progress with an Erms of 1.6% and an Emax of 6.8%. This demonstrates the promise of such a method for control of unilateral robotic prostheses and exoskeletons, providing state estimation of gait progress from low-power sensors limited to one side of the lower body.
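The sliding-window front end of such a pipeline can be sketched as follows (window and step sizes are illustrative, not the paper's tuned values): overlapping frames of the multi-channel signal are flattened into the fixed-length feature vectors an FNN consumes.

```python
import numpy as np

def sliding_windows(samples, win, step):
    """Stack overlapping frames of an (N, channels) signal into flat
    feature vectors suitable as feedforward-network inputs."""
    frames = []
    for start in range(0, len(samples) - win + 1, step):
        frames.append(samples[start:start + win].ravel())
    return np.array(frames)

# 6 samples of 2 channels (e.g. one IMU axis and one MMG channel)
x = np.arange(12.0).reshape(6, 2)
F = sliding_windows(x, win=3, step=1)
print(F.shape)   # → (4, 6)
```

Each row of `F` would be paired with the concurrent gait-cycle progress label from motion capture for supervised training.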
Needham APH, Paszkiewicz FP, Alias MFM, et al., 2018, Subject-independent data pooling in classification of gait intent using mechanomyography on a transtibial amputee, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE COMPUTER SOC, Pages: 1806-1811, ISSN: 1050-4729
In this paper we present a new bioinspired bicondylar knee joint that requires a smaller actuator than a constant moment arm joint. Unlike existing prosthetic joints, the proposed mechanism replicates the elastic, rolling, and sliding elements of the human knee. As a result, the moment arm that the actuators impart on the joint changes as a function of the angle, producing the equivalent of a variable transmission. By employing a moment arm-angle profile similar to that of the human knee, the peak actuator force for stair ascent can be reduced by 12% compared to a constant moment arm joint, addressing critical impediments in weight and power for robotic limbs. Additionally, the knee employs mechanical 'ligaments' containing stretch sensors to replicate the neurosensory and compliant elements of the joint. We demonstrate experimentally how the ligament stretch can be used to estimate joint angle, thereby overcoming the difficulty of sensing position in a bicondylar joint.
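One simple way to realise the stretch-to-angle estimation described above is a least-squares calibration; this sketch is hypothetical (sensor count, coefficients, and data are synthetic, not the paper's), showing only the fitting step.

```python
import numpy as np

def fit_angle_model(stretch, angle):
    """Least-squares linear map from ligament stretch-sensor readings
    to joint angle: angle ~ stretch @ w + b."""
    X = np.column_stack([stretch, np.ones(len(stretch))])  # add intercept
    coef, *_ = np.linalg.lstsq(X, angle, rcond=None)
    return coef[:-1], coef[-1]

# synthetic calibration: two stretch sensors, known linear relation
rng = np.random.default_rng(0)
stretch = rng.uniform(0, 1, size=(50, 2))
angle = 30.0 * stretch[:, 0] - 10.0 * stretch[:, 1] + 5.0
w, b = fit_angle_model(stretch, angle)
print(w, b)   # recovers ≈ [30, -10] and 5
```

In practice the stretch-angle relationship of a bicondylar joint may be nonlinear, in which case the same calibration idea applies with polynomial or lookup-table features.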
Lai J, Woodward R, Alexandrov Y, et al., 2018, Performance of a wearable acoustic system for fetal movement discrimination, PLoS One, Vol: 13, Pages: 1-14, ISSN: 1932-6203
Fetal movements (FM) are a key factor in clinical management of high-risk pregnancies such as fetal growth restriction. While maternal perception of reduced FM can trigger self-referral to obstetric services, maternal sensation is highly subjective. Objective, reliable monitoring of fetal movement patterns outside clinical environs is not currently possible. A wearable and non-transmitting system capable of sensing fetal movements over extended periods of time would be extremely valuable, not only for monitoring individual fetal health, but also for establishing normal levels of movement in the population at large. Wearable monitors based on accelerometers have previously been proposed as a means of tracking FM, but such systems have difficulty separating maternal and fetal activity and have not matured to the level of clinical use. We introduce a new wearable system based on a novel combination of accelerometers and bespoke acoustic sensors as well as an advanced signal processing architecture to identify and discriminate between types of fetal movements. We validate the system with concurrent ultrasound tests on a cohort of 44 pregnant women and demonstrate that the garment is capable of both detecting and discriminating the vigorous, whole-body ‘startle’ movements of a fetus. These results demonstrate the promise of multimodal sensing for the development of a low-cost, non-transmitting wearable monitor for fetal movements.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.