Dahiya R, Yogeswaran N, Liu F, et al., 2019, Large-Area Soft e-Skin: The Challenges Beyond Sensor Designs, Proceedings of the IEEE, Vol: 107, Pages: 2016-2033, ISSN: 0018-9219
Huang Y, Burdet E, Cao L, et al., 2019, Performance evaluation of a foot interface to operate a robot arm, IEEE Robotics and Automation Letters, Vol: 4, Pages: 3302-3309, ISSN: 2377-3766
We developed a foot interface enabling an operator to control a robotic arm with four degrees of freedom in continuous direction and speed, for operating one of the multiple tools required during robot-aided surgery. In this letter, we first test whether this pedal interface can be used to carry out complex manipulation as is required in surgery. Second, we compare the performance of ten naive operators using this new interface and a traditional button interface providing axis-by-axis constant-speed control. Testing is carried out on geometrically complex path-following tasks similar to laparoscopic training. Movement precision, time and smoothness are analyzed. The results demonstrate that the continuous pedal interface can be used to control a robot in complex motion tasks. The subjects kept the average error rate at a low level of around 2.6% with both interfaces, but the pedal interface resulted in about 30% faster operation and 60% smoother movement, which indicates improved efficiency and user experience as compared with the button interface. A questionnaire shows that controlling the robot with the pedal interface was more intuitive, comfortable, and less tiring than with the button interface.
Arami A, Poulakakis-Daktylidis A, Tai YF, et al., 2019, Prediction of gait freezing in Parkinsonian patients: a binary classification augmented with time series prediction, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 27, Pages: 1909-1919, ISSN: 1534-4320
This paper presents a novel technique to predict freezing of gait in advanced-stage Parkinsonian patients using movement data from wearable sensors. A two-class approach is presented, which consists of autoregressive predictive models to project the feature time series, followed by machine-learning-based classifiers that discriminate freezing from non-freezing based on the predicted features. To implement and validate our technique, a set of time-domain and frequency-domain features was extracted from the 3D acceleration data, which was then analyzed using information-theoretic and feature-selection approaches to determine the most discriminative features. Predictive models were trained to predict the features from their past values, then fed into binary classifiers based on support vector machines and probabilistic neural networks, which were rigorously cross-validated. We compared the results of this approach with a three-class classification approach proposed in previous literature, in which a pre-freezing class was introduced and the problem of predicting the gait freezing incident was reduced to solving a three-class classification problem. The two-class approach resulted in a sensitivity of 93±4% and a specificity of 91±6%, with an expected prediction horizon of 1.72 seconds. Our subject-specific gait freezing prediction algorithm outperformed existing algorithms, yielded consistent results across different subjects and was robust against the choice of classifier, with slight variations in the selected features. In addition, we analyzed the merits and limitations of different families of features to predict gait freezing.
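The two-stage pipeline described in this abstract (feature projection followed by binary classification) can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the AR(2) coefficients, the feature values and the threshold classifier standing in for the SVM are all hypothetical.

```python
def ar2_predict(series, a1=1.2, a2=-0.3):
    """One-step-ahead AR(2) projection: x[t+1] ~ a1*x[t] + a2*x[t-1].
    In the paper the predictive models are fit on training data."""
    return a1 * series[-1] + a2 * series[-2]

def classify(features, threshold=0.5):
    """Toy binary classifier standing in for the SVM / probabilistic
    neural network: flags 'freezing' when the mean predicted feature
    exceeds a threshold."""
    return sum(features) / len(features) > threshold

# Each feature (e.g. a frequency-band power of 3D acceleration) is first
# projected one step ahead, then the predicted vector is classified.
feature_series = [[0.10, 0.20, 0.35], [0.20, 0.30, 0.50]]
predicted = [ar2_predict(s) for s in feature_series]
freezing_predicted = classify(predicted)
```

Because classification operates on predicted rather than measured features, the label refers to the near future, which is what yields a prediction horizon rather than a mere detection.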
Perez NP, Tokarchuk L, Burdet E, et al., 2019, Exploring user motor behaviour in bimanual interactive video games, ISSN: 2325-4270
Video games have proved very valuable in rehabilitation technologies. They guide therapy and keep patients engaged and motivated. However, in order to realize their full potential, a good understanding is required of the players' motor control. In particular, little is known regarding player behaviour in tasks demanding bimanual interaction. In this work, an experiment was designed to improve the understanding of such tasks. A driving game was developed in which players were asked to guide a differential wheeled robot (depicted as a rocket) along a trajectory. The rocket could be manipulated using an Xbox controller's triggers, each supplying torque to the corresponding side of the robot. Such a task is redundant, i.e. there exists an infinite number of input combinations to yield a given outcome. This allows players to strategize according to their own preference. Ten participants were recruited to play this game and their input data was logged for subsequent analysis. Two different motor strategies were identified: an "intermittent" input pattern versus a "continuous" one. It is hypothesized that the choice of behaviour depends on motor skill and minimization of effort and error. Further testing is necessary to determine the exact relationship between these aspects.
Lotay R, Mace M, Rinne P, et al., 2019, Optimizing self-exercise scheduling in motor stroke using Challenge Point Framework theory, 2019 IEEE 16th International Conference on Rehabilitation Robotics (ICORR), Publisher: IEEE, Pages: 435-440
An important challenge for technology-assisted self-led rehabilitation is how to automate appropriate schedules of exercise that are responsive to patients' needs and optimal for learning. While random scheduling has been found to be superior for long-term learning relative to fixed scheduling (contextual interference), this method is limited by not adequately accounting for task difficulty or skill acquisition during training. One method that combines contextual interference with adaptation of the challenge to the skill level of the player is Challenge Point Framework (CPF) theory. In this pilot study we test whether self-led motor training based upon CPF scheduling achieves faster learning than deterministic, fixed scheduling. Training was implemented in a mobile gaming device adapted for arm disability, allowing for grip and wrist exercises. We tested 11 healthy volunteers and 12 hemiplegic stroke patients in a single-blinded, randomized controlled trial without crossover. Results suggest that patients training with CPF-based adaptation performed better than those training with fixed conditions. This was not seen for healthy volunteers, whose performance was close to ceiling. Further data collection is required to determine the significance of the results.
Mehring C, Akselrod M, Bashford L, et al., 2019, Augmented manipulation ability in humans with six-fingered hands, Nature Communications, Vol: 10, Pages: 2401-2401, ISSN: 2041-1723
Neurotechnology attempts to develop supernumerary limbs, but can the human brain deal with the complexity of controlling an extra limb and derive advantages from it? Here, we analyzed the neuromechanics and manipulation abilities of two polydactyly subjects who each possess six fingers on their hands. Anatomical MRI of the supernumerary finger (SF) revealed that it is actuated by extra muscles and nerves, and fMRI identified a distinct cortical representation of the SF. In both subjects, the SF was able to move independently from the other fingers. Polydactyly subjects were able to coordinate the SF with their other fingers for more complex movements than five-fingered subjects, and so carry out with only one hand tasks normally requiring two hands. These results demonstrate that a body with significantly more degrees of freedom can be controlled by the human nervous system without causing motor deficits or impairments, and can instead provide superior manipulation abilities.
Mutalib SA, Mace M, Ong HT, et al., 2019, Influence of visual-coupling on bimanual coordination in unilateral spastic cerebral palsy., IEEE Int Conf Rehabil Robot, Vol: 2019, Pages: 1013-1018
Controlling two objects simultaneously during a bimanual task is a cognitively demanding process; both hands need to be temporally and spatially coordinated to achieve the shared task goal. Children with unilateral spastic cerebral palsy (USCP) exhibit severe sensory and motor impairments on one side of their body that make the process of coordinating bimanual movements particularly exhausting. Prior studies have shown that performing a visually-coupled task can reduce the cognitive interference associated with performing 'two tasks at once' in an uncoupled bimanual task. For children with USCP, who also present with cognitive delay, performing this type of task may allow them to process and plan their movement faster. We tested this hypothesis by examining the grip force control of 7 children with USCP during unimanual and visually-coupled bimanual tasks. Results demonstrated that despite the visual coupling, the bimanual coordination of these children remained impaired. However, there may be a potential benefit of a visually-coupled task in encouraging both hands to initiate in concert. The implications of the study for children with USCP are discussed.
Kager S, Hussain A, Cherpin A, et al., 2019, The effect of skill level matching in dyadic interaction on learning of a tracing task., IEEE Int Conf Rehabil Robot, Vol: 2019, Pages: 824-829
Dyadic interaction between humans has gained great research interest in recent years. The effects of factors that influence the interaction, such as roles or skill-level matching, are still not well understood. In this paper, we further investigated the effect of skill-level matching between partners on learning of a visuo-motor task. Understanding the effect of skill-level matching is crucial for applications in collaborative rehabilitation. Fifteen healthy participants were asked to trace a path while being subjected to a visuo-motor rotation (Novice). The Novices were paired with a partner, forming one of three Dyad Types: a) haptic connection to another Novice, b) haptic connection to an Expert (no visuo-motor rotation), or c) no haptic connection. The intervention consisted of a Familiarization phase, followed by a Training phase, in which the Novices learned the task in the respective Dyad Type, and a Test phase in which the learning was assessed (haptic connection removed, if any). Results suggest that learning the task with a haptic connection to an Expert was least beneficial. However, during the Training phase the dyads comprising an Expert clearly outperformed the dyads with matched skill levels. The results point in the same direction as previous findings in the literature and can be explained by current motor-learning theories. Future work needs to corroborate these preliminary results.
Farkhatdinov I, Ebert J, van Oort G, et al., 2019, Assisting human balance in standing with a robotic exoskeleton, IEEE Robotics and Automation Letters, Vol: 4, Pages: 414-421, ISSN: 2377-3766
This letter presents an experimental study on balance recovery control with a lower limb exoskeleton robot. Four participants were subjected to a perturbation during standing: a forward force impulse applied to their pelvis that forced them to step forward with the right leg for balance recovery. Trials with and without exoskeleton assistance to move the stepping leg's thigh were conducted to investigate the influence of the exoskeleton's control assistance on balancing performance and a potential adaptation. Analysis of the body kinematics and muscle activation demonstrates that the robotic assistance: first, was easy to use, did not require learning and did not inhibit the healthy stepping behavior; second, modified the stepping leg trajectories by increasing hip and knee movement; third, increased reaction speed and decreased the step duration; and finally, generally increased biceps femoris and rectus femoris muscle activity.
Takagi A, Hiroshima M, Nozaki D, et al., 2019, Individuals physically interacting in a group rapidly coordinate their movement by estimating the collective goal, eLife, Vol: 8, ISSN: 2050-084X
How can a human collective coordinate, for example to move a banquet table, when each person is influenced by the inertia of others who may be inferior at the task? We hypothesized that large groups cannot coordinate through touch alone, accruing to a zero-sum scenario where individuals inferior at the task hinder superior ones. We tested this hypothesis by examining how dyads, triads and tetrads, whose right hands were physically coupled together, followed a common moving target. Surprisingly, superior individuals followed the target accurately even when coupled to an inferior group, and the interaction benefits increased with the group size. A computational model shows that these benefits arose as each individual uses their respective interaction force to infer the collective's target and enhance their movement planning, which permitted coordination in seconds independent of the collective's size. By estimating the collective's movement goal, its individuals make physical interaction beneficial, swift and scalable.
Li Y, Carboni G, Gonzalez F, et al., 2019, Differential game theory for versatile physical human–robot interaction, Nature Machine Intelligence, Vol: 1, Pages: 36-43, ISSN: 2522-5839
The last decades have seen a surge of robots working in contact with humans. However, until now these contact robots have made little use of the opportunities offered by physical interaction and lack a systematic methodology to produce versatile behaviours. Here, we develop an interactive robot controller able to understand the control strategy of the human user and react optimally to their movements. We demonstrate that combining an observer with a differential game theory controller can induce a stable interaction between the two partners, precisely identify each other’s control law, and allow them to successfully perform the task with minimum effort. Simulations and experiments with human subjects demonstrate these properties and illustrate how this controller can induce different representative interaction strategies.
Mutalib SA, Mace M, Burdet E, 2019, Bimanual coordination during a physically coupled task in unilateral spastic cerebral palsy children, Journal of NeuroEngineering and Rehabilitation, Vol: 16, ISSN: 1743-0003
Background: Single-object bimanual manipulation, or physically-coupled bimanual tasks, are ubiquitous in daily life. However, the predominant focus of previous studies has been on uncoupled bimanual actions, where the two hands act independently to manipulate two disconnected objects. In this paper, we explore interlimb coordination among children with unilateral spastic cerebral palsy (USCP) by investigating upper limb motor control during a single-object bimanual lifting task. Methods: 15 children with USCP and 17 typically developing (TD) children performed a simple single-object bimanual lifting task. The object was an instrumented cube that can record the contact force on each of its faces alongside estimating its trajectory during a prescribed two-handed lifting motion. The subject's performance was measured in terms of the duration of individual phases, linearity and monotonicity of the grasp-to-load force synergy, interlimb force asymmetry, and movement smoothness. Results: Similar to their TD counterparts, USCP subjects were able to produce a linear grasp-to-load force synergy. However, they demonstrated difficulties in producing monotonic forces and generating smooth movements. No impairment of anticipatory control was observed within the USCP subjects. However, our analysis showed that the USCP subjects shifted the weight of the cube onto their more-abled side, potentially to minimise the load on the impaired side, which suggests a developed strategy of compensating for inter-limb asymmetries, such as muscle strength. Conclusion: Bimanual interaction with a single mutual object has the potential to facilitate anticipation and sequencing of force control in USCP children, unlike previous studies which showed deficits during uncoupled bimanual actions. We suggest that this difference could be partly due to the provision of adequate cutaneous and kinaesthetic information gathered from the dynamic exchange of forces between the two hands, mediated through the physical coupling.
van der Kooij H, van Asseldonk E, van Oort G, et al., 2019, Symbitron: Symbiotic man-machine interactions in wearable exoskeletons to enhance mobility for paraplegics, Biosystems and Biorobotics, Pages: 361-364
The main goal of the Symbitron project was to develop a safe, bio-inspired, personalized wearable exoskeleton that enables SCI patients to walk without additional assistance, by complementing their remaining motor function. Here we give an overview of the major achievements of the project.
Abdi E, Bouri M, Burdet E, et al., 2018, Development and Comparison of Foot Interfaces for Controlling a Robotic Arm in Surgery, IEEE International Conference on Robotics and Biomimetics (ROBIO), Publisher: IEEE, Pages: 414-420
Balasubramanian S, Garcia-Cossio E, Birbaumer N, et al., 2018, Is EMG a viable alternative to BCI for detecting movement intention in severe stroke?, IEEE Transactions on Biomedical Engineering, Vol: 65, Pages: 2790-2797, ISSN: 0018-9294
Objective: In light of the shortcomings of current restorative brain-computer interfaces (BCI), this study investigated the possibility of using EMG to detect hand/wrist extension movement intention to trigger robot-assisted training in individuals without residual movements. Methods: We compared movement intention detection using an EMG detector with a sensorimotor-rhythm-based EEG-BCI using only ipsilesional activity. This was carried out on data of 30 severely affected chronic stroke patients from a randomized controlled trial using an EEG-BCI for robot-assisted training. Results: The results indicate the feasibility of using EMG to detect movement intention in this severely handicapped population; the probability of detecting EMG when patients attempted to move was higher (p < 0.001) than at rest. Interestingly, 22 out of 30 (or 73%) patients had sufficiently strong EMG in their finger/wrist extensors. Furthermore, in patients with detectable EMG, there was poor agreement between the EEG and EMG intent detectors, which indicates that these modalities may detect different processes. Conclusion: A substantial segment of severely affected stroke patients may benefit from EMG-based assisted therapy. Compared to EEG, a surface EMG interface requires less preparation time, is easier to don/doff, and is more compact. Significance: This study shows that a large proportion of severely affected stroke patients have residual EMG, which yields a direct and practical way to trigger robot-assisted training.
Donadio A, Whitehead K, Gonzalez F, et al., 2018, A novel sensor design for accurate measurement of facial somatosensation in pre-term infants, PLoS ONE, Vol: 13, ISSN: 1932-6203
Facial somatosensory feedback is critical for breastfeeding in the first days of life. However, its development has never been investigated in humans. Here we develop a new interface to measure facial somatosensation in newborn infants. The novel system measures neuronal responses to touching the subject's face by synchronously recording scalp electroencephalography (EEG) and the force applied by the experimenter. It is based on a dedicated force transducer that can be worn on the finger underneath a clinical nitrile glove and linked to a commercial EEG acquisition system. The calibrated device measures the pressure applied by the investigator when tapping the skin concurrently with the resulting brain response. With this system, we were able to demonstrate that taps of 192 mN (mean) reliably elicited facial somatosensory responses in 7 pre-term infants. These responses had a time course similar to those following limb stimulation, but a more lateral topographical distribution consistent with body representations in primary somatosensory areas. The method introduced can therefore be used to reliably measure facial somatosensory responses in vulnerable infants.
Borzelli D, Cesqui B, Berger DJ, et al., 2018, Muscle patterns underlying voluntary modulation of co-contraction, PLoS ONE, Vol: 13, ISSN: 1932-6203
Manipulative actions involving unstable interactions with the environment require controlling mechanical impedance through muscle co-contraction. While much research has focused on how the central nervous system (CNS) selects the muscle patterns underlying a desired movement or end-point force, the coordination strategies used to achieve a desired end-point impedance have received considerably less attention. We recorded isometric forces at the hand and electromyographic (EMG) signals in subjects performing a reaching task with an external disturbance. In a virtual environment, subjects displaced a cursor by applying isometric forces and were instructed to reach targets in 20 spatial locations. The motion of the cursor was then perturbed by disturbances whose effects could be attenuated by increasing co-contraction. All subjects could voluntarily modulate co-contraction when disturbances of different magnitudes were applied. For most muscles, activation was modulated by target direction according to a cosine tuning function with an offset and an amplitude increasing with disturbance magnitude. Co-contraction was characterized by projecting the muscle activation vector onto the null space of the EMG-to-force mapping. Even in the baseline, the magnitude of the null space projection was larger than the minimum magnitude required for non-negative muscle activations. Moreover, the increase in co-contraction was not obtained by scaling the baseline null space projection, scaling the difference between the null space projections in any block and the projection of the non-negative minimum-norm muscle vector, or scaling the difference between the null space projections in the perturbed blocks and the baseline null space projection. However, the null space projections in the perturbed blocks were obtained by linear combination of the baseline null space projection and the muscle activation used to increase co-contraction without generating any force. The failure of scaling rul…
Li Y, Ganesh G, Jarrasse N, et al., 2018, Force, impedance, and trajectory learning for contact tooling and haptic identification, IEEE Transactions on Robotics, Vol: 34, Pages: 1170-1182, ISSN: 1552-3098
Humans can skilfully use tools and interact with the environment by adapting their movement trajectory, contact force, and impedance. Motivated by the human versatility, we develop here a robot controller that concurrently adapts feedforward force, impedance, and reference trajectory when interacting with an unknown environment. In particular, the robot's reference trajectory is adapted to limit the interaction force and maintain it at a desired level, while feedforward force and impedance adaptation compensates for the interaction with the environment. An analysis of the interaction dynamics using Lyapunov theory yields the conditions for convergence of the closed-loop interaction mediated by this controller. Simulations exhibit adaptive properties similar to human motor adaptation. The implementation of this controller for typical interaction tasks including drilling, cutting, and haptic exploration shows that this controller can outperform conventional controllers in contact tooling.
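The concurrent adaptation described above can be illustrated for a single degree of freedom. This is only a schematic of the idea (error-driven growth of feedforward force and stiffness, with the reference shifted to regulate contact force); the gains and update rules below are hypothetical, not the controller derived in the paper.

```python
def adapt_step(state, e, f_int, f_des, alpha=0.1, beta=0.05, gamma=0.01):
    """One adaptation cycle for a 1-DoF contact task.
    e: trajectory tracking error, f_int: measured interaction force,
    f_des: desired contact force. Gains alpha/beta/gamma are illustrative."""
    u_ff, k, r = state
    u_ff = u_ff + alpha * e          # feedforward force compensates systematic error
    k = k + beta * abs(e)            # impedance (stiffness) grows with disturbance
    r = r - gamma * (f_int - f_des)  # reference retreats to limit interaction force
    return u_ff, k, r

# Run a few cycles with a constant error and excess contact force,
# purely to show the direction each quantity adapts in.
state = (0.0, 10.0, 0.0)             # initial feedforward, stiffness, reference offset
for _ in range(50):
    state = adapt_step(state, e=0.02, f_int=6.0, f_des=5.0)
u_ff, k, r = state
```

In the actual controller these updates run concurrently inside the control loop, and Lyapunov analysis provides the convergence conditions mentioned in the abstract.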
Ogrinc M, Farkhatdinov I, Walker R, et al., 2018, Sensory integration of apparent motion speed and vibration magnitude, IEEE Transactions on Haptics, Vol: 11, Pages: 455-463, ISSN: 1939-1412
Tactile apparent motion can display directional information in an intuitive way. It can for example be used to give directions to visually impaired individuals, or for waypoint navigation while cycling on busy streets, when vision or audition should not be loaded further. However, although humans can detect very short tactile patterns, discriminating between similar motion speeds has been shown to be difficult. Here we develop and investigate a method where the speed of tactile apparent motion around the user's wrist is coupled with vibration magnitude. This redundant coupling is used to produce tactile patterns from slow & weak to fast & strong. We compared the just noticeable difference (JND) of the coupled and the individual variables. The results show that the perception of the coupled variable can be characterised by a JND smaller than the JNDs of the individual variables. This allowed us to create short tactile patterns (tactons) for display of direction and speed, which can be distinguished significantly better than tactons based on motion alone. Additionally, most subjects were also able to identify the coupled-variable tactons better than the magnitude-based tactons.
Dall'Orso S, Steinweg J, Allievi AG, et al., 2018, Somatotopic mapping of the developing sensorimotor cortex in the preterm human brain, Cerebral Cortex, Vol: 28, Pages: 2507-2515, ISSN: 1047-3211
In the mature mammalian brain, the primary somatosensory and motor cortices are known to be spatially organized such that neural activity relating to specific body parts can be somatotopically mapped onto an anatomical "homunculus". This organization creates an internal body representation which is fundamental for precise motor control, spatial awareness and social interaction. Although it is unknown when this organization develops in humans, animal studies suggest that it may emerge even before the time of normal birth. We therefore characterized the somatotopic organization of the primary sensorimotor cortices using functional MRI and a set of custom-made robotic tools in 35 healthy preterm infants aged from 31+6 to 36+3 weeks postmenstrual age. Functional responses induced by somatosensory stimulation of the wrists, ankles, and mouth had a distinct spatial organization as seen in the characteristic mature homunculus map. In comparison to the ankle, activation related to wrist stimulation was significantly larger and more commonly involved additional areas including the supplementary motor area and ipsilateral sensorimotor cortex. These results are in keeping with early intrinsic determination of a somatotopic map within the primary sensorimotor cortices. This may explain why acquired brain injury in this region during the preterm period cannot be compensated for by cortical reorganization and therefore can lead to long-lasting motor and sensory impairment.
Ogrinc M, Farkhatdinov I, Walker R, et al., 2018, Horseback riding therapy for a deafblind individual enabled by a haptic interface, Assistive Technology: The Official Journal of RESNA, Vol: 30, Pages: 143-150, ISSN: 1949-3614
We present a haptic interface to help deafblind people practice horseback riding as a recreational and therapeutic activity. Horseback riding is a form of therapy which can improve self-esteem and the sensation of independence. It has been shown to benefit people with various medical conditions, including autism. However, in the case of deafblind riders, an interpreter must stand by at all times to communicate with the rider by touch. We developed a simple interface that enables deafblind people to enjoy horseback riding while the instructor remotely provides cues, which improves their independence. Experiments demonstrated that an autistic deafblind individual exhibits similar responses to navigational cues as an unimpaired rider. Motivation is an important factor in therapy, and is frequently determinant of its outcome; therefore, the user's attitude toward the therapy methods is key. The answers to questionnaires filled in by the rider, family, and instructor show that our technique gives the rider a greater sense of independence and more joy compared to standard riding where the instructor walks along with the horse.
Takagi A, Usai F, Ganesh G, et al., 2018, Haptic communication between humans is tuned by the hard or soft mechanics of interaction, PLoS Computational Biology, Vol: 14, ISSN: 1553-734X
To move a hard table together, humans may coordinate by following the dominant partner's motion [1-4], but this strategy is unsuitable for a soft mattress where the perceived forces are small. How do partners readily coordinate in such differing interaction dynamics? To address this, we investigated how pairs tracked a target using flexion-extension of their wrists, which were coupled by a hard, medium or soft virtual elastic band. Tracking performance monotonically increased with a stiffer band for the worse partner, who had higher tracking error, at the cost of the skilled partner's muscular effort. This suggests that the worse partner followed the skilled one's lead, but simulations show that the results are better explained by a model where partners share movement goals through the forces, whilst the coupling dynamics determine the capacity of communicable information. This model elucidates the versatile mechanism by which humans can coordinate during both hard and soft physical interactions to ensure maximum performance with minimal effort.
Bentley P, Burdet E, Rinne P, et al., 2018, A force measurement mechanism, 15544596
Mace M, Kinany N, Rinne P, et al., 2017, Balancing the playing field: collaborative gaming for physical training., Journal of NeuroEngineering and Rehabilitation, Vol: 14, ISSN: 1743-0003
BACKGROUND: Multiplayer video games promoting exercise-based rehabilitation may facilitate motor learning, by increasing motivation through social interaction. However, a major design challenge is to enable meaningful inter-subject interaction, whilst allowing for significant skill differences between players. We present a novel motor-training paradigm that allows real-time collaboration and performance enhancement, across a wide range of inter-subject skill mismatches, including disabled vs. able-bodied partnerships. METHODS: A virtual task, consisting of a dynamic ball on a beam, is controlled at each end using independent digital force-sensing handgrips. Interaction is mediated through simulated physical coupling and locally-redundant control. Game performance was measured in 16 healthy-healthy and 16 patient-expert dyads, where patients were hemiparetic stroke survivors using their impaired arm. Dual-player was compared to single-player performance, in terms of score, target tracking, stability, effort and smoothness; and questionnaires probing user experience and engagement. RESULTS: Performance of less-able subjects (as ranked from single-player ability) was enhanced by dual-player mode, by an amount proportionate to the partnership's mismatch. The more-abled partners' performances decreased by a similar amount. Such zero-sum interactions were observed for both healthy-healthy and patient-expert interactions. Dual-player was preferred by the majority of players independent of baseline ability and subject group; healthy subjects also felt more challenged, and patients more skilled. CONCLUSION: This is the first demonstration of implicit skill balancing in a truly collaborative virtual training task leading to heightened engagement, across both healthy subjects and stroke patients.
Zhou S-H, Tan Y, Oetomo D, et al., 2017, Modeling of Endpoint Feedback Learning Implemented Through Point-to-Point Learning Control, IEEE Transactions on Control Systems Technology, Vol: 25, Pages: 1576-1585, ISSN: 1063-6536
Farkhatdinov I, Roehri N, Burdet E, 2017, Anticipatory detection of turning in humans for intuitive control of robotic mobility assistance, Bioinspiration and Biomimetics, Vol: 12, ISSN: 1748-3182
Many wearable lower-limb robots for walking assistance have been developed in recent years. However, it remains unclear how they can be commanded in an intuitive and efficient way by their user. In particular, providing robotic assistance to neurologically impaired individuals in turning remains a significant challenge. The control should be safe for the users and their environment, yet yield sufficient performance and enable natural human-machine interaction. Here, we propose using the head and trunk anticipatory behaviour to detect the intention to turn in a natural, non-intrusive way, and to use it for triggering turning movement in a robot for walking assistance. We therefore study head and trunk orientation during locomotion of healthy adults, and investigate upper body anticipatory behaviour during turning. The collected walking and turning kinematics data are clustered using the k-means algorithm, and cross-validation tests and the k-nearest neighbours method are used to evaluate the performance of turning detection during locomotion. Tests with seven subjects exhibited accurate turning detection. The head anticipated turning by 400–500 ms on average across all subjects. Overall, the proposed method detected turning 300 ms after its initiation and 1230 ms before the turning movement was completed. Using head anticipatory behaviour enabled turning to be detected about 100 ms faster than with pelvis orientation measurements alone. Finally, it was demonstrated that the proposed turning detection can improve the quality of human-robot interaction by improving control accuracy and transparency.
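The detection step in this abstract, classifying upper-body orientation samples against clusters learned from walking data, can be sketched with a nearest-neighbour query. All features, values and labels below are hypothetical illustrations, not the authors' pipeline.

```python
def nearest_neighbour_label(sample, training):
    """1-nearest-neighbour classification (a k=1 stand-in for the
    k-nearest-neighbours step used in the paper)."""
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(training, key=lambda t: sq_dist(t[0], sample))[1]

# Features: (head yaw, pelvis yaw) in degrees; labels would come from
# k-means clustering of recorded walking and turning kinematics.
training = [((2.0, 1.0), "straight"), ((1.5, 0.5), "straight"),
            ((25.0, 5.0), "turning"), ((30.0, 12.0), "turning")]

# Head rotation anticipates the turn, so a large head yaw combined with a
# still-small pelvis yaw signals that a turn is being initiated.
label = nearest_neighbour_label((22.0, 3.0), training)
```

Classifying on head orientation in this way is what buys the ~100 ms earlier detection relative to pelvis-only measurements reported above.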
Hussain A, Balasubramanian S, Roach N, et al., 2017, SITAR: a system for independent task-oriented assessment and rehabilitation, Journal of Rehabilitation and Assistive Technologies Engineering, Vol: 4, Pages: 2055668317729637, ISSN: 2055-6683
Introduction: Over recent years, task-oriented training has emerged as a dominant approach in neurorehabilitation. This article presents a novel, sensor-based system for independent task-oriented assessment and rehabilitation (SITAR) of the upper limb. Methods: The SITAR is an ecosystem of interactive devices including a touch- and force-sensitive tabletop and a set of intelligent objects enabling functional interaction. In contrast to most existing sensor-based systems, SITAR provides natural training of visuomotor coordination through collocated visual and haptic workspaces alongside multimodal feedback, facilitating learning and its transfer to real tasks. We illustrate the possibilities offered by the SITAR for sensorimotor assessment and therapy through pilot assessment and usability studies. Results: The pilot data from the assessment study demonstrate how the system can be used to assess different aspects of upper-limb reaching, pick-and-place and sensory tactile resolution tasks. The pilot usability study indicates that patients are able to train arm-reaching movements independently using the SITAR with minimal involvement of the therapist, and that they were motivated to pursue SITAR-based therapy. Conclusion: SITAR is a versatile, non-robotic tool that can be used to implement a range of therapeutic exercises and assessments for different types of patients, and it is particularly well suited to task-oriented training.
Abdi E, Bouri M, Burdet E, et al., 2017, Positioning the endoscope in laparoscopic surgery by foot: Influential factors on surgeons' performance in virtual trainer, 39th Annual International Conference of the IEEE-Engineering-in-Medicine-and-Biology-Society (EMBC), Publisher: IEEE, Pages: 3944-3948, ISSN: 1094-687X
Mace M, Guy S, Hussain A, et al., 2017, Validity of a sensor-based table-top platform to measure upper limb function, International Conference on Rehabilitation Robotics (ICORR), Publisher: IEEE, Pages: 652-657, ISSN: 1945-7898
Arami A, Tagliamonte NL, Tamburella F, et al., 2017, A simple tool to measure spasticity in spinal cord injury subjects., 2017 International Conference on Rehabilitation Robotics (ICORR), Publisher: IEEE
This work presents a wearable device and algorithms for quantitative modelling of joint spasticity, and their application in a pilot group of subjects with different levels of spinal cord injury. The device comprises lightweight instrumented handles that measure the interaction force between the subject and the physical therapist performing the tests, together with EMG sensors and inertial measurement units that measure muscle activity and joint kinematics. The experimental tests consisted of passive movements, at different velocities, of the body segments where spasticity was expected. Tonic stretch reflex thresholds and their velocity modulation factor are computed as quantitative indices of spasticity, using the kinematics data at the onset of spasms detected by thresholding the EMG data. The technique was applied to two spinal cord injury subjects. The proposed method allowed the analysis of spasticity at both muscle and joint levels, and the results obtained are in line with the expert diagnosis and qualitative spasticity characterisation of each individual.
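The pipeline sketched in the abstract (detect reflex onset by thresholding the EMG envelope, then read the joint angle and velocity at that instant as the stretch reflex threshold) can be illustrated as follows. This is a simplified sketch on synthetic data: the baseline-plus-k-standard-deviations onset rule is a common heuristic, not necessarily the exact threshold rule used in the paper, and all signal parameters are assumptions.

```python
import numpy as np

def spasm_onset_index(emg_envelope, baseline_n=100, k=5.0):
    """Return the first sample where the rectified EMG envelope exceeds
    mean + k*std of a quiet baseline window, or None if never exceeded."""
    base = emg_envelope[:baseline_n]
    thr = base.mean() + k * base.std()
    above = np.nonzero(emg_envelope > thr)[0]
    return int(above[0]) if above.size else None

def tonic_stretch_reflex_threshold(angle, velocity, onset):
    """Dynamic stretch reflex threshold: joint angle and velocity
    at the detected reflex onset."""
    return angle[onset], velocity[onset]

# Synthetic passive stretch: constant-velocity angle ramp, with an
# EMG burst (simulated stretch reflex) starting partway through.
t = np.linspace(0, 2, 400)                        # s
angle = 10 + 20 * t                               # deg, passive ramp
velocity = np.full_like(t, 20.0)                  # deg/s, constant
emg = np.abs(np.random.default_rng(1).normal(0.02, 0.005, t.size))
emg[250:] += 0.5                                  # simulated reflex burst

onset = spasm_onset_index(emg)
print(onset, tonic_stretch_reflex_threshold(angle, velocity, onset))
```

Repeating this at several stretch velocities and regressing the threshold angle against velocity would yield the velocity modulation factor mentioned in the abstract.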
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.