Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 · a.faisal · Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



Publications


233 results found

Lin C-H, Faisal AA, 2018, Decomposing sensorimotor variability changes in ageing and their connection to falls in older people, Scientific Reports, Vol: 8, ISSN: 2045-2322

The relationship between sensorimotor variability and falls in older people has not been well investigated. We developed a novel task sharing the biomechanics of obstacle negotiation to quantify sensorimotor variability related to locomotion across age. We found that sensorimotor variability in foot placement increases continuously with age. We then applied sensory psychophysics to pinpoint the visual and somatosensory systems associated with sensorimotor variability. We showed that increased sensory variability, specifically increased proprioceptive variability, is the vital cause of more variable foot placement in older people (greater than 65 years). Notably, older participants relied more on vision to judge their own foot's height compared to the young, suggesting a shift in multisensory integration strategy to compensate for degenerated proprioception. We further modelled the probability of tripping over based on the relationship between sensorimotor variability and age and found a correspondence between model prediction and community-based data. We reveal increased sensorimotor variability, modulated by sensation precision, as a potentially vital mechanism behind the raised rate of tripping and thus falls in older people. Analysis of sensorimotor variability and its specific components may be useful for evaluating fall risk and rehabilitation targets.

Journal article
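As an illustration of the tripping-over model described above: if foot clearance over an obstacle is taken to be Gaussian with a standard deviation that grows with age, the per-step tripping probability is the Gaussian tail below the obstacle height. A minimal sketch follows; all numbers (mean clearance, obstacle height, growth rate of variability) are illustrative assumptions, not the paper's fitted values.

```python
# Tripping probability as a Gaussian tail: P(clearance < obstacle height),
# with clearance variability (SD) increasing with age.  Illustrative only.
import numpy as np
from scipy.stats import norm

def p_trip(age, obstacle_height_cm=2.0, mean_clearance_cm=5.0,
           sd_at_20=0.8, sd_slope=0.02):
    """P(foot clearance < obstacle) with SD growing linearly with age."""
    sd = sd_at_20 + sd_slope * (age - 20)   # assumed linear growth of SD
    return norm.cdf(obstacle_height_cm, loc=mean_clearance_cm, scale=sd)

for age in (25, 45, 65, 85):
    print(f"age {age}: P(trip per step) ≈ {p_trip(age):.4f}")
```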

Shafti A, Orlov P, Faisal AA, 2018, Gaze-based, context-aware robotic system for assisted reaching and grasping

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions to achieve high-level tasks. This results in the user simply having to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions and full implementation of pick-and-place tasks in 96%, and pick-and-pour tasks in 76% of cases. Finally we present a discussion on our results and what future work is needed to improve the system.

Working paper
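The grammars-based sequencing of actions mentioned above can be illustrated with a toy rule set: a gaze-selected high-level task expands into a sequence of low-level robot primitives. The task names and primitives below are hypothetical placeholders, not the authors' grammar.

```python
# Toy grammar: each high-level task (selected via gaze) expands into an
# ordered list of low-level motion primitives for the robot to execute.
GRAMMAR = {
    "pick_and_place": ["reach", "grasp", "reach", "release"],
    "pick_and_pour":  ["reach", "grasp", "lift", "tilt", "untilt", "release"],
}

def expand(task):
    if task not in GRAMMAR:
        raise ValueError(f"unknown task: {task}")
    return GRAMMAR[task]

print(expand("pick_and_pour"))  # ['reach', 'grasp', 'lift', 'tilt', ...]
```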

Li M, Songur N, Orlov P, Leutenegger S, Faisal AA et al., 2018, Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos

Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained every-day tasks. This is especially important in ego-centric tasks, where obtaining 3-dimensional information is both limiting and challenging, with current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an ego-centric RGB-D video stream, with 3D gaze point estimation from head-mounted eye-tracking glasses. We augment existing work in Semantic Simultaneous Localization And Mapping (Semantic SLAM) with collected gaze vectors. Our system can then find and track objects both inside and outside the user's field of view in 3D from multiple perspectives with reasonable accuracy. We validate our concept by producing a semantic map from images of the NYUv2 dataset while simultaneously estimating gaze position and gaze classes from recorded gaze data of the dataset images.

Working paper

Subramanian M, Shafti A, Faisal A, 2018, Mechanomyography based closed-loop Functional Electrical Stimulation cycling system, BioRob 2018- IEEE International Conference on Biomedical Robotics and Biomechatronics

Conference paper

Auepanwiriyakul C, Harston A, Orlov P, Shafti A, Faisal AA et al., 2018, Semantic Fovea: Real-time annotation of ego-centric videos with gaze context, ACM Symposium on Eye Tracking Research and Applications (ETRA), Publisher: ASSOC COMPUTING MACHINERY

Conference paper

Orlov P, Shafti A, Auepanwiriyakul C, Songur N, Faisal AA et al., 2018, A Gaze-Contingent Intention Decoding Engine for human augmentation, ACM Symposium on Eye Tracking Research and Applications (ETRA), Publisher: ASSOC COMPUTING MACHINERY

Conference paper

Li L, Komorowski M, Faisal AA, 2018, The actor search tree critic (ASTC) for off-policy POMDP learning in medical decision making

Off-policy reinforcement learning enables near-optimal policies from suboptimal experience, thereby providing opportunities for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with a partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted to the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effects, by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of the infinite number of possible belief states, which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through a heuristic search tree that tightly maintains lower and upper bounds of the true value of a belief. We further resort to function approximations to update value bound estimates, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy, trained from real intensive care unit data, is capable of dictating dosing of vasopressors and intravenous fluids for sepsis patients that leads to the best patient outcomes.

Working paper
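The core recursion behind the belief representation above is the Bayes filter for POMDPs: b'(s') ∝ O(o|s') Σ_s T(s'|s,a) b(s). The paper maintains a Gaussian mixture over continuous patient states; the discrete toy version below only illustrates the update, with placeholder transition and observation matrices.

```python
# Discrete POMDP belief update: predict with the transition model for the
# chosen action, then correct with the observation likelihood and renormalise.
import numpy as np

def belief_update(b, T_a, O, obs):
    """b: (S,) prior belief; T_a: (S, S) transitions for action a
    (rows: s, cols: s'); O: (S, n_obs) observation likelihoods."""
    predicted = b @ T_a                  # sum_s T(s'|s,a) b(s)
    posterior = predicted * O[:, obs]    # weight by O(obs | s')
    return posterior / posterior.sum()   # renormalise

b = np.array([0.5, 0.3, 0.2])            # belief over 3 toy patient states
T_a = np.array([[0.80, 0.15, 0.05],
                [0.10, 0.80, 0.10],
                [0.05, 0.15, 0.80]])
O = np.array([[0.7, 0.3],
              [0.4, 0.6],
              [0.1, 0.9]])
print(belief_update(b, T_a, O, obs=1))
```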

Teh T, Auepanwiriyakul C, Harston JA, Faisal AA et al., 2018, Generalised Structural CNNs (SCNNs) for time series data with arbitrary graph topology

Deep Learning methods, specifically convolutional neural networks (CNNs), have seen a lot of success in the domain of image-based data, where the data offers a clearly structured topology in the regular lattice of pixels. This 4-neighbourhood topological simplicity makes the application of convolutional masks straightforward for time series data, such as video applications, but many high-dimensional time series data are not organised in regular lattices, and instead values may have adjacency relationships with non-trivial topologies, such as small-world networks or trees. In our application case, human kinematics, it is currently unclear how to generalise convolutional kernels in a principled manner. Therefore we define and implement here a framework for general graph-structured CNNs for time series analysis. Our algorithm automatically builds convolutional layers using the specified adjacency matrix of the data dimensions and convolutional masks that scale with the hop distance. In the limit of a lattice topology our method produces the well-known image convolutional masks. We test our method first on synthetic data of arbitrarily-connected graphs and on human hand motion capture data, where the hand is represented by a tree capturing the mechanical dependencies of the joints. We are able to demonstrate, amongst other things, that inclusion of the graph structure of the data dimensions improves model prediction significantly, when compared against a benchmark CNN model with only time convolution layers.

Working paper
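The key construction above, convolutional masks that scale with hop distance on an arbitrary adjacency matrix, can be sketched directly: compute all-pairs hop distances by breadth-first expansion and threshold them to obtain the kernel's support. The 5-node tree below is a toy stand-in for the hand skeleton.

```python
# Build a kernel-support mask from an adjacency matrix: entry (i, j) is True
# when node j lies within k hops of node i, so a graph-convolutional weight
# connecting i and j is allowed.
import numpy as np

def hop_distances(A):
    """All-pairs hop distances by breadth-first frontier expansion."""
    n = A.shape[0]
    dist = np.full((n, n), np.inf)
    np.fill_diagonal(dist, 0)
    reach = np.eye(n, dtype=bool)        # nodes already assigned a distance
    frontier = np.eye(n, dtype=bool)     # nodes reached at the last step
    d = 0
    while frontier.any():
        d += 1
        frontier = ((frontier.astype(int) @ A) > 0) & ~reach
        dist[frontier] = d
        reach |= frontier
    return dist

A = np.array([[0, 1, 0, 0, 0],           # toy 5-node tree
              [1, 0, 1, 1, 0],
              [0, 1, 0, 0, 1],
              [0, 1, 0, 0, 0],
              [0, 0, 1, 0, 0]])
mask_2hop = hop_distances(A) <= 2        # kernel support within 2 hops
print(mask_2hop.astype(int))
```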

Lluch Hernandez A, Hernando Melia C, 2018, Foreword, Future Oncology, Vol: 14, ISSN: 1479-6694

Journal article

Ortega P, Colas C, Faisal A, 2018, Convolutional neural network, personalised, closed-loop Brain-Computer Interfaces for multi-way control mode switching in real-time

Exoskeletons and robotic devices are for many motor-disabled people the only way to interact with their environment. Our lab previously developed a gaze-guided assistive robotic system for grasping. It is well known that the same natural task can require different interactions described by different dynamical systems, which would require different robotic controllers and their selection by the user in a self-paced way. Therefore, we investigated different ways to achieve transitions between multiple states, finding that eye blinks were the most reliable to transition from ‘off’ to ‘control’ modes (binary classification) compared to voice and electromyography. In this paper we expand on this work by investigating brain signals as sources for control mode switching. We developed a Brain-Computer Interface (BCI) that allows users to switch between four control modes in a self-paced way in real time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (ConvNets), known for their capability to find the optimal features for a classification task, which we hypothesised would add flexibility to the system in terms of which mental activities the user could perform to control it. We tested our system using the Cybathlon BrainRunners computer game, which represents all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), composed of a convolutional layer, a fully connected layer and a sigmoid classification layer, is able to classify 4 mental activities that the user chose to perform. For the user's preferred mental activities, we ran and validated the system online and retrained the system using online-collected EEG data. We achieved 47.6% accuracy in online operation in the 4-way classification task. In part…

Working paper
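A plausible PyTorch rendering of the SmallNet architecture named above (one convolutional layer, one fully connected layer, one sigmoid classification layer) is sketched below. Channel count, window length and kernel size are assumptions for illustration, not the authors' exact settings.

```python
# SmallNet sketch: conv layer -> fully connected layer -> sigmoid outputs,
# applied to EEG windows of shape (channels, time samples).
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    def __init__(self, n_eeg_channels=32, n_samples=512, n_classes=4):
        super().__init__()
        # spatio-temporal convolution spanning all EEG channels at once
        self.conv = nn.Conv2d(1, 8, kernel_size=(n_eeg_channels, 25))
        n_feat = 8 * (n_samples - 25 + 1)
        self.fc = nn.Linear(n_feat, n_classes)

    def forward(self, x):                 # x: (batch, 1, channels, time)
        h = torch.relu(self.conv(x))
        h = h.flatten(start_dim=1)
        return torch.sigmoid(self.fc(h))  # per-class probabilities

net = SmallNet()
probs = net(torch.randn(2, 1, 32, 512))  # two dummy EEG windows
print(probs.shape)                       # torch.Size([2, 4])
```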

Raymond L-A, Piccini M, Subramanian M, Pavel O, Zito G, Faisal A et al., 2018, Natural Gaze Data-Driven Wheelchair

Natural eye movements during navigation have long been considered to reflect planning processes and link to the user's future action intention. We investigate here whether natural eye movements during joystick-based navigation of wheelchairs follow identifiable patterns that are predictive of joystick actions. To place eye movements in context with driving intentions, we combine our eye tracking with a 3D depth camera system, which allows us to identify which eye movements have the floor as gaze target and distinguish them from other non-navigation-related eye movements. We find consistent patterns of eye movements on the floor predictive of steering commands issued by the driver in all subjects. Based on this empirical data we developed two gaze decoders using supervised machine learning techniques and enabled each of these drivers to then steer the wheelchair by imagining they were using a joystick to trigger appropriate natural eye movements via motor imagery. We show that all subjects are able to navigate their wheelchair "by eye", learning to do so within a span of minutes. Our work shows that simple gaze-based decoding, without the need for artificial user interfaces, suffices to restore mobility and increase participation in daily life.

Journal article
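A toy version of the gaze-decoder idea above: gaze points that land on the floor (identified via the depth camera) are mapped to steering commands by a supervised classifier. Features, labels and the classifier choice below are synthetic placeholders, not the paper's decoders.

```python
# Map floor-gaze points (x, y in the wheelchair frame) to steering commands
# (0/1/2 = left / forward / right) with a supervised classifier.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)
gaze_xy = rng.uniform([-1, 0.5], [1, 3.0], size=(600, 2))   # synthetic gaze
# labels derived from lateral gaze position plus noise, for illustration
labels = np.digitize(gaze_xy[:, 0] + 0.1 * rng.standard_normal(600),
                     [-0.33, 0.33])
decoder = SVC().fit(gaze_xy[:400], labels[:400])
print(f"held-out accuracy: {decoder.score(gaze_xy[400:], labels[400:]):.2f}")
```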

Ponferrada EG, Sylaidi A, Aldo Faisal A, 2018, Data-efficient motor imagery decoding in real-time for the cybathlon brain-computer interface race, Pages: 21-32

Neuromotor diseases such as Amyotrophic Lateral Sclerosis or Multiple Sclerosis affect millions of people throughout the globe by obstructing body movement and thereby any instrumental interaction with the world. Brain-Computer Interfaces (BCIs) hold the promise of re-routing signals around the damaged parts of the nervous system to restore control. However, the field still faces open challenges in training and practical implementation for real-time usage, which hampers its impact on patients. The Cybathlon Brain-Computer Interface Race promotes the development of practical BCIs to facilitate clinical adoption. In this work we present a competitive and data-efficient BCI system to control the Cybathlon video game using motor imageries. The platform achieves substantial performance while requiring a relatively small amount of training data, thereby accelerating the training phase. We employ a static band-pass filter and Common Spatial Patterns learnt using supervised machine learning techniques to enable the discrimination between different motor imageries. Log-variance features are extracted from the spatio-temporally filtered EEG signals to fit a Logistic Regression classifier, obtaining satisfying levels of decoding accuracy. The system's performance is evaluated online, on the first version of the Cybathlon Brain Runners game, controlling 3 commands with up to 60.03% accuracy using a two-step hierarchical classifier.

Conference paper
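The decoding pipeline described above is standard enough to sketch end to end: band-pass filtered EEG trials, Common Spatial Patterns via a generalized eigendecomposition of the two class covariances, log-variance features, and a logistic regression classifier. Data shapes and the 8-30 Hz band are typical motor-imagery assumptions, not values from the paper.

```python
# CSP + log-variance + logistic regression on toy two-class EEG trials.
import numpy as np
from scipy.linalg import eigh
from scipy.signal import butter, filtfilt
from sklearn.linear_model import LogisticRegression

def csp_filters(trials_a, trials_b, n_pairs=3):
    """trials_*: (n_trials, n_channels, n_samples) -> (2*n_pairs, n_ch)."""
    cov = lambda X: np.mean([t @ t.T / np.trace(t @ t.T) for t in X], axis=0)
    Ca, Cb = cov(trials_a), cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)        # generalized eigenproblem
    order = np.argsort(vals)              # extreme eigenvalues discriminate
    pick = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, pick].T

def log_var_features(trials, W):
    Z = np.einsum('fc,tcs->tfs', W, trials)   # spatially filter each trial
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))

rng = np.random.default_rng(0)
left, right = rng.standard_normal((2, 40, 16, 500))   # toy two-class EEG
b, a = butter(4, [8, 30], btype='band', fs=250)       # static band-pass
left, right = filtfilt(b, a, left), filtfilt(b, a, right)
W = csp_filters(left, right)
X = np.vstack([log_var_features(left, W), log_var_features(right, W)])
y = np.r_[np.zeros(40), np.ones(40)]
clf = LogisticRegression().fit(X, y)
print(clf.score(X, y))   # training accuracy on the toy data
```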

Cunningham J, Hapsari A, Guilleminot P, Shafti A, Faisal AA et al., 2018, The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks, Publisher: IEEE

Working paper

Maymo MR, Shafti A, Faisal AA, 2018, FastOrient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping, Publisher: IEEE

Working paper

Lin C-H, Faisal AA, 2017, The role of sensorimotor variability and computation in elderly’s falls

The relationship between sensorimotor variability and falls in the elderly has not been well investigated. We designed and used a motor task sharing the biomechanics of walking and obstacle negotiation to quantify sensorimotor variability related to locomotion across age. We also applied sensory psychophysics to pinpoint specific sensory systems associated with sensorimotor variability. We found that sensorimotor variability in foot placement increases continuously with age. We further showed that increased sensory variability, specifically increased proprioceptive variability, is the vital cause of more variable foot placement in the elderly. Notably, elderly participants relied more on vision to judge their own foot's height compared to the young, suggesting a shift in multisensory integration strategy to compensate for degenerated proprioception. We further modelled the probability of tripping over based on the relationship between sensorimotor variability and age and found a good correspondence between model prediction and community-based data. We revealed increased sensorimotor variability, modulated by sensation precision, as a potentially vital mechanism of raised tripping-over and thus fall events in the elderly. Therefore, our tasks, which quantify sensorimotor variability, can be used for trip-over probability assessment and, with adjustments, potentially applied as a training program to mitigate trip-over risk.

Journal article

Faisal A, 2017, Computer science: Visionary of virtual reality, Nature, Vol: 551, Pages: 298-299, ISSN: 0028-0836

Journal article

Xiloyannis M, Gavriel C, Thomik AA, Faisal AA et al., 2017, Gaussian process autoregression for simultaneous proportional multi-modal prosthetic control with natural hand kinematics, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 25, Pages: 1785-1801, ISSN: 1534-4320

Matching the dexterity, versatility, and robustness of the human hand is still an unachieved goal in bionics, robotics, and neural engineering. A major limitation for hand prosthetics lies in the challenges of reliably decoding user intention from muscle signals when controlling complex robotic hands. Most of the commercially available prosthetic hands use muscle-related signals to decode a finite number of predefined motions and some offer proportional control of open/close movements of the whole hand. Here, in contrast, we aim to offer users flexible control of individual joints of their artificial hand. We propose a novel framework for decoding neural information that enables a user to independently control 11 joints of the hand in a continuous manner-much like we control our natural hands. Toward this end, we instructed six able-bodied subjects to perform everyday object manipulation tasks combining both dynamic, free movements (e.g., grasping) and isometric force tasks (e.g., squeezing). We recorded the electromyographic and mechanomyographic activities of five extrinsic muscles of the hand in the forearm, while simultaneously monitoring 11 joints of hand and fingers using a sensorized data glove that tracked the joints of the hand. Instead of learning just a direct mapping from current muscle activity to intended hand movement, we formulated a novel autoregressive approach that combines the context of previous hand movements with instantaneous muscle activity to predict future hand movements. Specifically, we evaluated a linear vector autoregressive moving average model with exogenous inputs and a novel Gaussian process (GP) autoregressive framework to learn the continuous mapping from hand joint dynamics and muscle activity to decode intended hand movement. Our GP approach achieves high levels of performance (RMSE of 8°/s and ρ = 0.79). Crucially, we use a small set of sensors that allows us to control a larger set of independently actuated degrees of…

Journal article
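A minimal sketch of the autoregressive decoding idea above: predict the next hand-joint state from the previous joint state plus current muscle activity (the exogenous input), here with scikit-learn's GP regressor. Data, lag order (one step) and kernel are illustrative assumptions.

```python
# One-step autoregressive GP decoder: [joints(t), emg(t)] -> joints(t+1).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(1)
T, n_joints, n_muscles = 300, 11, 5
joints = np.cumsum(rng.standard_normal((T, n_joints)), axis=0)  # toy kinematics
emg = rng.standard_normal((T, n_muscles))                       # toy EMG/MMG

X = np.hstack([joints[:-1], emg[:-1]])   # autoregressive design matrix
Y = joints[1:]                           # next joint state to predict

gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
gp.fit(X[:200], Y[:200])
pred = gp.predict(X[200:])
print(pred.shape)   # (99, 11): one-step-ahead predictions for all 11 joints
```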

Pitchforth J, Iodice M, Main M, Dziemian S, Faisal A, Bergsma A, Muntoni E et al., 2017, A clinical update on the eNHANCE project: Eye tracking control for reaching and grasping in an adolescent Duchenne muscular dystrophy (DMD) population, 22nd International Annual Congress of the World-Muscle-Society (WMS), Publisher: PERGAMON-ELSEVIER SCIENCE LTD, Pages: S236-S236, ISSN: 0960-8966

Conference paper

Noronha B, Dziemian S, Zito GA, Konnaris C, Faisal AA et al., 2017, "Wink to grasp" – comparing eye, voice & EMG gesture control of grasp with soft-robotic gloves, IEEE Conference on Rehabilitation Robotics (ICORR 2017), Publisher: IEEE, Pages: 1043-1048

The ability of robotic rehabilitation devices to support paralysed end-users is ultimately limited by the degree to which human-machine interaction is designed to be effective and efficient in translating user intention into robotic action. Specifically, we evaluate the novel possibility of binocular eye-tracking technology to distinguish voluntary winks from involuntary blinks, establishing winks as a novel low-latency control signal to trigger robotic action. By wearing binocular eye-tracking glasses, users can directly observe their environment or the actuator and trigger movement actions, without having to interact with a visual display unit or user interface. We compare our novel approach to two conventional approaches for controlling robotic devices based on electromyography (EMG) and speech-based human-computer interaction technology. We present an integrated software framework based on ROS that allows transparent integration of these multiple modalities with a robotic system. We use a soft-robotic SEM glove (Bioservo Technologies AB, Sweden) to evaluate how the 3 modalities support the performance and subjective experience of the end-user when movement assisted. All 3 modalities are evaluated in streaming, closed-loop control operation for grasping physical objects. We find that wink control shows the lowest mean error rate with the lowest standard deviation (0.23 ± 0.07, mean ± SEM), followed by speech control (0.35 ± 0.13) and EMG gesture control (using the Myo armband by Thalmic Labs), with the highest mean and standard deviation (0.46 ± 0.16). We conclude that our novel eye-tracking-based approach to controlling assistive technologies is a well-suited alternative to conventional approaches, especially when combined with 3D eye-tracking-based robotic end-point control.

Conference paper
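The wink-versus-blink distinction exploited above can be illustrated with a toy rule: a blink closes both eyes briefly, whereas a voluntary wink closes one eye for longer. The thresholds and per-eye closure durations below are assumed, not taken from the paper's tracker.

```python
# Classify an eye-closure event from per-eye closure durations (ms):
# long one-sided closure -> wink (control signal); short bilateral -> blink.
def classify(left_closed_ms, right_closed_ms,
             wink_min_ms=300, both_max_ms=150):
    if left_closed_ms >= wink_min_ms and right_closed_ms <= both_max_ms:
        return "left wink"
    if right_closed_ms >= wink_min_ms and left_closed_ms <= both_max_ms:
        return "right wink"
    if (0 < left_closed_ms <= both_max_ms
            and 0 < right_closed_ms <= both_max_ms):
        return "blink (ignore)"
    return "no event"

print(classify(420, 0))     # left wink  -> trigger robot action
print(classify(110, 120))   # blink      -> ignored
```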

Maimon-Mor RO, Fernandez-Quesada J, Zito GA, Konnaris C, Dziemian S, Faisal AA et al., 2017, Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking, 15th IEEE Conference on Rehabilitation Robotics (ICORR 2017), Publisher: IEEE, Pages: 1049-1054

Eye movements are the only directly observable behavioural signals that are highly correlated with actions at the task level and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (and in amputees), including stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy. Despite this benefit, eye tracking is not widely used as a control interface for robotic systems in movement-impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking, using our GT3D binocular eye tracker, with a custom-designed 3D head-tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. Users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye-tracking-based system enables the end-user to retain free head movement and yet achieves high spatial end-point accuracy, on the order of 6 cm RMSE in each dimension with a standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3-dimensional space-filling Peano curve while the user tracks it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points, versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.

Conference paper
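The calibration idea above: the robot traces a dense 3D path while the user visually tracks it, yielding thousands of (gaze feature, 3D target) pairs to which a regression map is fitted. In the sketch below a toy linear model and a random path stand in for the real gaze-to-3D mapping and the Peano curve.

```python
# Fit a calibration map from gaze features to 3D end-point targets using a
# dense set of tracked calibration points.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(2)
targets = rng.uniform(-0.3, 0.3, size=(2000, 3))   # robot end-point path (m)
B = rng.standard_normal((3, 4))                    # toy gaze-generating map
gaze = targets @ B + 0.01 * rng.standard_normal((2000, 4))  # gaze features

model = Ridge().fit(gaze, targets)                 # calibration regression
err = np.sqrt(np.mean((model.predict(gaze) - targets) ** 2, axis=0))
print(err)   # per-axis RMSE of the fitted calibration map
```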

Delgado PG, Reynolds R, Faisal A, 2017, Axo-glial pathology in multiple sclerosis and its effects on neurotransmission, ISN-ESN Meeting, Publisher: WILEY, Pages: 96-96, ISSN: 0022-3042

Conference paper

Iyer R, Ungless M, Faisal AA, 2017, Calcium-activated SK channels control firing regularity by modulating sodium channel availability in midbrain dopamine neurons, Scientific Reports, Vol: 2017, ISSN: 2045-2322

Dopamine neurons in the substantia nigra pars compacta and ventral tegmental area regulate behaviours such as reward-related learning and motor control. Dysfunction of these neurons is implicated in schizophrenia, drug addiction, and Parkinson's disease. While some dopamine neurons fire single spikes at regular intervals, others fire irregular single spikes interspersed with bursts. Pharmacological inhibition of calcium-activated potassium (SK) channels increases the variability in their firing pattern, sometimes also increasing the number of spikes fired in bursts, indicating that SK channels play an important role in maintaining dopamine neuron firing regularity and burst firing. However, the exact mechanisms underlying these effects are still unclear. Here, we develop a biophysical model of a dopamine neuron incorporating ion channel stochasticity that enabled the analysis of the availability of ion channels in multiple states during spiking. We find that decreased firing regularity is primarily due to a significant decrease in the afterhyperpolarisation (AHP) that in turn results in a reduction in the fraction of available voltage-gated sodium channels due to insufficient recovery from inactivation. Our model further predicts that inhibition of SK channels results in a depolarisation of the action potential threshold along with an increase in its variability.

Journal article
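A toy version of the channel-stochasticity mechanism above: track how many of N sodium channels have recovered from inactivation between spikes, with binomial random transitions between the available and inactivated pools. Rates and channel count are illustrative, not the model's parameters.

```python
# Binomial channel-state simulation: at each time step some inactivated
# channels recover and some available channels inactivate, at random.
import numpy as np

rng = np.random.default_rng(3)
N, dt = 1000, 0.1                        # channel count, time step (ms)
k_recover, k_inactivate = 0.05, 0.02     # per-ms transition rates (assumed)
available = N // 2
trace = []
for _ in range(500):
    rec = rng.binomial(N - available, k_recover * dt)
    inact = rng.binomial(available, k_inactivate * dt)
    available += rec - inact
    trace.append(available)
print(f"mean available: {np.mean(trace):.0f} of {N}, "
      f"fluctuation SD: {np.std(trace):.1f}")
```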

Pedotti A, Luis A, Faisal A, 2017, Proceedings of the 5th International Congress on Neurotechnology, Electronics and Informatics 2017, Publisher: Scitepress, ISBN: 978-989-758-270-7

Book

Kotti M, Duffell LD, Faisal AA, McGregor AH et al., 2017, Detecting knee osteoarthritis and its discriminating parameters using random forests, Medical Engineering and Physics, Vol: 43, Pages: 19-29, ISSN: 1350-4533

This paper tackles the problem of automatic detection of knee osteoarthritis. A computer system is built that takes body kinetics as input and produces as output not only an estimate of the presence of knee osteoarthritis, as previously done in the literature, but also the most discriminating parameters along with a set of rules on how this decision was reached. This fills the gap of interpretability between the medical and the engineering approaches. We collected locomotion data from 47 subjects with knee osteoarthritis and 47 healthy subjects. Osteoarthritis subjects were recruited from hospital clinics and GP surgeries, and age- and sex-matched healthy subjects from the local community. Subjects walked on a walkway equipped with two force plates with piezoelectric 3-component force sensors. Parameters of the vertical, anterior-posterior, and medio-lateral ground reaction forces, such as mean value, push-off time, and slope, were extracted. Then random forest regressors map those parameters via rule induction to the degree of knee osteoarthritis. To boost generalisation ability, a subject-independent protocol is employed. The 5-fold cross-validated accuracy is 72.61% ± 4.24%. We show that with 3 steps or fewer a reliable clinical measure can be extracted in a rule-based approach when the dataset is analysed appropriately.

Journal article
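The evaluation protocol above, subject-independent 5-fold cross-validation of a random forest on ground-reaction-force parameters, can be sketched with scikit-learn's GroupKFold so that no subject appears in both training and test folds. Features and labels below are synthetic stand-ins.

```python
# Subject-independent cross-validation of a random forest on per-step
# ground-reaction-force parameters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GroupKFold, cross_val_score

rng = np.random.default_rng(4)
n_subjects, steps_per_subject, n_params = 94, 10, 12
X = rng.standard_normal((n_subjects * steps_per_subject, n_params))
subjects = np.repeat(np.arange(n_subjects), steps_per_subject)
y = np.repeat(rng.integers(0, 2, n_subjects), steps_per_subject)  # OA/healthy

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=GroupKFold(n_splits=5),
                         groups=subjects)   # folds split by subject
print(f"subject-independent accuracy: {scores.mean():.2f} ± {scores.std():.2f}")
```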

Makin TR, De Vignemont F, Faisal AA, 2017, Neurocognitive barriers to the embodiment of technology, Nature Biomedical Engineering, Vol: 1, ISSN: 2157-846X

Journal article

Pedotti A, Azevedo L, Faisal A, 2017, Foreword, Pages: VII-VIII, ISBN: 9789897582707

Book chapter

Faisal AA, Neishabouri A, 2016, Fundamental Constraints on the Evolution of Neurons, The Wiley-Blackwell Handbook of Evolutionary Neuroscience, Pages: 153-172, ISBN: 9781119994695

This chapter focuses on two fundamental constraints that apply to any form of information processing system, be it a cell, a brain or a computer: noise (random variability) and energy (metabolic demand). It shows how these two constraints are fundamentally limited by the basic biophysical properties of the brain's building blocks (protein, fats, and salty water) and link nervous system structure to function. The understanding of the interdependence of information and energy has profoundly influenced the development of efficient telecommunication systems and computers. Noise diminishes the capacity to receive, process, and direct information, the key tasks of the brain. Investing in the brain's design can reduce the effects of noise, but this investment often increases energetic requirements, which is likely to be evolutionarily unfavourable. The stochasticity of the system becomes critical when its inherent randomness makes it operationally infeasible, that is, when random action potentials (APs) become as common as evoked APs.

Book chapter

Makin T, de Vignemont F, Faisal AA, 2016, Neurocognitive considerations to the embodiment of technology, Nature Biomedical Engineering, ISSN: 2157-846X

By exploiting robotics and information technology, teams of biomedical engineers are enhancing human sensory and motor abilities. Such augmentation technology ― to be worn, implanted or ingested ― aims to both restore and improve existing human capabilities (such as faster running, via exoskeletons), and to add new ones (for example, a ‘radar sense’). The development of augmentation technology is driven by rapid advances in human–machine interfaces, energy storage and mobile computing. Although engineers are embracing body augmentation from a technical perspective, little attention has been devoted to how the human brain might support such technological innovation. In this Comment, we highlight expected neurocognitive bottlenecks imposed by brain plasticity, adaptation and learning that could impact the design and performance of sensory and motor augmentation technology. We call for further consideration of how human–machine integration can be best achieved.

Journal article

Corrales-Carvajal VM, Faisal AA, Ribeiro C, 2016, Internal states drive nutrient homeostasis by modulating exploration-exploitation trade-off, eLife, Vol: 5, ISSN: 2050-084X

Journal article

Konnaris C, Gavriel C, Thomik AAC, Aldo Faisal A et al., 2016, EthoHand: A dexterous robotic hand with ball-joint thumb enables complex in-hand object manipulation, 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Publisher: IEEE, Pages: 1154-1159, ISSN: 2155-1774

Our dexterous hand is a fundamental human feature that distinguishes us from other animals by enabling us to go beyond grasping to sophisticated in-hand object manipulation. Our aim was the design of a dexterous anthropomorphic robotic hand that matches the human hand's 24 degrees of freedom, is under-actuated by seven motors, and can replicate human hand movements in a naturalistic manner, including in-hand object manipulation. We therefore focused on the development of a novel thumb and palm articulation that would facilitate in-hand object manipulation while avoiding mechanical design complexity. Our key innovation is the use of a tendon-driven ball joint as the basis for an articulated thumb. This design innovation enables our under-actuated hand to perform complex in-hand object manipulation, such as passing a ball between the fingers or even writing text messages on a smartphone with the thumb's end-point while holding the phone in the palm of the same hand. We then compare the dexterity of our novel robotic hand design to other designs in prosthetics, robotics and humans, using simulated and physical kinematic data to demonstrate that the enhanced dexterity of our novel articulation exceeds previous designs by a factor of two. Our innovative approach achieves naturalistic movement of the human hand without requiring translation in the hand joints, and enables teleoperation of complex tasks, such as single-(robot-)handed messaging on a smartphone without the need for haptic feedback. Our simple, under-actuated design outperforms current state-of-the-art robotic and prosthetic hands in abilities that span from grasps to activities of daily living involving complex in-hand object manipulation.

Conference paper

