Imperial College London

Dr A. Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Reader in Neurotechnology
 
 
 

Contact

 

+44 (0)20 7594 6373 · a.faisal · Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

Room 4.08, Royal School of Mines, South Kensington Campus



 

Publications


164 results found

Haar S, Sundar G, Faisal AA, 2020, Embodied virtual reality for the study of real-world motor learning

Abstract. Background: The motor learning literature focuses on relatively simple laboratory tasks due to their highly controlled manner and the ease of applying different manipulations to induce learning and adaptation. In recent work we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining the sense of embodiment. Methods: The setup was validated by comparing real-world ball trajectories with the embodied VR trajectories calculated by the physics engine. We then ran our real-world learning protocol in the embodied VR. 10 healthy human subjects played repeated trials of the same billiard shot, holding the physical cue and hitting a physical ball on the table while seeing it all in VR. Results: We found comparable learning trends in the embodied VR to those we previously reported in the real-world task. Conclusions: Embodied VR can be used for learning real-world tasks in a highly controlled VR environment, which enables applying visual manipulations, common in laboratory tasks and in rehabilitation, to a real-world full-body task. Such a setup can be used for rehabilitation, where the use of VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment. The embodied VR enables to manipulate feedback and apply […]

Journal article

Haar S, Faisal AA, 2020, Neural biomarkers of multiple motor-learning mechanisms in a real-world task

Abstract. Many recent studies found signatures of motor learning in neural beta oscillations (13–30 Hz), and specifically in the post-movement beta rebound (PMBR). All these studies were in simplified laboratory tasks in which learning was either error-based or reward-based. Interestingly, these studies reported opposing dynamics of the PMBR magnitude over learning for the error-based and reward-based tasks (increase versus decrease, respectively). Here we explored the PMBR dynamics during real-world motor-skill learning in a billiards task using mobile brain imaging. Our EEG recordings highlight opposing dynamics of PMBR magnitudes between different subjects performing the same task. The groups of subjects, defined by their neural dynamics, also showed the behavioural differences expected for error-based versus reward-based learning. Our results suggest that when faced with the complexity of the real world, different subjects might use different learning mechanisms for the same complex task. We speculate that all subjects combine multi-modal mechanisms of learning, but different subjects have different predominant learning mechanisms.

Journal article

Abbott W, Harston J, Faisal A, Linear Embodied Saliency: a Model of Full-Body Kinematics-based Visual Attention, bioRxiv


Journal article

Deisenroth MP, Faisal AA, Ong CS, 2020, Mathematics for Machine Learning, Publisher: Cambridge University Press, ISBN: 9781108455145

Book

Beyret B, Shafti SA, Faisal A, 2020, Dot-to-dot: explainable hierarchical reinforcement learning for robotic manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems. IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1-6, ISSN: 2153-0866

Robotic systems are ever more capable of automation and fulfilment of complex tasks, particularly with reliance on recent advances in intelligent systems, deep learning and artificial intelligence in general. However, as robots and humans come closer together in their interactions, the matter of interpretability, or explainability of robot decision-making processes for the human, grows in importance. A successful interaction and collaboration would only be possible through mutual understanding of underlying representations of the environment and the task at hand. This is currently a challenge in deep learning systems. We present a hierarchical deep reinforcement learning system, consisting of a low-level agent handling the large action/state space of a robotic system efficiently, by following the directives of a high-level agent which is learning the high-level dynamics of the environment and task. This high-level agent forms a representation of the world and task at hand that is interpretable for a human operator. The method, which we call Dot-to-Dot, is tested on a MuJoCo-based model of the Fetch Robotics Manipulator, as well as a Shadow Hand, to test its performance. Results show efficient learning of complex action/state spaces by the low-level agent, and an interpretable representation of the task and decision-making process learned by the high-level agent.

Conference paper
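The hierarchical scheme this abstract describes — a high-level agent proposing interpretable subgoals that a low-level agent executes — can be illustrated with a minimal sketch. This is not the authors' Dot-to-Dot implementation: the one-dimensional state, the proportional low-level controller and the names `HighLevelAgent`, `LowLevelAgent` and `rollout` are all invented for illustration.

```python
class HighLevelAgent:
    """Proposes interpretable subgoals: bounded steps from the current state
    towards the final goal (here everything lives in a 1-D state space)."""
    def propose_subgoal(self, state, goal):
        step = max(-1.0, min(1.0, goal - state))  # clip the subgoal distance
        return state + step

class LowLevelAgent:
    """Executes fine-grained actions that move the state towards the subgoal."""
    def act(self, state, subgoal):
        return 0.25 * (subgoal - state)  # simple proportional controller

def rollout(start, goal, steps=50):
    """Alternate high-level subgoal proposals with low-level actions."""
    high, low = HighLevelAgent(), LowLevelAgent()
    state, subgoals = start, []
    for _ in range(steps):
        subgoal = high.propose_subgoal(state, goal)
        subgoals.append(subgoal)          # the human-readable plan
        state += low.act(state, subgoal)
    return state, subgoals

final_state, plan = rollout(0.0, 5.0)
```

The list of subgoals is the interpretable artefact in this kind of architecture: an operator can inspect the plan without reading the low-level action stream.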

Lima IR, Haar S, Di Grassi L, Faisal AA et al., 2019, Neurobehavioural signatures in race car driving

ABSTRACT. Recent technological developments in mobile brain and body imaging are enabling new frontiers of real-world neuroscience. Simultaneous recordings of body movement and brain activity from highly skillful individuals as they demonstrate their exceptional skills in real-world settings can shed new light on the neurobehavioural structure of human expertise. Driving is a real-world skill which many of us acquire at different levels of expertise. Here we ran a case study on a subject with the highest level of driving expertise: a Formula E Champion. We studied the expert driver’s neural and motor patterns while he drove a sports car on the “Top Gear” race track under extreme conditions (high speed, low visibility, low temperature, wet track). His brain activity, eye movements and hand/foot movements were recorded. Brain activity in the delta, alpha, and beta frequency bands showed causal relation to hand movements. We demonstrate, in summary, a method for collecting human ethomic (Ethology + Omics) data that encompasses information on the sensory inputs and motor outputs of the brain, as well as brain state, to characterise complex human skills, even in extreme situations (race-track driving).

Journal article

Faisal A, Hermano K, Antonio P, 2019, Proceedings of the 3rd International Congress on Neurotechnology, Electronics and Informatics, Setúbal, Publisher: Scitepress, ISBN: 978-989-758-161-8

Book

Hermano K, Pedotti A, Faisal A, 2019, Proceedings of the 4th International Congress on Neurotechnology, Electronics and Informatics 2016, ISBN: 978-989-758-204-2

Book

Khwaja M, Vaid SS, Zannone S, Harari GM, Faisal A, Matic A et al., 2019, Modeling personality vs. modeling personalidad: In-the-wild mobile data analysis in five countries suggests cultural impact on personality models, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, Vol: 3, Pages: 1-24, ISSN: 2474-9567

Sensor data collected from smartphones provides the possibility to passively infer a user’s personality traits. Such models can be used to enable technology personalization, while contributing to our substantive understanding of how human behavior manifests in daily life. A significant challenge in personality modeling involves improving the accuracy of personality inferences; however, research has yet to assess and consider the cultural impact of users’ country of residence on model replicability. We collected mobile sensing data and self-reported Big Five traits from 166 participants (54 women and 112 men) recruited in five different countries (UK, Spain, Colombia, Peru, and Chile) for 3 weeks. We developed machine-learning-based personality models using culturally diverse datasets representing different countries, and we show that such models can achieve state-of-the-art accuracy when tested in new countries, ranging from 63% (Agreeableness) to 71% (Extraversion) classification accuracy. Our results indicate that using country-specific datasets can improve the classification accuracy by between 3% and 7% for Extraversion, Agreeableness, and Conscientiousness. We show that these findings hold regardless of gender and age balance in the dataset. Interestingly, using gender- or age-balanced datasets as well as gender-separated datasets improves trait prediction by up to 17%. We unpack differences in personality models across the five countries, highlight the most predictive data categories (location, noise, unlocks, accelerometer), and provide takeaways to technologists and social scientists interested in passive personality assessment.

Journal article

Khwaja M, Ferrer M, Jesus I, Faisal A, Matic A et al., Aligning daily activities with personality: towards a recommender system for improving wellbeing, ACM Conference on Recommender Systems (RecSys), Publisher: ACM

Recommender systems have not been explored to a great extent for improving health and subjective wellbeing. Recent advances in mobile technologies and user modelling present the opportunity for delivering such systems; however, the key issue is understanding the drivers of subjective wellbeing at an individual level. In this paper we propose a novel approach for deriving personalized activity recommendations to improve subjective wellbeing by maximizing the congruence between activities and personality traits. To evaluate the model, we leveraged a rich dataset collected in a smartphone study, which contains three weeks of daily activity probes, the Big-Five personality questionnaire and subjective wellbeing surveys. We show that the model correctly infers a range of activities that are ‘good’ or ‘bad’ (i.e. that are positively or negatively related to subjective wellbeing) for a given user and that the derived recommendations greatly match outcomes in the real world.

Conference paper

Subramanian M, Songur N, Adjei D, Orlov P, Faisal A et al., A.Eye Drive: gaze-based semi-autonomous wheelchair interface, 41st Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC 2019), Publisher: IEEE

Existing wheelchair control interfaces, such as sip & puff or screen-based gaze-controlled cursors, are challenging for the severely disabled to navigate safely and independently, as users continuously need to interact with an interface during navigation. This puts a significant cognitive load on users and prevents them from interacting with the environment in other forms during navigation. We have combined eye-tracking/gaze-contingent intention decoding with computer-vision context-awareness algorithms and autonomous navigation drawn from self-driving vehicles to allow paralysed users to drive by eye, simply by decoding natural gaze about where the user wants to go: A.Eye Drive. Our “Zero UI” driving platform allows users to look at and interact visually with an object or destination of interest in their visual scene, and the wheelchair autonomously takes the user to the intended destination, while continuously updating the computed path for static and dynamic obstacles. This intention decoding technology empowers the end-user by promising more independence through their own agency.

Conference paper

Shafti SA, Orlov P, Faisal A, Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping, International Conference on Robotics and Automation 2019, Publisher: IEEE, ISSN: 2152-4092

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this to simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, to create intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user’s gaze to decode their intentions and implement lower-level motion actions and achieve higher-level tasks. This results in the user simply having to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and an action-grammars-based implementation of sequences of action through the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions and full implementation of pick-and-place tasks in 96%, and pick-and-pour tasks in 76% of cases. Finally we present a discussion on our results and what future work is needed to improve the system.

Conference paper

Haar S, van Assel CM, Faisal AA, 2019, Kinematic signatures of learning that emerge in a real-world motor skill task

Abstract. The neurobehavioral mechanisms of human motor control and learning evolved in free-behaving, real-life settings, yet to date they are studied in simplified lab-based settings. We demonstrate the feasibility of real-world neuroscience, using wearables for naturalistic full-body motion tracking and mobile brain imaging, to study motor learning in billiards. We highlight the similarities between motor learning in-the-wild and classic toy tasks in well-known features, such as multiple learning rates and the relationship between task-related variability and motor learning. Studying in-the-wild learning enables looking at global observables of motor learning, as well as relating learning to mechanisms deduced from reductionist models. The analysis of the velocity profiles of all joints enabled an in-depth understanding of the structure of learning across the body. First, while most of the movement was done by the right arm, the entire body learned the task, as evident from the decrease in both inter- and intra-trial variabilities of various joints across the body over learning. Second, while over learning all subjects decreased their movement variability and the variability in the outcome (ball direction), subjects who were initially more variable were also more variable after learning, supporting the notion that movement variability is an individual trait. Lastly, when exploring the link between variability and learning over joints, we found that only the variability in the right elbow supination shows significant correlation to learning. This demonstrates the relation between learning and variability: while learning leads to an overall reduction in movement variability, only initial variability in specific task-relevant dimensions can facilitate faster learning. Author Summary: This study addresses a foundational problem in neuroscience: studyin[…]

Journal article

Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AA et al., 2019, Understanding the artificial intelligence clinician and optimal treatment strategies for sepsis in intensive care

In this document, we explore in more detail our published work (Komorowski, Celi, Badawi, Gordon, & Faisal, 2018) for the benefit of the AI in Healthcare research community. In the above paper, we developed the AI Clinician system, which demonstrated how reinforcement learning could be used to make useful recommendations towards optimal treatment decisions from intensive care data. Since publication a number of authors have reviewed our work (e.g. Abbasi, 2018; Bos, Azoulay, & Martin-Loeches, 2019; Saria, 2018). Given the difference of our framework from previous work, the fact that we are bridging two very different academic communities (intensive care and machine learning), and that our work has impact on a number of other areas with more traditional computer-based approaches (biosignal processing and control, biomedical engineering), we are providing here additional details on our recent publication.

Working paper

Gottesman O, Johansson F, Komorowski M, Faisal A, Sontag D, Doshi-Velez F, Celi LA et al., 2019, Guidelines for reinforcement learning in healthcare, Nature Medicine, Vol: 25, Pages: 16-18, ISSN: 1078-8956

In this Comment, we provide guidelines for reinforcement learning for decisions about patient treatment that we hope will accelerate the rate at which observational cohorts can inform healthcare practice in a safe, risk-conscious manner.

Journal article

Peng X, Ding Y, Wihl D, Gottesman O, Komorowski M, Lehman L-WH, Ross A, Faisal A, Doshi-Velez F et al., 2018, Improving Sepsis Treatment Strategies by Combining Deep and Kernel-Based Reinforcement Learning, AMIA 2018 Annual Symposium, Pages: 887-896

Sepsis is the leading cause of mortality in the ICU. It is challenging to manage because individual patients respond differently to treatment. Thus, tailoring treatment to the individual patient is essential for the best outcomes. In this paper, we take steps toward this goal by applying a mixture-of-experts framework to personalize sepsis treatment. The mixture model selectively alternates between neighbor-based (kernel) and deep reinforcement learning (DRL) experts depending on patient's current history. On a large retrospective cohort, this mixture-based approach outperforms physician, kernel only, and DRL-only experts.

Conference paper
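The gating idea in this abstract — defer to the neighbour-based (kernel) expert when a sufficiently similar past case exists, and fall back on the parametric policy otherwise — can be sketched in a few lines. The one-dimensional state, the distance threshold and all names here are hypothetical illustrations; the paper's experts operate over real patient histories and a deep RL policy.

```python
class KernelExpert:
    """Recommends the action taken in the most similar logged state."""
    def __init__(self, history):
        self.history = history  # list of (state, action) pairs

    def nearest(self, state):
        return min(self.history, key=lambda sa: abs(sa[0] - state))

class ModelExpert:
    """Stand-in for a learned parametric (e.g. deep RL) policy."""
    def act(self, state):
        return 0  # e.g. a default, conservative action

def mixture_policy(state, kernel, model, radius=1.0):
    """Gate between experts based on distance to the nearest neighbour."""
    neighbour_state, neighbour_action = kernel.nearest(state)
    if abs(neighbour_state - state) <= radius:
        return neighbour_action   # a close precedent exists: trust it
    return model.act(state)       # no close precedent: use the model

kernel = KernelExpert([(0.0, 1), (10.0, 2)])
model = ModelExpert()
```

The design choice the sketch captures is that the gate is data-driven: the kernel expert is only trusted inside regions of the state space that the logged data actually covers.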

Liu Y, Gottesman O, Raghu A, Komorowski M, Faisal AA, Doshi-Velez F, Brunskill E et al., 2018, Representation Balancing MDPs for Off-Policy Policy Evaluation, Thirty-second Annual Conference on Neural Information Processing Systems (NIPS)

We study the problem of off-policy policy evaluation (OPPE) in RL. In contrast to prior work, we consider how to estimate both the individual policy value and average policy value accurately. We draw inspiration from recent work in causal reasoning, and propose a new finite sample generalization error bound for value estimates from MDP models. Using this upper bound as an objective, we develop a learning algorithm of an MDP model with a balanced representation, and show that our approach can yield substantially lower MSE in a common synthetic domain and on a challenging real-world sepsis management problem.

Conference paper
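As background, the simplest model-based estimator this line of work builds on (the "direct method") fits a model to logged transitions and rolls the target policy through it. The sketch below uses a count-based tabular model on made-up data; it does not implement the paper's balanced-representation learning, and every name and number is illustrative.

```python
from collections import defaultdict

# Logged transitions (s, a, r, s_next, done) collected by a behaviour policy.
logged = [
    (0, 0, 0.0, 1, False), (1, 0, 1.0, 1, True),
    (0, 1, 0.5, 1, False), (1, 1, 0.0, 1, True),
    (0, 0, 0.0, 1, False), (1, 0, 1.0, 1, True),
]

def fit_tabular_model(transitions):
    """Maximum-likelihood reward per (s, a); assumes deterministic dynamics,
    so the next state is read off the first observation."""
    stats = defaultdict(list)
    for s, a, r, s2, done in transitions:
        stats[(s, a)].append((r, s2, done))
    return {sa: (sum(o[0] for o in obs) / len(obs), obs[0][1], obs[0][2])
            for sa, obs in stats.items()}

def direct_method_value(policy, model, s0=0, gamma=1.0, horizon=10):
    """Evaluate a target policy by rolling it through the learned model."""
    s, value, discount = s0, 0.0, 1.0
    for _ in range(horizon):
        r, s_next, done = model[(s, policy[s])]
        value += discount * r
        if done:
            break
        discount *= gamma
        s = s_next
    return value

model = fit_tabular_model(logged)
estimate = direct_method_value({0: 0, 1: 0}, model)  # target: always action 0
```

The estimator's error comes entirely from the fitted model, which is exactly why the paper's generalization bound on model-based value estimates matters.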

Parbhoo S, Gottesman O, Ross AS, Komorowski M, Faisal A, Bon I, Roth V, Doshi-Velez F et al., 2018, Improving counterfactual reasoning with kernelised dynamic mixing models, PLoS ONE, Vol: 13, ISSN: 1932-6203

Simulation-based approaches to disease progression allow us to make counterfactual predictions about the effects of an untried series of treatment choices. However, building accurate simulators of disease progression is challenging, limiting the utility of these approaches for real world treatment planning. In this work, we present a novel simulation-based reinforcement learning approach that mixes between models and kernel-based approaches to make its forward predictions. On two real world tasks, managing sepsis and treating HIV, we demonstrate that our approach both learns state-of-the-art treatment policies and can make accurate forward predictions about the effects of treatments on unseen patients.

Journal article

Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal A et al., 2018, The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care, Nature Medicine, Vol: 24, Pages: 1716-1720, ISSN: 1078-8956

Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals, but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients. To tackle this sequential decision-making problem, we developed a reinforcement learning agent, the artificial intelligence (AI) Clinician, which learns from data to predict patient dynamics given specific treatment decisions. Our agent extracted implicit knowledge from an amount of patient data that exceeds many-fold the lifetime experience of human clinicians and learned optimal treatment by having analysed myriads of (mostly suboptimal) treatment decisions. We demonstrate that the value of the AI Clinician’s selected treatment is on average reliably higher than that of human clinicians. In a large validation cohort independent from the training data, mortality was lowest in patients for whom clinicians’ actual doses matched the AI policy. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.

Journal article
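The underlying sequential decision problem can be made concrete with a toy value-iteration sketch over a discretised patient MDP. The three states, two "dose" actions, transition probabilities and rewards below are entirely invented for illustration and bear no relation to the AI Clinician's actual state space, action space or reward design.

```python
# Toy MDP: discretised patient states (0=stable, 1=unwell, 2=critical) and
# two treatment actions (0=low dose, 1=high dose).
# T[s][a] is a list of (probability, next_state, reward) outcomes.
T = {
    0: {0: [(1.0, 0, 1.0)], 1: [(1.0, 1, 0.0)]},
    1: {0: [(0.5, 0, 0.5), (0.5, 2, -1.0)],
        1: [(0.9, 0, 0.5), (0.1, 2, -1.0)]},
    2: {0: [(1.0, 2, -1.0)],
        1: [(0.6, 1, 0.0), (0.4, 2, -1.0)]},
}

def value_iteration(T, gamma=0.9, iters=200):
    """Standard value iteration; returns state values and a greedy policy."""
    def q(s, a, V):
        return sum(p * (r + gamma * V[ns]) for p, ns, r in T[s][a])
    V = {s: 0.0 for s in T}
    for _ in range(iters):
        V = {s: max(q(s, a, V) for a in T[s]) for s in T}
    policy = {s: max(T[s], key=lambda a: q(s, a, V)) for s in T}
    return V, policy

V, policy = value_iteration(T)
```

In this toy model the greedy policy escalates treatment in the unwell and critical states and de-escalates once the patient is stable, which is the kind of state-dependent dosing policy the abstract describes at far larger scale.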

Ortega San Miguel P, Colas C, Faisal A, 2018, Compact convolutional neural networks for multi-class, personalised, Closed-loop EEG-BCI, 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2018), Publisher: IEEE

For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1]. Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6%, where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN-based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor of 2 the number of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those shown at the Cybathlon’s BCI Race but further imp[…]

Conference paper

Woods B, Subramanian M, Shafti A, Faisal AA et al., 2018, Mechanomyography based closed-loop functional electrical stimulation cycling system, 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), Publisher: IEEE, Pages: 179-184, ISSN: 2155-1774

Functional Electrical Stimulation (FES) systems are successful in restoring motor function and supporting paralyzed users. Commercially available FES products are open loop, meaning that the system is unable to adapt to changing conditions with the user and their muscles which results in muscle fatigue and poor stimulation protocols. This is because it is difficult to close the loop between stimulation and monitoring of muscle contraction using adaptive stimulation. FES causes electrical artefacts which make it challenging to monitor muscle contractions with traditional methods such as electromyography (EMG). We look to overcome this limitation by combining FES with novel mechanomyographic (MMG) sensors to be able to monitor muscle activity during stimulation in real time. To provide a meaningful task we built an FES cycling rig with a software interface that enabled us to perform adaptive recording and stimulation, and then combine this with sensors to record forces applied to the pedals using force sensitive resistors (FSRs); crank angle position using a magnetic incremental encoder and inputs from the user using switches and a potentiometer. We illustrated this with a closed-loop stimulation algorithm that used the inputs from the sensors to control the output of a programmable RehaStim 1 FES stimulator (Hasomed) in real-time. This recumbent bicycle rig was used as a testing platform for FES cycling. The algorithm was designed to respond to a change in requested speed (RPM) from the user and change the stimulation power (% of maximum current mA) until this speed was achieved and then maintain it.

Conference paper

Cunningham J, Hapsari A, Guilleminot P, Shafti S, Faisal AA et al., 2018, The supernumerary robotic 3rd thumb for skilled music tasks, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE

Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting them in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span; thus leading to functional augmentation of their skills and the technical feasibility to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof-of-concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.

Conference paper

Ruiz Maymo M, Shafti S, Faisal AA, 2018, FastOrient: lightweight computer vision for wrist control in assistive robotic grasping, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE

Wearable and assistive robotics for human grasp support are broadly either tele-operated robotic arms or act through orthotic control of a paralyzed user’s hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control the many degrees of freedom, making effective or efficient control problematic. Here we are demonstrating the off-loading of low-level control of assistive robotics and active orthotics, through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting the images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation needed that optimises grasping is directly computed from the object’s orientation. The algorithm has been evaluated in 6 different scene backgrounds and end-effector approaches to 26 different objects. 94.8% of the objects were detected in all backgrounds. Grasping of the object was achieved in 91.1% of the cases and has been evaluated with a robot simulator confirming the performance of the algorithm.

Conference paper


Lin C-H, Faisal AA, 2018, Decomposing sensorimotor variability changes in ageing and their connection to falls in older people, Scientific Reports, Vol: 8, ISSN: 2045-2322

The relationship between sensorimotor variability and falls in older people has not been well investigated. We developed a novel task sharing the biomechanics of obstacle negotiation to quantify sensorimotor variability related to locomotion across age. We found that sensorimotor variability in foot placement increases continuously with age. We then applied sensory psychophysics to pinpoint the visual and somatosensory systems associated with sensorimotor variability. We showed that increased sensory variability, specifically increased proprioceptive variability, is the vital cause of more variable foot placement in older people (greater than 65 years). Notably, older participants relied more on vision to judge their own foot’s height compared to the young, suggesting a shift in multisensory integration strategy to compensate for degenerated proprioception. We further modelled the probability of tripping over based on the relationship between sensorimotor variability and age, and found a correspondence between model prediction and community-based data. We reveal increased sensorimotor variability, modulated by sensation precision, as a potentially vital mechanism of raised tripping and thus fall events in older people. Analysis of sensorimotor variability, and its specific components, may have utility for fall-risk and rehabilitation-target evaluation.

Journal article

Shafti A, Orlov P, Faisal AA, 2018, Gaze-based, context-aware robotic system for assisted reaching and grasping

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this into simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze to decode their intentions and implement low-level motion actions to achieve high-level tasks. This results in the user simply having to look at the objects of interest for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and a grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of $4.68\pm0.14$ cm. The full system is tested with 5 subjects, showing successful implementation of $100\%$ of reach-to-gaze-point actions, full implementation of pick-and-place tasks in $96\%$ of cases, and of pick-and-pour tasks in $76\%$ of cases. Finally, we present a discussion of our results and what future work is needed to improve the system.
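The grammars-based sequencing the abstract mentions can be sketched as rewrite rules that expand a gaze-triggered high-level task into primitive robot actions. The rule names below are illustrative assumptions, not the authors' actual grammar.

```python
# Hypothetical action grammar: non-terminal tasks rewrite to sequences of
# symbols; anything without a rule is a primitive action for the robot.
GRAMMAR = {
    "pick_and_place": ["pick", "move_to(target)", "release"],
    "pick_and_pour":  ["pick", "move_to(target)", "pour", "release"],
    "pick":           ["reach(gaze_point)", "orient_gripper", "grasp"],
}

def expand(task):
    """Recursively rewrite a task into its flat list of primitive actions."""
    if task not in GRAMMAR:      # terminal symbol: a low-level primitive
        return [task]
    steps = []
    for symbol in GRAMMAR[task]:
        steps.extend(expand(symbol))
    return steps

print(expand("pick_and_pour"))
# ['reach(gaze_point)', 'orient_gripper', 'grasp',
#  'move_to(target)', 'pour', 'release']
```

The appeal of the grammar formulation is that a single user intent (one fixation on an object) expands deterministically into the whole low-level sequence, which is what lets the user stay at the "look at the object" level of control.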

Working paper

Li M, Songur N, Orlov P, Leutenegger S, Faisal AA et al., 2018, Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos

Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained every-day tasks. This is especially important in ego-centric tasks, where obtaining 3-dimensional information is both limiting and challenging, with current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an ego-centric RGB-D video stream, with 3D gaze point estimation from head-mounted eye-tracking glasses. We augment existing work in Semantic Simultaneous Localization And Mapping (Semantic SLAM) with collected gaze vectors. Our system can then find and track objects both inside and outside the user's field of view in 3D, from multiple perspectives, with reasonable accuracy. We validate our concept by producing a semantic map from images of the NYUv2 dataset while simultaneously estimating gaze position and gaze classes from recorded gaze data of the dataset images.
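The gaze-augmentation step — deciding which mapped object a gaze vector lands on — can be sketched as a ray march through a semantic voxel map. The voxel representation, cell size, and function name below are illustrative assumptions; the paper builds its map with Semantic SLAM rather than this toy structure.

```python
import numpy as np

def fixated_label(origin, direction, voxel_labels, step=0.05, max_dist=5.0):
    """Semantic label of the first labelled voxel along a 3D gaze ray.

    origin, direction: gaze ray in map coordinates (metres).
    voxel_labels:      dict mapping integer voxel indices to class labels.
    """
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)                 # unit gaze direction
    p = np.asarray(origin, dtype=float)
    for _ in range(int(max_dist / step)):
        p = p + step * d                   # march along the ray
        key = tuple(np.floor(p / 0.1).astype(int))  # 10 cm voxel index
        if key in voxel_labels:
            return voxel_labels[key]
    return None                            # gaze hit nothing mapped

# A "cup" occupies one voxel roughly 1 m in front of the viewer.
voxels = {(10, 0, 0): "cup"}
print(fixated_label([0, 0, 0], [1, 0, 0], voxels))  # cup
```

Because the map is 3D, the same lookup works for objects currently outside the camera's field of view, which is the property the abstract highlights.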

Working paper

Subramanian M, Shafti A, Faisal A, Mechanomyography-based closed-loop Functional Electrical Stimulation cycling system, BioRob 2018 - IEEE International Conference on Biomedical Robotics and Biomechatronics

Conference paper

Auepanwiriyakul C, Harston A, Orlov P, Shafti A, Faisal AA et al., 2018, Semantic Fovea: Real-time annotation of ego-centric videos with gaze context, ACM Symposium on Eye Tracking Research and Applications (ETRA), Publisher: Association for Computing Machinery

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
