Imperial College London

Dr A. Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Reader in Neurotechnology

Contact

+44 (0)20 7594 6373 | a.faisal | Website

Assistant

Miss Teresa Ng, +44 (0)20 7594 8300

Location

4.08, Royal School of Mines, South Kensington Campus

Publications


159 results found

Deisenroth MP, Faisal AA, Ong CS, 2020, Mathematics for Machine Learning, Publisher: Cambridge University Press, ISBN: 9781108455145

Book

Faisal A, Hermano K, Antonio P, 2019, Proceedings of the 3rd International Congress on Neurotechnology, Electronics and Informatics, Setúbal, Publisher: Scitepress, ISBN: 978-989-758-161-8

Book

Hermano K, Pedotti A, Faisal A, 2019, Proceedings of the 4th International Congress on Neurotechnology, Electronics and Informatics 2016, ISBN: 978-989-758-204-2

Book

Beyret B, Shafti SA, Faisal A, Dot-to-dot: explainable hierarchical reinforcement learning for robotic manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1-6, ISSN: 2153-0866

Robotic systems are ever more capable of automation and fulfilment of complex tasks, particularly with reliance on recent advances in intelligent systems, deep learning and artificial intelligence in general. However, as robots and humans come closer together in their interactions, the matter of interpretability, or explainability of robot decision-making processes for the human, grows in importance. A successful interaction and collaboration would only be possible through mutual understanding of underlying representations of the environment and the task at hand. This is currently a challenge in deep learning systems. We present a hierarchical deep reinforcement learning system, consisting of a low-level agent handling the large actions/states space of a robotic system efficiently, by following the directives of a high-level agent which is learning the high-level dynamics of the environment and task. This high-level agent forms a representation of the world and task at hand that is interpretable for a human operator. The method, which we call Dot-to-Dot, is tested on a MuJoCo-based model of the Fetch Robotics Manipulator, as well as a Shadow Hand, to test its performance. Results show efficient learning of complex actions/states spaces by the low-level agent, and an interpretable representation of the task and decision-making process learned by the high-level agent.

Conference paper
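
To make the hierarchy concrete, here is a minimal Python sketch of the two-level control loop the abstract describes: a high-level policy proposes interpretable sub-goals ("dots") and a low-level policy handles the dense actions needed to reach them. The waypoint rule, toy dynamics, and all constants are illustrative assumptions, not the paper's MuJoCo setup.

```python
import numpy as np

def high_level_policy(state, goal):
    """Propose an interpretable sub-goal: a waypoint part-way to the goal."""
    return state + 0.25 * (goal - state)

def low_level_policy(state, subgoal):
    """Greedy low-level action: a unit step towards the current sub-goal."""
    direction = subgoal - state
    return direction / (np.linalg.norm(direction) + 1e-8)

state, goal = np.zeros(3), np.array([1.0, 0.5, 0.2])
for step in range(40):
    subgoal = high_level_policy(state, goal)   # human-readable waypoint
    action = low_level_policy(state, subgoal)  # dense low-level control
    state = state + 0.05 * action              # toy dynamics
    if np.linalg.norm(goal - state) < 0.05:
        break
print(f"reached goal region in {step + 1} steps")
```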

Khwaja M, Vaid SS, Zannone S, Harari GM, Faisal A, Matic A et al., Modeling personality vs. modeling personalidad: In-the-wild mobile data analysis in five countries suggests cultural impact on personality models, Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, ISSN: 2474-9567

Sensor data collected from smartphones provides the possibility to passively infer a user's personality traits. Such models can be used to enable technology personalization, while contributing to our substantive understanding of how human behavior manifests in daily life. A significant challenge in personality modeling involves improving the accuracy of personality inferences; however, research has yet to assess and consider the cultural impact of users' country of residence on model replicability. We collected mobile sensing data and self-reported Big Five traits from 166 participants (54 women and 112 men) recruited in five different countries (UK, Spain, Colombia, Peru, and Chile) for 3 weeks. We developed machine learning based personality models using culturally diverse datasets - representing different countries - and we show that such models can achieve state-of-the-art accuracy when tested in new countries, ranging from 63% (Agreeableness) to 71% (Extraversion) classification accuracy. Our results indicate that using country-specific datasets can improve the classification accuracy by between 3% and 7% for Extraversion, Agreeableness, and Conscientiousness. We show that these findings hold regardless of gender and age balance in the dataset. Interestingly, using gender- or age-balanced datasets as well as gender-separated datasets improves trait prediction by up to 17%. We unpack differences in personality models across the five countries, highlight the most predictive data categories (location, noise, unlocks, accelerometer), and provide takeaways to technologists and social scientists interested in passive personality assessment.

Journal article
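
As a rough illustration of the kind of pipeline such studies use, the sketch below trains a binary high/low trait classifier on synthetic "mobile sensing" features with cross-validation. The feature set, model choice (random forest), and data are assumptions; only the sample size echoes the study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 166                                  # participants, as in the study
X = rng.normal(size=(n, 4))              # stand-ins: location, noise, unlocks, accelerometer
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0).astype(int)  # synthetic high/low trait label

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```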

Khwaja M, Ferrer M, Jesus I, Faisal A, Matic A et al., Aligning daily activities with personality: towards a recommender system for improving wellbeing, ACM Conference on Recommender Systems (RecSys), Publisher: ACM

Recommender Systems have not been explored to a great extent for improving health and subjective wellbeing. Recent advances in mobile technologies and user modelling present the opportunity for delivering such systems; however, the key issue is understanding the drivers of subjective wellbeing at an individual level. In this paper we propose a novel approach for deriving personalized activity recommendations to improve subjective wellbeing by maximizing the congruence between activities and personality traits. To evaluate the model, we leveraged a rich dataset collected in a smartphone study, which contains three weeks of daily activity probes, the Big-Five personality questionnaire and subjective wellbeing surveys. We show that the model correctly infers a range of activities that are 'good' or 'bad' (i.e. that are positively or negatively related to subjective wellbeing) for a given user and that the derived recommendations greatly match outcomes in the real world.

Conference paper
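
The core congruence idea lends itself to a compact sketch: score each candidate activity by the similarity between the user's Big-Five vector and a per-activity trait profile, then recommend the best match. The profiles and cosine-similarity scoring below are illustrative assumptions, not the paper's fitted model.

```python
import numpy as np

user = np.array([0.7, 0.4, 0.9, 0.5, 0.2])  # hypothetical O, C, E, A, N scores

activities = {                               # hypothetical per-activity trait profiles
    "group sports": np.array([0.5, 0.4, 0.9, 0.6, 0.1]),
    "reading":      np.array([0.9, 0.5, 0.1, 0.4, 0.3]),
    "volunteering": np.array([0.5, 0.6, 0.6, 0.9, 0.2]),
}

def congruence(u, a):
    """Cosine similarity between user traits and an activity profile."""
    return float(u @ a / (np.linalg.norm(u) * np.linalg.norm(a)))

ranked = sorted(activities, key=lambda k: congruence(user, activities[k]), reverse=True)
print("recommend first:", ranked[0])
```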

Subramanian M, Songur N, Adjei D, Orlov P, Faisal A et al., A.Eye Drive: gaze-based semi-autonomous wheelchair interface, 41st International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC 2019), Publisher: IEEE

Existing wheelchair control interfaces, such as sip & puff or screen-based gaze-controlled cursors, are challenging for the severely disabled to navigate safely and independently, as users continuously need to interact with an interface during navigation. This puts a significant cognitive load on users and prevents them from interacting with the environment in other forms during navigation. We have combined eye-tracking/gaze-contingent intention decoding with computer-vision context-aware algorithms and autonomous navigation drawn from self-driving vehicles to allow paralysed users to drive by eye, simply by decoding natural gaze about where the user wants to go: A.Eye Drive. Our "Zero UI" driving platform allows users to look at and interact visually with an object or destination of interest in their visual scene, and the wheelchair autonomously takes the user to the intended destination, while continuously updating the computed path for static and dynamic obstacles. This intention decoding technology empowers the end-user by promising more independence through their own agency.

Conference paper

Shafti SA, Orlov P, Faisal A, Gaze-based, Context-aware Robotic System for Assisted Reaching and Grasping, International Conference on Robotics and Automation 2019, Publisher: IEEE, ISSN: 2152-4092

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this to simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision-making and actuating modalities, to create intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze, to decode their intentions and implement lower-level motion actions and achieve higher-level tasks. This results in the user simply having to look at the objects of interest, for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and an action grammars-based implementation of sequences of action through the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, full implementation of pick-and-place tasks in 96%, and pick-and-pour tasks in 76% of cases. Finally, we present a discussion of our results and what future work is needed to improve the system.

Conference paper
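
The action-grammar component can be illustrated with a small rewriting system: a high-level task symbol expands recursively into a sequence of robot primitives. The rules below are hypothetical stand-ins for the grammar used in the paper.

```python
# Hypothetical grammar rules: task symbols rewrite into robot primitives.
GRAMMAR = {
    "pick_and_pour": ["pick", "pour"],
    "pick":          ["reach(gaze_point)", "close_gripper", "lift"],
    "pour":          ["move_over(target)", "tilt", "untilt"],
}

def expand(symbol):
    """Recursively expand a task symbol into its primitive action sequence."""
    if symbol not in GRAMMAR:
        return [symbol]          # already a primitive
    return [p for s in GRAMMAR[symbol] for p in expand(s)]

print(expand("pick_and_pour"))
# ['reach(gaze_point)', 'close_gripper', 'lift', 'move_over(target)', 'tilt', 'untilt']
```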

Haar S, van Assel CM, Faisal AA, 2019, Neurobehavioural signatures of learning that emerge in a real-world motor skill task

The behavioral and neural processes of real-world motor learning remain largely unknown. We demonstrate the feasibility of real-world neuroscience, using wearables for naturalistic full-body motion tracking and mobile brain imaging, to study motor learning in billiards. We highlight the similarities between motor learning in-the-wild and classic toy-tasks in well-known features, such as multiple learning rates, and the relationship between task-related variability and motor learning. However, we found that real-world motor learning affects the whole body, changing motor control from head to toe. Moreover, with a data-driven approach based on the relationship between variability and learning, we found arm supination to be the task-relevant joint angle. Our EEG recordings highlight groups of subjects with opposing dynamics of post-movement Beta rebound (PMBR), not resolved before in toy-tasks. The first group increased PMBR over learning while the second decreased. These opposite trends were previously reported in error-based learning and skill learning tasks, respectively. Behaviorally, the PMBR decreasers better controlled task-relevant variability dynamically, leading to lower variability and smaller errors in the learning plateau. We speculate that these PMBR dynamics emerge because subjects must combine multi-modal mechanisms of learning in new ways when faced with the complexity of the real world.

Journal article

Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal AAet al., 2019, Understanding the artificial intelligence clinician and optimal treatment strategies for sepsis in intensive care

In this document, we explore in more detail our published work (Komorowski, Celi, Badawi, Gordon, & Faisal, 2018) for the benefit of the AI in Healthcare research community. In the above paper, we developed the AI Clinician system, which demonstrated how reinforcement learning could be used to make useful recommendations towards optimal treatment decisions from intensive care data. Since publication a number of authors have reviewed our work (e.g. Abbasi, 2018; Bos, Azoulay, & Martin-Loeches, 2019; Saria, 2018). Given the difference of our framework from previous work, the fact that we are bridging two very different academic communities (intensive care and machine learning) and that our work has impact on a number of other areas with more traditional computer-based approaches (biosignal processing and control, biomedical engineering), we are providing here additional details on our recent publication.

Working paper

Gottesman O, Johansson F, Komorowski M, Faisal A, Sontag D, Doshi-Velez F, Celi LA et al., 2019, Guidelines for reinforcement learning in healthcare, Nature Medicine, Vol: 25, Pages: 16-18, ISSN: 1078-8956

In this Comment, we provide guidelines for reinforcement learning for decisions about patient treatment that we hope will accelerate the rate at which observational cohorts can inform healthcare practice in a safe, risk-conscious manner.

Journal article

Peng X, Ding Y, Wihl D, Gottesman O, Komorowski M, Lehman L-WH, Ross A, Faisal A, Doshi-Velez F et al., 2018, Improving Sepsis Treatment Strategies by Combining Deep and Kernel-Based Reinforcement Learning, AMIA 2018 Annual Symposium, Pages: 887-896

Sepsis is the leading cause of mortality in the ICU. It is challenging to manage because individual patients respond differently to treatment. Thus, tailoring treatment to the individual patient is essential for the best outcomes. In this paper, we take steps toward this goal by applying a mixture-of-experts framework to personalize sepsis treatment. The mixture model selectively alternates between neighbor-based (kernel) and deep reinforcement learning (DRL) experts depending on the patient's current history. On a large retrospective cohort, this mixture-based approach outperforms physician, kernel-only, and DRL-only experts.

Conference paper
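
A minimal sketch of the switching rule behind such a mixture-of-experts policy: defer to the neighbour-based (kernel) expert where the current patient state is well supported by logged data, and to the DRL expert elsewhere. Both experts and the distance threshold tau are placeholder assumptions.

```python
import numpy as np

def kernel_expert(state, bank_states, bank_actions):
    """Copy the action of the nearest logged neighbour."""
    i = np.argmin(np.linalg.norm(bank_states - state, axis=1))
    return bank_actions[i]

def drl_expert(state):
    """Placeholder for a trained deep RL policy network."""
    return int(state.sum() > 0)

def mixture_policy(state, bank_states, bank_actions, tau=0.5):
    d = np.min(np.linalg.norm(bank_states - state, axis=1))
    if d < tau:                                  # well supported by data
        return kernel_expert(state, bank_states, bank_actions)
    return drl_expert(state)                     # generalise elsewhere

rng = np.random.default_rng(2)
bank_s, bank_a = rng.normal(size=(100, 5)), rng.integers(0, 2, 100)
print(mixture_policy(rng.normal(size=5), bank_s, bank_a))
```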

Liu Y, Gottesman O, Raghu A, Komorowski M, Faisal AA, Doshi-Velez F, Brunskill E et al., 2018, Representation Balancing MDPs for Off-Policy Policy Evaluation, Thirty-second Annual Conference on Neural Information Processing Systems (NIPS)

We study the problem of off-policy policy evaluation (OPPE) in RL. In contrast to prior work, we consider how to estimate both the individual policy value and average policy value accurately. We draw inspiration from recent work in causal reasoning, and propose a new finite sample generalization error bound for value estimates from MDP models. Using this upper bound as an objective, we develop a learning algorithm of an MDP model with a balanced representation, and show that our approach can yield substantially lower MSE in a common synthetic domain and on a challenging real-world sepsis management problem.

Conference paper
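
The model-based flavour of OPPE can be sketched as follows: fit a transition model from logged behaviour data, then evaluate the target policy inside the learned model. This toy omits the paper's representation-balancing objective and error bound; it only shows the baseline style of estimator under stated assumptions (known rewards, uniform behaviour policy).

```python
import numpy as np

rng = np.random.default_rng(4)
nS, nA, gamma = 5, 2, 0.9

# "True" MDP, used only to generate logged behaviour data.
P_true = rng.dirichlet(np.ones(nS), size=(nS, nA))
R_true = rng.normal(size=(nS, nA))   # rewards assumed known, for brevity

# Count-based model fit from transitions logged under a uniform behaviour policy.
counts = np.zeros((nS, nA, nS))
s = 0
for _ in range(20000):
    a = int(rng.integers(nA))
    s2 = int(rng.choice(nS, p=P_true[s, a]))
    counts[s, a, s2] += 1
    s = s2
P_hat = (counts + 1e-6) / (counts + 1e-6).sum(axis=2, keepdims=True)

# Evaluate a fixed target policy (always action 0) inside the learned model.
V = np.zeros(nS)
for _ in range(300):
    V = R_true[:, 0] + gamma * P_hat[:, 0] @ V
print("estimated per-state value of target policy:", np.round(V, 2))
```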

Parbhoo S, Gottesman O, Ross AS, Komorowski M, Faisal A, Bon I, Roth V, Doshi-Velez F et al., 2018, Improving counterfactual reasoning with kernelised dynamic mixing models, PLoS ONE, Vol: 13, ISSN: 1932-6203

Simulation-based approaches to disease progression allow us to make counterfactual predictions about the effects of an untried series of treatment choices. However, building accurate simulators of disease progression is challenging, limiting the utility of these approaches for real world treatment planning. In this work, we present a novel simulation-based reinforcement learning approach that mixes between models and kernel-based approaches to make its forward predictions. On two real world tasks, managing sepsis and treating HIV, we demonstrate that our approach both learns state-of-the-art treatment policies and can make accurate forward predictions about the effects of treatments on unseen patients.

Journal article

Komorowski M, Celi LA, Badawi O, Gordon AC, Faisal A et al., 2018, The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care, Nature Medicine, Vol: 24, Pages: 1716-1720, ISSN: 1078-8956

Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals [1-3], but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients [1,4-6]. To tackle this sequential decision-making problem, we developed a reinforcement learning agent, the artificial intelligence (AI) Clinician, which learns from data to predict patient dynamics given specific treatment decisions. Our agent extracted implicit knowledge from an amount of patient data that exceeds many-fold the life-time experience of human clinicians and learned optimal treatment by having analysed myriads of (mostly sub-optimal) treatment decisions. We demonstrate that the value of the AI Clinician's selected treatment is on average reliably higher than the human clinicians. In a large validation cohort independent from the training data, mortality was lowest in patients where clinicians' actual doses matched the AI policy. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.

Journal article
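
At its core, this kind of agent reduces to solving an MDP estimated from retrospective cohorts. The sketch below runs tabular Q-value iteration on a synthetic transition/reward model over discretised patient states and a small grid of treatment actions; all sizes and numbers are toy stand-ins for the MIMIC-scale models in the paper.

```python
import numpy as np

n_states, n_actions, gamma = 10, 4, 0.99   # e.g. actions = bins of (fluid, vasopressor) dose
rng = np.random.default_rng(3)

# Transition and reward models, standing in for ones estimated from cohort data.
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))
R = rng.normal(size=(n_states, n_actions))  # e.g. survival-linked reward

Q = np.zeros((n_states, n_actions))
for _ in range(500):                         # Q-value iteration to convergence
    Q = R + gamma * P @ Q.max(axis=1)

policy = Q.argmax(axis=1)                    # recommended dose bin per patient state
print("recommended action per state:", policy)
```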

Ortega San Miguel P, Colas C, Faisal A, 2018, Compact convolutional neural networks for multi-class, personalised, closed-loop EEG-BCI, 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2018), Publisher: IEEE

For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment [1]. Natural tasks often require different kinds of interactions, involving different controllers that the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6%, where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN-based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor of 2 the number of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those shown at the Cybathlon's BCI Race but further imp

Conference paper
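
A compact one-convolutional-layer EEG classifier in the spirit of SmallNet might look like the PyTorch sketch below. Channel count, window length, filter sizes, and pooling are assumptions; the paper's exact architecture may differ.

```python
import torch
import torch.nn as nn

class SmallNet(nn.Module):
    """One conv layer + one linear layer, classifying 4 control modes."""
    def __init__(self, n_channels=16, n_samples=250, n_classes=4):
        super().__init__()
        self.conv = nn.Conv2d(1, 8, kernel_size=(n_channels, 10))  # spatio-temporal filters
        self.pool = nn.AvgPool2d((1, 4))
        self.fc = nn.Linear(8 * ((n_samples - 10 + 1) // 4), n_classes)

    def forward(self, x):                    # x: (batch, 1, channels, samples)
        x = torch.relu(self.conv(x))
        x = self.pool(x).flatten(1)
        return self.fc(x)                    # logits for the 4 control modes

net = SmallNet()
print(net(torch.randn(2, 1, 16, 250)).shape)  # torch.Size([2, 4])
```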

Woods B, Subramanian M, Shafti A, Faisal AA et al., 2018, Mechanomyography based closed-loop functional electrical stimulation cycling system, 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (BioRob), Publisher: IEEE, Pages: 179-184, ISSN: 2155-1774

Functional Electrical Stimulation (FES) systems are successful in restoring motor function and supporting paralyzed users. Commercially available FES products are open loop, meaning that the system is unable to adapt to changing conditions in the user and their muscles, which results in muscle fatigue and poor stimulation protocols. This is because it is difficult to close the loop between stimulation and monitoring of muscle contraction using adaptive stimulation. FES causes electrical artefacts which make it challenging to monitor muscle contractions with traditional methods such as electromyography (EMG). We look to overcome this limitation by combining FES with novel mechanomyographic (MMG) sensors to be able to monitor muscle activity during stimulation in real time. To provide a meaningful task we built an FES cycling rig with a software interface that enabled us to perform adaptive recording and stimulation, and then combined this with sensors to record forces applied to the pedals using force-sensitive resistors (FSRs), crank angle position using a magnetic incremental encoder, and inputs from the user using switches and a potentiometer. We illustrated this with a closed-loop stimulation algorithm that used the inputs from the sensors to control the output of a programmable RehaStim 1 FES stimulator (Hasomed) in real time. This recumbent bicycle rig was used as a testing platform for FES cycling. The algorithm was designed to respond to a change in requested speed (RPM) from the user and change the stimulation power (% of maximum current in mA) until this speed was achieved, and then maintain it.

Conference paper
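
The closed-loop principle can be sketched with a simple proportional controller that nudges stimulation current towards a requested cadence. The gain, current limits, and toy muscle/bike dynamics below are illustrative, not the RehaStim protocol used in the paper.

```python
def fes_control_step(rpm_measured, rpm_target, current_mA, kp=0.8,
                     i_min=0.0, i_max=60.0):
    """One update: raise stimulation current if too slow, lower if too fast."""
    error = rpm_target - rpm_measured
    return min(max(current_mA + kp * error, i_min), i_max)

rpm, current = 0.0, 10.0
for _ in range(100):
    current = fes_control_step(rpm, rpm_target=40.0, current_mA=current)
    rpm += 0.1 * (0.8 * current - rpm)   # toy muscle/bike response, not real dynamics
print(f"cadence {rpm:.1f} RPM at {current:.1f} mA")
```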

Cunningham J, Hapsari A, Guilleminot P, Shafti A, Faisal AA et al., 2018, The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks

Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting users in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span, thus leading to functional augmentation of their skills and the technical feasibility to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof-of-concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.

Working paper

Maymó MR, Shafti A, Faisal AA, 2018, Fast Orient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping

Wearable and assistive robotics for human grasp support are broadly either tele-operated robotic arms or act through orthotic control of a paralyzed user's hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control the many degrees of freedom, making effective or efficient control problematic. Here we demonstrate the off-loading of low-level control of assistive robotics and active orthotics, through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting the images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation that optimises grasping is computed directly from the object's orientation. The algorithm has been evaluated in 6 different scene backgrounds and end-effector approaches to 26 different objects. 94.8% of the objects were detected in all backgrounds. Grasping of the object was achieved in 91.1% of the cases and has been evaluated with a robot simulator, confirming the performance of the algorithm.

Working paper
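
The orientation step maps naturally onto standard OpenCV primitives: segment the object, take the largest contour, and read the grasp angle from its minimum-area rectangle. Simple thresholding of a synthetic image stands in for the paper's segmentation pipeline.

```python
import cv2
import numpy as np

# Synthetic wrist-camera view: a bright elongated object on a dark background.
img = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(img, (100, 100), (60, 20), 30, 0, 360, 255, -1)

_, mask = cv2.threshold(img, 127, 255, cv2.THRESH_BINARY)
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
rect = cv2.minAreaRect(max(contours, key=cv2.contourArea))
(w, h), angle = rect[1], rect[2]
if w < h:                     # normalise so the angle follows the object's long axis
    angle += 90
print(f"rotate wrist by {angle:.1f} degrees to align the grasp")
```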

Lin C-H, Faisal AA, 2018, Decomposing sensorimotor variability changes in ageing and their connection to falls in older people, Scientific Reports, Vol: 8, ISSN: 2045-2322

The relationship between sensorimotor variability and falls in older people has not been well investigated. We developed a novel task sharing the biomechanics of obstacle negotiation to quantify sensorimotor variability related to locomotion across age. We found that sensorimotor variability in foot placement increases continuously with age. We then applied sensory psychophysics to pinpoint the visual and somatosensory systems associated with sensorimotor variability. We showed that increased sensory variability, specifically increased proprioceptive variability, is the vital cause of more variable foot placement in older people (greater than 65 years). Notably, older participants relied more on vision to judge their own foot's height compared to the young, suggesting a shift in multisensory integration strategy to compensate for degenerated proprioception. We further modelled the probability of tripping over based on the relationship between sensorimotor variability and age and found a correspondence between model prediction and community-based data. We reveal increased sensorimotor variability, modulated by sensation precision, as a potentially vital mechanism of raised tripping-over and thus fall events in older people. Analysis of sensorimotor variability and its specific components may have utility for evaluating fall risk and rehabilitation targets.

Journal article
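
The tripping-over model can be illustrated with a one-line probabilistic calculation: if foot clearance over an obstacle is Gaussian around an intended height, the probability of tripping is the mass below the obstacle edge, which grows with sensorimotor variability. The sigma-versus-age values below are illustrative, not the paper's fitted estimates.

```python
from scipy.stats import norm

def p_trip(clearance_cm, sigma_cm):
    """P(realised foot height falls below the obstacle edge)."""
    return norm.cdf(0.0, loc=clearance_cm, scale=sigma_cm)

# Illustrative variability values only; the paper fits these from data.
for age, sigma in [(25, 0.8), (55, 1.2), (75, 1.8)]:
    print(f"age {age}: P(trip) = {p_trip(2.0, sigma):.3f}")
```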

Shafti A, Orlov P, Faisal AA, 2018, Gaze-based, context-aware robotic system for assisted reaching and grasping

Assistive robotic systems endeavour to support those with movement disabilities, enabling them to move again and regain functionality. The main issue with these systems is the complexity of their low-level control, and how to translate this to simpler, higher-level commands that are easy and intuitive for a human user to interact with. We have created a multi-modal system, consisting of different sensing, decision making and actuating modalities, leading to intuitive, human-in-the-loop assistive robotics. The system takes its cue from the user's gaze, to decode their intentions and implement low-level motion actions to achieve high-level tasks. This results in the user simply having to look at the objects of interest, for the robotic system to assist them in reaching for those objects, grasping them, and using them to interact with other objects. We present our method for 3D gaze estimation, and grammars-based implementation of sequences of action with the robotic system. The 3D gaze estimation is evaluated with 8 subjects, showing an overall accuracy of 4.68 ± 0.14 cm. The full system is tested with 5 subjects, showing successful implementation of 100% of reach-to-gaze-point actions, full implementation of pick-and-place tasks in 96%, and pick-and-pour tasks in 76% of cases. Finally, we present a discussion of our results and what future work is needed to improve the system.

Working paper

Li M, Songur N, Orlov P, Leutenegger S, Faisal AA et al., 2018, Towards an embodied semantic fovea: Semantic 3D scene reconstruction from ego-centric eye-tracker videos

Incorporating the physical environment is essential for a complete understanding of human behavior in unconstrained every-day tasks. This is especially important in ego-centric tasks where obtaining 3-dimensional information is both limiting and challenging, with the current 2D video analysis methods proving insufficient. Here we demonstrate a proof-of-concept system which provides real-time 3D mapping and semantic labeling of the local environment from an ego-centric RGB-D video-stream, with 3D gaze point estimation from head-mounted eye tracking glasses. We augment existing work in Semantic Simultaneous Localization And Mapping (Semantic SLAM) with collected gaze vectors. Our system can then find and track objects both inside and outside the user field-of-view in 3D from multiple perspectives with reasonable accuracy. We validate our concept by producing a semantic map from images of the NYUv2 dataset while simultaneously estimating gaze position and gaze classes from recorded gaze data of the dataset images.

Working paper

Subramanian M, Shafti A, Faisal A, Mechanomyography based closed-loop Functional Electrical Stimulation cycling system, BioRob 2018 - IEEE International Conference on Biomedical Robotics and Biomechatronics

Conference paper

Auepanwiriyakul C, Harston A, Orlov P, Shafti A, Faisal AA et al., 2018, Semantic Fovea: Real-time annotation of ego-centric videos with gaze context, ACM Symposium on Eye Tracking Research and Applications (ETRA), Publisher: ACM

Conference paper

Orlov P, Shafti A, Auepanwiriyakul C, Songur N, Faisal AA et al., 2018, A Gaze-Contingent Intention Decoding Engine for human augmentation, ACM Symposium on Eye Tracking Research and Applications (ETRA), Publisher: ACM

Conference paper

Li L, Komorowski M, Faisal AA, 2018, The actor search tree critic (ASTC) for off-policy POMDP learning in medical decision making

Off-policy reinforcement learning enables near-optimal policies from suboptimal experience, thereby providing opportunities for artificial intelligence applications in healthcare. Previous works have mainly framed patient-clinician interactions as Markov decision processes, while true physiological states are not necessarily fully observable from clinical data. We capture this situation with a partially observable Markov decision process, in which an agent optimises its actions in a belief represented as a distribution of patient states inferred from individual history trajectories. A Gaussian mixture model is fitted for the observed data. Moreover, we take into account the fact that nuance in pharmaceutical dosage could presumably result in significantly different effects, by modelling a continuous policy through a Gaussian approximator directly in the policy space, i.e. the actor. To address the challenge of an infinite number of possible belief states, which renders exact value iteration intractable, we evaluate and plan for only every encountered belief, through a heuristic search tree that tightly maintains lower and upper bounds of the true value of a belief. We further resort to function approximations to update the value bound estimates, i.e. the critic, so that the tree search can be improved through more compact bounds at the fringe nodes that will be back-propagated to the root. Both actor and critic parameters are learned via gradient-based approaches. Our proposed policy, trained from real intensive care unit data, is capable of dictating dosing of vasopressors and intravenous fluids for sepsis patients that leads to the best patient outcomes.

Working paper
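
The belief-state machinery can be illustrated with a minimal Bayesian filter: maintain a distribution over hidden patient states and update it after each observation. Discrete states with Gaussian observation likelihoods stand in for the paper's Gaussian-mixture model and search tree.

```python
import numpy as np
from scipy.stats import norm

means = np.array([0.0, 2.0, 4.0])   # per-hidden-state observation means (assumed)
belief = np.ones(3) / 3             # uniform prior over hidden patient states

def update_belief(belief, obs, sigma=1.0):
    """Bayes rule: reweight the belief by each state's observation likelihood."""
    posterior = belief * norm.pdf(obs, loc=means, scale=sigma)
    return posterior / posterior.sum()

for obs in [1.8, 2.2, 2.1]:
    belief = update_belief(belief, obs)
print("posterior over hidden states:", np.round(belief, 3))
```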

Ruiz Maymo M, Shafti S, Faisal AA, FastOrient: lightweight computer vision for wrist control in assistive robotic grasping, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE

Wearable and assistive robotics for human grasp support are broadly either tele-operated robotic arms or act through orthotic control of a paralyzed user's hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control the many degrees of freedom, making effective or efficient control problematic. Here we demonstrate the off-loading of low-level control of assistive robotics and active orthotics, through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting the images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation that optimises grasping is computed directly from the object's orientation. The algorithm has been evaluated in 6 different scene backgrounds and end-effector approaches to 26 different objects. 94.8% of the objects were detected in all backgrounds. Grasping of the object was achieved in 91.1% of the cases and has been evaluated with a robot simulator, confirming the performance of the algorithm.

Conference paper

Cunningham J, Hapsari A, Guilleminot P, Shafti S, Faisal AA et al., The supernumerary robotic 3rd thumb for skilled music tasks, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE

Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting users in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span, thus leading to functional augmentation of their skills and the technical feasibility to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof-of-concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11-finger play within a few hours.

Conference paper

Lluch Hernandez A, Hernando Melia C, 2018, Foreword, Future Oncology, Vol: 14, ISSN: 1479-6694

Journal article

Ortega P, Colas C, Faisal A, 2018, Convolutional neural network, personalised, closed-loop Brain-Computer Interfaces for multi-way control mode switching in real-time, Publisher: Cold Spring Harbor Laboratory

Exoskeletons and robotic devices are for many motor-disabled people the only way to interact with their environment. Our lab previously developed a gaze-guided assistive robotic system for grasping. It is well known that the same natural task can require different interactions described by different dynamical systems, which would require different robotic controllers and their selection by the user in a self-paced way. Therefore, we investigated different ways to achieve transitions between multiple states, finding that eye blinks were the most reliable to transition from 'off' to 'control' modes (binary classification) compared to voice and electromyography. In this paper we expanded on this work by investigating brain signals as sources for control mode switching. We developed a Brain-Computer Interface (BCI) that allows users to switch between four control modes in a self-paced way in real time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (ConvNets), known for their capability to find the optimal features for a classification task, which we hypothesised would add flexibility to the system in terms of which mental activities the user could perform to control it. We tested our system using the Cybathlon BrainRunners computer game, which represents all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), composed of a convolutional layer, a fully connected layer and a sigmoid classification layer, is able to classify 4 mental activities that the user chose to perform. For the user's preferred mental activities, we ran and validated the system online and retrained the system using online-collected EEG data. We achieved 47.6% accuracy in online operation in the 4-way classification task. In part

Working paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
