Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 · a.faisal

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



 

Publications

233 results found

Dziemian S, Abbott WW, Aldo Faisal A, 2016, Gaze-based teleprosthetic enables intuitive continuous control of complex robot arm use: Writing & drawing, 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Publisher: IEEE, Pages: 1277-1282, ISSN: 2155-1774

Eye tracking is a powerful means of assistive technology for people with movement disorders, paralysis and amputations. We present a highly intuitive eye-tracking-controlled robot arm operating in 3-dimensional space based on the user's gaze target point that enables tele-writing and drawing. Usability and intuitiveness were assessed in a "tele-writing" experiment with 8 subjects who learned to operate the system within minutes of first use. These subjects were naive to the system and the task and had to write three letters on a whiteboard with a whiteboard pen attached to the robot arm's endpoint. They were instructed to imagine they were writing text with the pen and to look where the pen should go, writing the letters as fast and as accurately as possible given a letter-size template. Subjects were able to perform the task with facility and accuracy, and movements of the arm did not interfere with their ability to control their visual attention, enabling smooth writing. Across five consecutive trials there was a significant decrease in the total time used and the total number of commands sent to move the robot arm from the first to the second trial, but no further improvement thereafter, suggesting that within writing 6 letters subjects had mastered control of the system. Our work demonstrates that eye tracking is a powerful means to control robot arms in closed loop and real time, outperforming other invasive and non-invasive approaches to Brain-Machine Interfaces in terms of calibration time (<2 minutes), training time (<10 minutes) and interface technology costs. We suggest that gaze-based decoding of action intention may well become one of the most efficient ways to interface with robotic actuators - i.e. Brain-Robot Interfaces - and become useful beyond paralysed and amputee users for the general teleoperation of robots and exoskeletons in human augmentation.

Conference paper

Behbahani FMP, Singla–Buxarrais G, Faisal AA, 2016, Haptic SLAM: An Ideal Observer Model for Bayesian Inference of Object Shape and Hand Pose from Contact Dynamics, Eurohaptics 2016, Publisher: Springer International Publishing, Pages: 146-157, ISSN: 0302-9743

Dynamic tactile exploration enables humans to seamlessly estimate the shape of objects and distinguish them from one another in the complete absence of visual information. Such a blind tactile exploration allows integrating information of the hand pose and contacts on the skin to form a coherent representation of the object shape. A principled way to understand the underlying neural computations of human haptic perception is through normative modelling. We propose a Bayesian perceptual model for recursive integration of noisy proprioceptive hand pose with noisy skin–object contacts. The model simultaneously forms an optimal estimate of the true hand pose and a representation of the explored shape in an object–centred coordinate system. A classification algorithm can, thus, be applied in order to distinguish among different objects solely based on the similarity of their representations. This enables the comparison, in real–time, of the shape of an object identified by human subjects with the shape of the same object predicted by our model using motion capture data. Therefore, our work provides a framework for a principled study of human haptic exploration of complex objects.

Conference paper
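The recursive Bayesian integration described in the abstract above can be illustrated with a minimal sketch: a 1-D Kalman-style update fusing repeated noisy "proprioceptive" observations of a static pose. All data and parameters here are synthetic stand-ins, not the authors' model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Minimal 1-D analogue of recursive Bayesian pose estimation: a static hand
# pose is observed through noisy proprioception; each contact event triggers
# a precision-weighted (Kalman-style) posterior update.
true_pose = 0.3
obs_var = 0.05 ** 2                       # proprioceptive noise variance

mu, var = 0.0, 1.0                        # broad Gaussian prior over pose
history = []
for _ in range(50):                       # 50 simulated contact events
    z = true_pose + rng.normal(scale=np.sqrt(obs_var))
    k = var / (var + obs_var)             # Kalman gain
    mu, var = mu + k * (z - mu), (1 - k) * var
    history.append(var)

print(f"posterior mean {mu:.3f}, posterior std {np.sqrt(var):.4f}")
```

The posterior variance shrinks monotonically with each integrated contact, which is the sense in which the estimate of the true hand pose becomes "optimal" over the exploration.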

Konnaris C, Thomik AAC, Aldo Faisal A, 2016, Sparse Eigenmotions derived from daily life kinematics implemented on a dextrous robotic hand, 6th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob), Publisher: IEEE, Pages: 1358-1363, ISSN: 2155-1774

Our hands are among the most complex actuated systems to control; thus, emulating the manipulative skills of real hands remains an open challenge even for anthropomorphic robotic hands. While the action of the 4 long fingers and simple grasp motions through opposable thumbs have been successfully implemented in robotic designs, complex in-hand manipulation of objects has been difficult to achieve. We take an approach grounded in data-driven extraction of control primitives from natural human behaviour to develop novel ways to understand the dexterity of hands. We collected hand kinematics datasets from natural, unconstrained daily-life behaviour of 8 healthy subjects in a studio-flat environment. We then applied our Sparse Motion Decomposition approach to extract spatio-temporally localised modes of hand motion that are both time-scale and amplitude-scale invariant. These Sparse EigenMotions (SEMs) [1] form a sparse symbolic code that encodes continuous hand motions. We mechanically implemented the common SEMs on our novel dexterous robotic hand [2] in open-loop control. We report that, without processing any feedback during grasp control, several of the SEMs resulted in stable grasps of different daily-life objects. The finding that SEMs extracted from daily life implement stable grasps in open-loop control of dexterous hands lends further support to our hypothesis that the brain controls the hand using sparse control strategies.

Conference paper

Marcos Tostado P, Abbott WW, Faisal AA, 2016, 3D gaze cursor: continuous calibration and end-point grasp control of robotic actuators, IEEE International Conference on Robotics and Automation (ICRA), Publisher: IEEE, Pages: 3295-3300

Eye movements are closely related to motor actions, and hence can be used to infer motor intentions. Additionally, eye movements are in some cases the only means of communication and interaction with the environment for paralysed and impaired patients with severe motor deficiencies. Despite this, eye-tracking technology still has very limited use as a human-robot control interface and its applicability is highly restricted to simple 2D tasks that operate on screen-based interfaces and do not suffice for natural physical interaction with the environment. We propose that decoding the gaze position in 3D space rather than in 2D results in a much richer "spatial cursor" signal that allows users to perform everyday tasks such as grasping and moving objects via gaze-based robotic teleoperation. Eye-tracking calibration in 3D is usually slow - we demonstrate here that by using a full 3D trajectory for system calibration generated by a robotic arm, rather than a simple grid of discrete points, gaze calibration in the 3 dimensions can be successfully achieved in a short time and with high accuracy. We perform the non-linear regression from eye image to 3D end point using Gaussian Process regressors, which allows us to handle uncertainty in end-point estimates gracefully. Our telerobotic system uses a multi-joint robot arm with a gripper and is integrated with our in-house "GT3D" binocular eye tracker. This prototype system has been evaluated and assessed in a test environment with 7 users, yielding gaze-estimation errors of less than 1 cm in the horizontal, vertical and depth dimensions, and less than 2 cm in overall 3D Euclidean space. Users reported intuitive, low-cognitive-load control of the system right from their first trial and were straightaway able to simply look at an object and command through a

Conference paper
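The calibration step described in the abstract above - Gaussian Process regression from eye features to a 3D gaze end point - can be sketched as follows. The "pupil features" and calibration trajectory here are entirely synthetic stand-ins (the real system regresses from binocular eye images), and a separate GP per output dimension is one simple way to produce 3D predictions with graceful uncertainty handling.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Synthetic stand-in for calibration data: 4-D "eye features" sampled along a
# smooth robot-arm-style trajectory, mapped to 3-D gaze end points.
t = np.linspace(0, 2 * np.pi, 200)
targets_3d = np.column_stack([np.cos(t), np.sin(t), 0.5 * t])       # 3-D end points
pupil_feat = np.column_stack([targets_3d, targets_3d[:, :1] ** 2])  # fake eye features
pupil_feat += rng.normal(scale=0.01, size=pupil_feat.shape)         # sensor noise

# One GP per output dimension; the WhiteKernel absorbs measurement noise so
# the posterior also yields a per-point uncertainty estimate.
kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-4)
gps = [GaussianProcessRegressor(kernel=kernel, normalize_y=True)
       .fit(pupil_feat, targets_3d[:, d]) for d in range(3)]

pred = np.column_stack([gp.predict(pupil_feat) for gp in gps])
rmse = np.sqrt(np.mean((pred - targets_3d) ** 2))
print(f"calibration RMSE: {rmse:.4f}")
```

Using a continuous trajectory rather than a discrete grid means every frame of the calibration run contributes a training pair, which is what makes the short calibration time plausible.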

Bergsma A, Lobo-Prat J, Vroom E, Furlong P, Herder JL, Corrigan M, de Groot I, Faisal A, Goemans N, Han J, Herder J, Iodice M, Kennedy A, Koopman B, Prat JL, Main M, Mathie B, Muntoni F, Castro MN, Paalman M, Porter J, Rahman T, Schneider J, Stienen A, Verstegen P, Walsh C et al., 2016, 1st Workshop on Upper-Extremity Assistive Technology for People with Duchenne: State of the art, emerging avenues, and challenges: April 27th 2015, London, United Kingdom, Neuromuscular Disorders, Vol: 26, Pages: 386-393, ISSN: 0960-8966

Journal article

Faisal AA, 2016, Action Grammars - Extraction, recognition and prediction of movement primitives in tool-making, 85th Annual Meeting of the American-Association-of-Physical-Anthropologists, Publisher: WILEY-BLACKWELL, Pages: 141-141, ISSN: 0002-9483

Conference paper

Lorenz R, Monti RP, Ribeiro Violante I, Anagnostopoulos C, Faisal AA, Montana G, Leech R et al., 2016, The Automatic Neuroscientist: A framework for optimizing experimental design with closed-loop real-time fMRI, Neuroimage, Vol: 129, Pages: 320-334, ISSN: 1095-9572

Functional neuroimaging typically explores how a particular task activates a set of brain regions. Importantly though, the same neural system can be activated by inherently different tasks. To date, there is no approach available that systematically explores whether and how distinct tasks probe the same neural system. Here, we propose and validate an alternative framework, the Automatic Neuroscientist, which turns the standard fMRI approach on its head. We use real-time fMRI in combination with modern machine-learning techniques to automatically design the optimal experiment to evoke a desired target brain state. In this work, we present two proof-of-principle studies involving perceptual stimuli. In both studies optimization algorithms of varying complexity were employed; the first involved a stochastic approximation method while the second incorporated a more sophisticated Bayesian optimization technique. In the first study, we achieved convergence for the hypothesized optimum in 11 out of 14 runs in less than 10 min. Results of the second study showed how our closed-loop framework accurately and with high efficiency estimated the underlying relationship between stimuli and neural responses for each subject in one to two runs: with each run lasting 6.3 min. Moreover, we demonstrate that using only the first run produced a reliable solution at a group-level. Supporting simulation analyses provided evidence on the robustness of the Bayesian optimization approach for scenarios with low contrast-to-noise ratio. This framework is generalizable to numerous applications, ranging from optimizing stimuli in neuroimaging pilot studies to tailoring clinical rehabilitation therapy to patients and can be used with multiple imaging modalities in humans and animals.

Journal article
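The closed-loop Bayesian optimization idea in the abstract above can be illustrated with a toy version: a Gaussian Process surrogate plus an upper-confidence-bound acquisition rule searching a 1-D stimulus parameter for the response peak. The response function, UCB acquisition and all parameters are invented stand-ins, not the paper's actual algorithm or data.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(1)

def neural_response(stim):
    """Hypothetical stand-in for a measured brain response to a stimulus
    parameter (e.g. visual contrast); the true optimum is at stim = 0.7."""
    return np.exp(-((stim - 0.7) ** 2) / 0.02) + rng.normal(scale=0.05)

grid = np.linspace(0, 1, 201).reshape(-1, 1)
X, y = [[0.1], [0.9]], [neural_response(0.1), neural_response(0.9)]  # seed runs

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), alpha=1e-3, normalize_y=True)
for _ in range(15):                       # each iteration = one short "run"
    gp.fit(np.array(X), np.array(y))
    mu, sd = gp.predict(grid, return_std=True)
    nxt = grid[np.argmax(mu + 2.0 * sd)]  # upper-confidence-bound acquisition
    X.append(list(nxt))
    y.append(neural_response(nxt[0]))

best = X[int(np.argmax(y))][0]
print(f"estimated optimal stimulus: {best:.2f}")
```

The surrogate trades off exploring uncertain stimulus values against exploiting the current estimate of the peak, which is why the loop converges in a handful of "runs" rather than requiring a dense sweep.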

Faisal A, Krebs HI, Pedotti A, 2016, Hands on neurotechnology, Pages: VII-VIII, ISBN: 9789897582042

Book chapter

Rodríguez M, Sylaidi A, Faisal AA, 2016, An fMRI-Compatible System for 3DOF Motion Tracking of Objects in Haptic Motor Control Studies, Advances in Neurotechnology, Electronics and Informatics, Editors: Londral, Encarnação, Publisher: Springer International Publishing, Pages: 115-123, ISBN: 978-3-319-26240-6

Book chapter

Lourenço PR, Abbott WW, Faisal AA, 2016, Supervised EEG ocular artefact correction through eye-tracking, Advances in Neurotechnology, Electronics and Informatics, Editors: Londral, Encarnação, Publisher: Springer International Publishing, Pages: 99-113, ISBN: 978-3-319-26240-6

Electroencephalography (EEG) is a widely used brain signal recording technique. The information conveyed in these recordings is a useful tool in the diagnosis of some diseases and disturbances, in basic science, as well as in the development of non-invasive Brain-Machine Interfaces (BMI). However, the electrical recording setup comes with two major downsides: (a) poor signal-to-noise ratio and (b) vulnerability to external and internal noise sources. One of the main sources of artefacts is eye movements, due to the electric dipole between the cornea and the retina. We have previously proposed that monitoring eye movements provides a complementary signal for BMIs. Here we propose a novel technique to remove eye-related artefacts from EEG recordings. We coupled eye tracking with EEG, allowing us to independently measure when ocular artefact events occur through the eye tracker and thus clean them up in a targeted "supervised" manner, instead of using a "blind" artefact correction technique. Three standard methods of artefact correction were applied in an event-driven, supervised manner - 1. Independent Components Analysis (ICA), 2. Wiener filtering and 3. wavelet decomposition - and compared to "blind" unsupervised ICA clean-up. These are standard artefact correction approaches implemented in many toolboxes and experimental EEG systems and could easily be applied by their users in an event-driven manner. Already the qualitative inspection of the cleaned traces shows that the simple, targeted, artefact-event-driven clean-up outperforms the traditional "blind" clean-up approaches. We conclude that this justifies the small extra effort of performing simultaneous eye tracking with any EEG recording to enable simple, but targeted, automatic artefact removal that preserves more of the original signal.

Book chapter
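The event-driven idea in the chapter above - use the eye tracker to flag when ocular artefacts occur, then correct only there - can be sketched with a deliberately simple regression clean-up (a stand-in, far simpler than the ICA/Wiener/wavelet methods actually compared). The signals, event windows and propagation coefficient are all synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
fs, n = 250, 2500                          # 10 s of synthetic single-channel EEG

neural = rng.normal(scale=1.0, size=n)     # stand-in for genuine EEG
eog = np.zeros(n)
events = [(500, 550), (1200, 1260), (2000, 2040)]   # eye-tracker-flagged blinks
for a, b in events:
    eog[a:b] = 40 * np.hanning(b - a)      # large ocular deflections

eeg = neural + 0.4 * eog                   # artefact propagates into the EEG

# Event-driven ("supervised") clean-up: estimate the propagation coefficient
# by least squares only within the flagged windows, then subtract.
idx = np.concatenate([np.arange(a, b) for a, b in events])
beta = np.dot(eog[idx], eeg[idx]) / np.dot(eog[idx], eog[idx])
cleaned = eeg - beta * eog

err_before = np.sqrt(np.mean((eeg - neural) ** 2))
err_after = np.sqrt(np.mean((cleaned - neural) ** 2))
print(f"RMSE vs. true EEG: {err_before:.2f} -> {err_after:.4f}")
```

Because the fit uses only artefact windows, the clean segments of the recording are left untouched - the sense in which targeted correction "preserves more of the original signal".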

Lorenz R, Monti RP, Hampshire A, Koush Y, Anagnostopoulos C, Faisal AA, Sharp D, Montana G, Leech R, Violante IR et al., 2016, Towards tailoring non-invasive brain stimulation using real-time fMRI and Bayesian optimization, 6th International Workshop on Pattern Recognition in Neuroimaging (PRNI), Publisher: IEEE, Pages: 49-52, ISSN: 2330-9989

Conference paper

Faisal AA, 2015, Mind meets machine, Science, Vol: 350, Pages: 746-746, ISSN: 0036-8075

Journal article

Lin C, Faisal AA, 2015, Robotic psychophysics rig to assess, diagnose and rehabilitate the neurological causes of falls in the elderly, IEEE Engineering in Medicine and Biology (EMBC)

Falls are the leading cause of unintentional injuries in the elderly and thus pose a major hazard to our ageing society. We present the FOHEPO (FOot HEight POsitioning) system to measure, diagnose and rehabilitate ageing-related and neurological causes of falls. We hypothesise that both perceptual and motor variability are likely to increase with age and may lead to imprecise movements causing trip-overs, the major triggers of falls. Here we propose a robotic system that automatically measures fall-related perceptual and motor variability in elderly subjects. Our FOHEPO platform enables us to measure and track different sources of noise in the nervous system: visual perception noise of obstacle height, proprioceptive noise in raising one's foot to a desired height, and noise in the visual feedback of the foot movements. The platform should eventually provide us the means to estimate fall probabilities using these measures. Crucially, we can use the same system in game-field settings to rehabilitate elderly users to move with larger safety factors so as to reduce their risk of trip-overs.

Conference paper

Gavriel C, Thomik A, Lourenco PR, Nageshwaran S, Athanasopoulos S, Sylaidi A, Festenstein R, Faisal A et al., 2015, Kinematic body sensor networks and behaviour metrics for objective efficacy measurements in neurodegenerative disease drug trials, IEEE Body Sensor Networks Conference 2015

In this study, we have deployed body sensor network (BSN) technology in clinical trials for monitoring and quantifying the behaviour of Friedreich's Ataxia (FRDA) patients on a longitudinal scale. Using our ETHO1 wireless BSN nodes, we captured motion time series from patients' sleep and extracted behavioural biomarkers that can objectively highlight the progression of the disease over time. The clinical scales currently used to capture the stage of the ataxic disease require patients to perform a series of lengthy tasks during which clinicians observe patients' performance and aggregate a score that represents the stage of the disease. Unfortunately, these scales have been shown to be inconsistent, mainly due to the underlying subjective measures; they are highly dependent on the assessor's experience and they also have low sensitivity, failing to capture the slow disease progression over short periods of time. This entails lengthy clinical trials for monitoring the effects of any drug on patients, which greatly increases the cost of medical healthcare. Using the data collected from our clinical trials, we extracted behavioural biomarkers based on the distribution of patients' movement and stillness durations in bed during sleep, as well as the intensity of their movements. Our biomarkers exhibit trends similar to patients' SARA scores, one of the standard clinical scales used for capturing disease progression in FRDA patients. This establishes a proof of concept that BSN technology can objectively capture patients' behaviour and can be used to perform rapid measurements of drug efficacy. Additionally, understanding the underlying effects of Friedreich's Ataxia on our motor control system can potentially enable detection of the disease at a very early stage.

Conference paper
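The movement/stillness-duration biomarkers described in the abstract above reduce to thresholding a motion signal and run-length encoding the result. A minimal sketch on synthetic overnight actigraphy (all signal shapes and thresholds are invented; movement bouts are modelled simply as sustained amplitude deflections):

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 1.0                                            # 1 Hz actigraphy samples

# Synthetic overnight accelerometer magnitude: quiet baseline with three
# sustained movement bouts. Thresholding plus run-length encoding recovers
# the movement/stillness duration distributions used as biomarkers.
signal = rng.normal(scale=0.01, size=3600)
for start in (300, 1200, 2500):
    signal[start:start + 60] += 0.5                 # 60 s movement bout

moving = np.abs(signal) > 0.1                       # well above sensor noise

def bout_durations(mask, value):
    """Lengths (in seconds) of consecutive runs where mask == value."""
    edges = np.flatnonzero(np.diff(np.r_[False, mask == value, False].astype(int)))
    return (edges[1::2] - edges[::2]) / fs

move_bouts = bout_durations(moving, True)
still_bouts = bout_durations(moving, False)
print(f"movement bouts: {len(move_bouts)}, median stillness: {np.median(still_bouts):.0f} s")
```

The distributions of `move_bouts` and `still_bouts` (rather than the raw trace) are the kind of summary statistic that can be tracked longitudinally against clinical scores.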

Gavriel C, Parker K, Faisal A, 2015, Smartphone as an ultra-low cost medical tricorder for real-time cardiological measurements via ballistocardiography, IEEE Body Sensor Network Conference 2015

In this preliminary study, we investigate the potential use of smartphones as portable heart-monitoring devices that can capture and analyse heart activity in real time. We have developed a smartphone application called "Medical Tricorder" that exploits the smartphone's inertial sensors: when placed on a subject's chest, it can efficiently capture the motion patterns caused by the mechanical activity of the heart. Using the measured ballistocardiograph (BCG) signal, the application can efficiently extract the heart rate in real time while matching the performance of clinical-grade electrocardiographs (ECG). Although the BCG signal can provide much richer information regarding the mechanical aspects of the human heart, we have developed a method of mapping the chest BCG signal into an ECG signal, which can be made directly available to clinicians for diagnostics. Comparing the estimated ECG signal to empirical data from cardiovascular diseases may allow detection of heart abnormalities at a very early stage without any involvement of medical staff. Our method opens up the potential of turning smartphones into portable healthcare systems which can provide patients and the general public easy access to continuous healthcare monitoring. Additionally, given that our solution is mainly software-based, it can be deployed on smartphones around the world with minimal costs.

Conference paper
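The core of real-time heart-rate extraction from a BCG trace is peak detection with a physiological refractory period. A minimal sketch on a synthetic chest-accelerometer signal (the waveform model, rates and thresholds are invented stand-ins, not the app's actual pipeline):

```python
import numpy as np
from scipy.signal import find_peaks

fs = 100                                   # Hz, typical smartphone IMU rate
t = np.arange(0, 30, 1 / fs)               # 30 s recording

# Synthetic stand-in for a chest-placed accelerometer trace: one sharp
# deflection per heartbeat (72 bpm) plus respiration and sensor noise.
hr_hz = 72 / 60
bcg = (np.exp(-((t % (1 / hr_hz)) * 40) ** 2)        # heartbeat impulses
       + 0.2 * np.sin(2 * np.pi * 0.25 * t)          # breathing
       + 0.02 * np.random.default_rng(3).normal(size=t.size))

# A refractory distance of 0.4 s (150 bpm ceiling) rejects double-detections.
peaks, _ = find_peaks(bcg - np.mean(bcg), height=0.3, distance=int(0.4 * fs))
bpm = 60 * np.median(1 / np.diff(t[peaks]))
print(f"estimated heart rate: {bpm:.0f} bpm")
```

Taking the median of the instantaneous rates makes the estimate robust to an occasional missed or spurious beat, which matters on a noisy consumer IMU.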

Dickens L, Caldas B, Schoenhense B, Stan G-B, Faisal AA et al., 2015, The Moveable Feast of Predictive Reward Discounting in Humans, 2nd Multi-disciplinary Conference on Reinforcement Learning and Decision Making (RLDM)

Conference paper

Gavriel C, Thomik A, Lourenco P, Nageshwaran S, Athanasopoulos S, Sylaidi A, Festenstein R, Faisal AA et al., 2015, Towards neurobehavioral biomarkers for longitudinal monitoring of neurodegeneration with wearable body sensor networks, IEEE Neural Engineering (NER), Pages: 348-351

This study focuses on the objective quantification of disease progression in patients with Friedreich's Ataxia (FRDA) through the use of kinematic body sensor network technology. Currently, this quantification is performed through a series of task-oriented, score-based metrics which, although they provide an efficient way of quantifying the ataxic disease, are dependent on the assessor's experience and also present high levels of variability. We used our ETHO1 inertial motion-capture sensors for longitudinal monitoring of FRDA patients during sleep and collected behavioural time series from which we extracted biomarkers that can objectively highlight the subtle changes in patients' motor control system. These biomarkers exhibit trends consistent with the clinical assessments of the disease.

Conference paper

Xiloyannis M, Gavriel C, Thomik A, Faisal AA et al., 2015, Dynamic forward prediction for prosthetic hand control by integration of EMG, MMG and kinematic signals, IEEE Neural Engineering (NER), Pages: 611-614

We propose a new framework for extracting information from extrinsic muscles in the forearm that allows continuous, natural and intuitive control of neuroprosthetic devices and robotic hands. This is achieved through a continuous mapping between muscle activity and joint angles rather than prior discretisation of hand gestures. We instructed 6 able-bodied subjects to perform everyday object manipulation tasks. We recorded the Electromyographic (EMG) and Mechanomyographic (MMG) activities of 5 extrinsic muscles of the hand in their forearm, while simultaneously monitoring 11 joints of the hand and fingers using a sensorised glove. We used these signals to train a Gaussian Process (GP) model and a Vector AutoRegressive Moving Average model with Exogenous inputs (VARMAX) to learn the mapping from current muscle activity and current joint state to future hand configurations. We investigated the performance of both models across tasks, subjects and different joints for varying time lags, finding that both models have good generalisation properties and high correlation even for time lags of the order of hundreds of milliseconds. Our results suggest that regression is a very appealing tool for natural, intuitive and continuous control of robotic devices, with particular focus on prosthetic replacements where high dexterity is required for complex movements.

Conference paper
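The forward-prediction setup in the abstract above - regress the joint state at time t + lag on muscle activity and joint state at time t - can be sketched with ordinary least squares standing in for the GP and VARMAX models. The "EMG" and "joint" signals and the lag are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic stand-in data: 2 "muscle" channels driving 3 "joint" angles with
# a 100 ms lag (linear least squares here as a simple stand-in for the GP /
# VARMAX regressors used in the paper).
fs, lag, n = 100, 10, 2000                         # 100 Hz, 10-sample lag
emg = rng.normal(size=(n, 2))
W_true = rng.normal(size=(2, 3))
joints = np.vstack([np.zeros((lag, 3)), emg[:-lag] @ W_true]) \
         + 0.05 * rng.normal(size=(n, 3))

# Regress future joint state on current muscle activity + current joint state.
X = np.hstack([emg[:-lag], joints[:-lag]])         # features at time t
Y = joints[lag:]                                   # targets at time t + lag
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

pred = X @ W
r2 = 1 - np.sum((Y - pred) ** 2) / np.sum((Y - Y.mean(0)) ** 2)
print(f"forward-prediction R^2 at {1000 * lag / fs:.0f} ms lag: {r2:.2f}")
```

In the continuous-mapping framing, this regression replaces gesture classification entirely: the controller outputs a joint configuration every sample rather than selecting among discrete grasps.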

Xiloyannis M, Gavriel C, Thomik A, Faisal AA et al., 2015, Gaussian Process Regression for accurate prediction of prosthetic limb movements from the natural kinematics of intact limbs, IEEE Neural Engineering (NER), Pages: 659-662

We propose a Gaussian Process-based regression framework for continuous prediction of the state of missing limbs by exclusively decoding missing-limb movements from intact limbs - we achieve this as we have measured the correlation structure and synergies of natural limb kinematics in daily life. Using the example of a hand neuroprosthetic, we demonstrate how our model can use non-linear regression to infer the velocity of the flexion/extension joints of missing fingers by observing the intact joints using a data glove. We based our framework on hand joint velocity data that we recorded with a sensorised glove from 7 able-bodied subjects performing everyday hand movements. We then simulate missing fingers by making our regressors predict the motion that a neuroprosthetic finger should execute based on the previously observed movements of intact fingers. Perhaps surprisingly, we achieve an R^2 = 0.89 and an RMSE = 0.20 deg/s across all missing joints. Moreover, by performing one-subject-out cross-validation, we can show that prediction accuracy and precision suffer negligible loss of performance when tested on new subjects. This suggests that kinematic correlations in daily life can provide a powerful channel refining, if not driving, multi-source

Conference paper

Ferrante A, Gavriel C, Faisal AA, 2015, Towards a brain-derived neurofeedback framework for unsupervised personalisation of Brain-Computer Interfaces, IEEE Neural Engineering (NER), Pages: 162-165

Modern Brain-Computer Interfaces (BCIs) use EEG signals recorded from the scalp to transduce a user's intent into action. However, achieving optimal control requires a physically and mentally demanding series of long-lasting training sessions based on the use of common neurofeedback. In this study we propose a framework that bypasses the training phase (unsupervised personalisation), where the BCI automatically detects whether it is acting according to the user's intention or not. We used mismatch negativity (MMN), a brain response elicited every time someone is exposed to an unexpected event. However, rather than the classical auditory mismatch negativity, we found another signature of the brain, which is elicited when the brain is subjected to an action that breaks the regularity (and so an expectation) of another action previously happening - an action mismatch. We investigated the presence of this Action Mismatch Signature (AMS) in an oddball paradigm where, instead of a sound, we used video sequences of a hand catching or missing a ball. Performing this experiment on 8 people, our classifier achieves 67% average detection accuracy both across and within subjects. Our AMS signature may provide a powerful tool to automatically monitor and adapt Brain-Robot Interface and neuroprosthetic performance.

Conference paper

Ferrante A, Gavriel C, Faisal AA, 2015, Data-efficient hand motor imagery decoding in EEG-BCI by using Morlet Wavelets & Common Spatial Pattern Algorithms, IEEE Neural Engineering (NER), Pages: 948-951

EEG-based Brain-Computer Interfaces (BCIs) use quite noisy brain signals recorded from the scalp (electroencephalography, EEG) to translate the user's intent into action. This is usually achieved by looking at the pattern of brain activity across many trials while the subject is imagining the performance of an instructed action - the process known as motor imagery. Nevertheless, existing motor imagery classification algorithms do not always achieve good performance because of the noisy and non-stationary nature of the EEG signal and inter-subject variability. Thus, current EEG BCIs take a considerable upfront toll on patients, who have to submit to lengthy training sessions before even being able to use the BCI. In this study, we developed a data-efficient classifier for left/right hand motor imagery by combining in our pattern recognition both the oscillation frequency range and the scalp location. We achieve this by using a combination of Morlet wavelets and Common Spatial Pattern theory to deal with non-stationarity and noise. The system achieves an average accuracy of 88% across subjects and was trained with about a dozen (10-15) training examples per class, reducing the size of the training pool by up to 100-fold and making it a very data-efficient approach for EEG BCI.

Conference paper
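The Morlet-plus-CSP pipeline named in the abstract above can be sketched end to end on synthetic trials: band-limit each channel with a Morlet wavelet, compute per-class covariances, and take the generalized eigendecomposition as the CSP spatial filters. The data generator, channel counts and the nearest-class-mean classifier are invented stand-ins for the paper's real EEG and classifier.

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(5)
fs = 250

def make_trials(n_trials, active_ch):
    """Synthetic 4-channel 'EEG': a 10 Hz rhythm on one channel per class,
    standing in for left/right motor-imagery trials (~a dozen per class)."""
    trials = rng.normal(scale=0.5, size=(n_trials, 4, fs))
    trials[:, active_ch] += np.sin(2 * np.pi * 10 * np.arange(fs) / fs)
    return trials

def morlet_filter(trials, f=10, w=5):
    """Band-limit each channel by convolving with a complex Morlet wavelet."""
    s = w * fs / (2 * np.pi * f)
    t = np.arange(-3 * s, 3 * s + 1)
    wav = np.exp(1j * w * t / s) * np.exp(-t ** 2 / (2 * s ** 2))
    wav /= np.sqrt(np.sum(np.abs(wav) ** 2))
    return np.array([[np.convolve(ch, wav, mode="same").real for ch in x] for x in trials])

def mean_cov(trials):
    return np.mean([x @ x.T / np.trace(x @ x.T) for x in trials], axis=0)

left = morlet_filter(make_trials(15, 0))
right = morlet_filter(make_trials(15, 3))

# CSP: generalized eigendecomposition of the class covariances yields spatial
# filters that maximally discriminate left vs right band-power.
_, evecs = eigh(mean_cov(left), mean_cov(left) + mean_cov(right))
W = evecs[:, [0, -1]].T                 # two most discriminative filters

logvar = lambda trials: np.array([np.log(np.var(W @ x, axis=1)) for x in trials])
fl, fr = logvar(left), logvar(right)

# Nearest-class-mean classifier on the 2-D log-variance features.
ml, mr = fl.mean(0), fr.mean(0)
acc = np.mean([np.linalg.norm(f - ml) < np.linalg.norm(f - mr) for f in fl] +
              [np.linalg.norm(f - mr) < np.linalg.norm(f - ml) for f in fr])
print(f"training accuracy: {acc:.2f}")
```

Restricting covariance estimation to the wavelet-filtered band is what lets CSP remain stable with only 10-15 trials per class, since broadband noise no longer dominates the covariances.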

Sylaidi A, Rente Lourenço P, Nageshwaran S, Lin C-H, Rodriguez M, Festenstein R, Faisal AA et al., 2015, F2move: fMRI-compatible haptic object manipulation system for closed-loop motor control studies, 7th Annual International IEEE EMBS Conference on Neural Engineering, Publisher: IEEE, Pages: 1104-1107

Functional neuroimaging plays a key role in addressing open questions in systems and motor neuroscience directly applicable to brain machine interfaces. Building on our low-cost motion capture technology (fMOVE), we developed f2MOVE, an fMRI-compatible system for 6DOF goal-directed hand and wrist movements of human subjects enabling closed-loop sensorimotor haptic experiments with simultaneous neuroimaging. f2MOVE uses a high-zoom-lens, high-frame-rate camera and a motion tracking algorithm that tracks in real time the position of special markers attached to a hand-held object in a novel customised haptic interface. The system operates with a high update rate (120 Hz) and sufficiently low time delays (<20 ms) to enable visual feedback while complex, goal-oriented movements are recorded. We present here both the accuracy of our motion tracking against a reference signal and the efficacy of the system to evoke motor-control-specific brain activations in healthy subjects. Our technology and approach thus support the real-time, closed-loop study of the neural foundations of complex haptic motor tasks using neuroimaging.

Conference paper

Wu Y, Faisal AA, 2015, Towards an Integrative Spiking Neuron Model of Motor Control – From Cortex and Basal Ganglia to Muscles and Sensory Feedback, IEEE EMBS Neural Engineering (NER), Pages: 378-381

We developed an integrative spiking neuron framework to study motor learning and control across multiple levels of biological organisation, from synaptic learning rules via neural populations and muscles to an arm's movements. Our framework is designed to simulate reward-based motor learning processes by using identified cellular learning mechanisms (neuromodulation) and enables linking these to findings in human and primate motor learning experiments involving reaching movements. The key learning mechanisms are Actor/Critic reward-based learning and STDP synaptic plasticity rules. We simulate and study learning of planar reaching movements, where motor neuron activities drive Hill-type muscle models which mechanically translate forces into movements via a physics simulator. Our simulated brain is trained and tested in a reaching task with unknown dynamics following a psychophysics protocol. The framework is capable of learning the task and we can directly access the output of neuronal populations (e.g. M1, S1, VTA) as well as EMG-equivalent muscle activations, arm reaching trajectories and sensory feedback before, during and after motor learning. Our ability to simulate and explain motor learning across levels of neural activity as well as psychophysics experiments will be useful in linking human motor learning experiments to their neuronal correlates. This system can thus provide incisive in silico proof-of-principle tests for neural engineering.

Conference paper

Thomik AAC, Fenske S, Faisal AA, 2015, Towards sparse coding of natural movements for neuroprosthetics and brain-machine interfaces, IEEE/EMBS Neural Engineering (NER), Publisher: IEEE Engineering in Medicine & Biology Society, Pages: 938-941

Conference paper

Thomik AAC, Faisal AA, 2015, Sparse Encoding of Complex Action Sequences, Cosyne

A fundamental problem in neuroscience is to understand how the brain translates a symbolic sequence of action descriptors, or high-level motor intention, into the appropriate muscle commands. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of the spatial and temporal structure of behaviour observed in daily life. We take the view that the brain achieves this feat by mapping the necessary computation onto a finite and low-dimensional subset of control building blocks of movement, characterised by high correlation between a subset of the joints involved - kinematic primitives. These would be combined as required to achieve a given task. We investigate this possibility by collecting a large data set of natural behaviour capturing 90% of the perception-action loop, using lightweight, portable and unobtrusive motion capture systems over a prolonged period of time. From this data we learn, in an unsupervised fashion, a dictionary of kinematic primitives (which we term eigenmotions) by analysing the local temporal correlation structure of the data. We show that the dictionaries learnt are broadly consistent across subjects, with minor variations accounting for individuality of the subject and variations in the tasks executed. Using this dictionary we can compute a sparse representation of the data which is characterised by a very low-dimensional latent structure. Using this latent representation we can translate the time series of joint movements into a symbolic sequence ("behavioural barcode"), which captures both the spatial and temporal structure of the behaviour. Sequences of different eigenmotions thus represent a "language of movement" which we can analyse to find its grammatical structure, yielding an insight into how the brain may generate natural behaviour by temporally sparse activation of "eigenmotion neurons", similar to grasp-type-specific neurons found in the monkey premotor cortex.

Conference paper
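The unsupervised dictionary-plus-sparse-code step in the abstract above can be illustrated with off-the-shelf dictionary learning on synthetic movement windows. The "joint-angle" data, window length and atom count are invented stand-ins, and scikit-learn's `DictionaryLearning` stands in for the authors' own method.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(6)

# Synthetic stand-in for windowed joint-angle trajectories: each 20-sample
# window is a sparse mix of a few smooth "eigenmotion-like" templates.
t = np.linspace(0, 1, 20)
templates = np.array([np.sin(2 * np.pi * k * t) for k in (1, 2, 3)])
codes = rng.normal(size=(300, 3)) * (rng.random((300, 3)) < 0.3)  # sparse activations
X = codes @ templates + 0.01 * rng.normal(size=(300, 20))

# Learn an overcomplete dictionary with a sparsity penalty; each row of the
# code matrix activates only a few atoms - the analogue of "eigenmotions".
dl = DictionaryLearning(n_components=6, alpha=0.1, random_state=0, max_iter=200)
sparse_codes = dl.fit_transform(X)
sparsity = np.mean(np.isclose(sparse_codes, 0))
print(f"fraction of zero coefficients: {sparsity:.2f}")
```

Thresholding which atoms are active in each window is what turns the continuous time series into the symbolic "behavioural barcode" the abstract describes.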

Whitby, Faisal AA, Parrinello S, 2015, Entropy measures of collective cell migration, Bulletin of the American Physical Society 60

Collective cell migration is a critical process during tissue formation and repair. To this end, there is a need to develop tools to quantitatively measure the dynamics of collective cell migration from microscopy data. Drawing on statistical physics, we use the entropy of velocity fields derived from dense optic flow to quantitatively measure collective migration. Using peripheral nerve repair after injury as an experimental system, we study how Schwann cells, guided by fibroblasts, migrate in cord-like structures across the cut, paving a highway for neurons. This process of emergence of organised behaviour is key for successful repair, yet the emergence of leader cells and the transition from a random to an ordered state is not understood. We find fibroblasts induce correlated directionality in migrating Schwann cells, as measured by a decrease in the entropy of the motion vectors. We show our method is robust with respect to image resolution in time and space, giving a principled assessment of how various molecular mechanisms affect macroscopic features of collective cell migration. Finally, the generality of our method allows us to process both simulated cell movement and microscopy data, enabling principled fitting and comparison of in silico to in vitro.

Conference paper
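The entropy measure described in the abstract above - Shannon entropy of the direction histogram of a velocity field - is straightforward to sketch. The two velocity fields below are synthetic stand-ins (random vs. drifting), not optic flow from real microscopy data.

```python
import numpy as np

rng = np.random.default_rng(7)

def direction_entropy(vx, vy, n_bins=16):
    """Shannon entropy (bits) of the motion-direction histogram of a velocity
    field, e.g. dense optic flow; low entropy = coordinated migration."""
    angles = np.arctan2(vy, vx)
    hist, _ = np.histogram(angles, bins=n_bins, range=(-np.pi, np.pi))
    p = hist / hist.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Disordered field: random directions; ordered field: mostly rightward drift.
random_vx, random_vy = rng.normal(size=(2, 64, 64))
ordered_vx = np.ones((64, 64)) + 0.1 * rng.normal(size=(64, 64))
ordered_vy = 0.1 * rng.normal(size=(64, 64))

h_rand = direction_entropy(random_vx, random_vy)
h_ord = direction_entropy(ordered_vx, ordered_vy)
print(f"entropy random: {h_rand:.2f} bits, ordered: {h_ord:.2f} bits")
```

A drop in this entropy over time is the signature of the random-to-ordered transition the abstract uses to quantify fibroblast-induced directionality.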

Thomik AAC, Faisal AA, 2015, Sparse Coding of Natural Human Motion Yields Eigenmotions Consistent Across People, APS March Meeting 2015

Providing a precise mathematical description of the structure of natural human movement is a challenging problem. We use a data-driven approach to seek a generative model of movement capturing the underlying simplicity of the spatial and temporal structure of behaviour observed in daily life. In perception, the analysis of natural scenes has shown that sparse codes of such scenes are information-theoretically efficient descriptors with direct neuronal correlates. Translating from perception to action, we identify a generative model of movement generation by the human motor system. Using wearable full-hand motion capture, we measure the digit movement of the human hand in daily life. We learn a dictionary of "eigenmotions" which we use for sparse encoding of the movement data. We show that the dictionaries are generally well preserved across subjects, with small deviations accounting for individuality of the person and variability in tasks. Further, the dictionary elements represent motions which can naturally describe hand movements. Our findings suggest the motor system can compose complex movement behaviours out of the spatially and temporally sparse activation of "eigenmotion" neurons, and are consistent with data on grasp-type specificity of specialised neurons in the premotor cortex.

Conference paper

Belic JJ, Faisal AA, 2015, Decoding of human hand actions to handle missing limbs in neuroprosthetics, Frontiers in Computational Neuroscience, Vol: 9, ISSN: 1662-5188

Journal article

Guo Y, Friston K, Faisal A, Hill S, Peng H et al., 2015, Preface, Pages: V-VI, ISBN: 9783319233437

Book chapter

Ticchi A, Faisal AA, 2015, Non-linear dynamics in recurrently connected neural circuits implement Bayesian inference by sampling, American Physical Society

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
