Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373, a.faisal

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



 

Publications


233 results found

Sengupta B, Faisal AA, Laughlin SB, Niven JE et al., 2013, The effect of cell size and channel density on neuronal information encoding and energy efficiency, Journal of Cerebral Blood Flow & Metabolism

Journal article

Tavares G, Faisal A, 2013, Scaling-laws of human broadcast communication enable distinction between human, corporate and robot Twitter users, PLoS ONE, Vol: 8, ISSN: 1932-6203

Journal article

Sim N, Gavriel C, Abbott WW, Faisal AA et al., 2013, The head mouse — Head gaze estimation "In-the-Wild" with low-cost inertial sensors for BMI use, Pages: 735-738, ISSN: 1948-3546

We present a wearable head-tracking device using inexpensive inertial sensors as an alternative head movement tracking system. This can be used as an indicator of human movement intentions for Brain-Machine Interface (BMI) applications. Our system is capable of tracking head movements at high rates (100 Hz) and achieves R2 = 0.99 with a 2.5° RMSE against a ground-truth motion tracking system. The system tracks head movements over periods in the order of tens of minutes with little drift. The accuracy and precision of our system, together with its low response latency of ~20 ms, make it an unconventional but effective system for human-computer interfacing: the "head mouse" controls the mouse cursor on a display based on head orientation alone, so that it matches the centre of view of a straight-ahead-looking user. Our head mouse is suitable for amputees and spinal cord injury patients who have lost control of their upper extremities. We show that naive test subjects are capable of writing text using our system and an on-screen keyboard at a rate of 4.65 words/minute, compared to 7.85 words/minute for able-bodied users with a physical computer mouse. Crucially, we measure the natural head movements of able-bodied computer users, and show that our approach falls within the range of natural head movement parameters.

Conference paper

Gavriel C, Faisal AA, 2013, Wireless kinematic body sensor network for low-cost neurotechnology applications "in-the-wild", Pages: 1279-1282, ISSN: 1948-3546

We present an ultra-portable and low-cost body sensor network (BSN), which enables wireless recording of human motor movement kinematics and neurological signals in unconstrained, daily-life environments. This is crucial, as activities of daily living (ADL), and thus metrics of everyday movement, enable us to diagnose motor and neurological disorders in the patient's own context, not in artificial laboratory settings. Moreover, ADL kinematics inform us how to control neuroprosthetics and brain-machine interfaces in a natural manner. Our system uses a network of battery-powered embedded micro-controllers to capture data from motion sensors placed all over the human body, and wireless connectivity to stream and process data in real time at 100 Hz. Our prototype compares well against two gold-standard measures, a ground-truth motion tracking system and a high-end motion capture suit as reference. At 2.5% of the cost, its performance in capturing natural joint kinematics is accurate (R2 = 0.89) and precise (RMSE = 1.19°). The system's low cost (approximately $100 per unit), wireless capability, low weight and millimetre-scale size allow subjects to be unconstrained in their actions while having the sensors attached to everyday clothing. These features establish our system's usefulness in clinical studies, risk-group monitoring, neuroscience and neuroprosthetics.

Conference paper

Fara S, Vikram CS, Gavriel C, Faisal AA et al., 2013, Robust, ultra low-cost MMG system with brain-machine-interface applications, Pages: 723-726, ISSN: 1948-3546

Muscle activity is the basis of many brain-machine interface (BMI) applications, but the mainstream EMG-based technology to decode muscle activity has significant constraints when long-term BMI usage "in-the-wild" is required (e.g. controlling neuroprosthetics throughout the day). We use the surface mechanomyogram (MMG), the mechano-acoustic signal generated by lateral oscillations of the muscle fibres during muscle contraction, as a source of reliable and robust information on muscle activity. We present our novel MMG sensor and instrumentation, which is designed to match the acoustical properties of muscle signals while costing, at ~10 USD per channel, a fraction of the price of current commercial systems. We are able to derive an "MMG Score" from our sensor-specific signal, which correlates linearly with isometric contraction forces. We test the effectiveness of our MMG system vs EMG using a simple BMI task, where subjects have to interactively control three distinct force states with their muscle activity. Crucially, our MMG Score is robust across subjects; thus, calibration on one set of subjects allows us to predict muscle force production from MMG on other subjects. This limits the need for re-calibration when (re)applying our MMG system to patients on a daily basis, which is important to minimise carer-dependence and maximise ease of use for BMI users.

Conference paper

Lowery C, Faisal AA, 2013, Towards efficient, personalized anesthesia using continuous reinforcement learning for propofol infusion control, 6th International IEEE/EMBS Conference on Neural Engineering (NER), Pages: 1414-1417, ISSN: 1948-3546

We demonstrate the use of reinforcement learning algorithms for efficient and personalized control of patients' depth of general anesthesia during surgical procedures - an important aspect of Neurotechnology. We used the continuous actor-critic learning automaton technique, which was trained and tested in silico using published patient data, physiological simulation and the bispectral index (BIS) of patient EEG. Our two-stage technique first learns a generic effective control strategy based on average patient data (factory stage) and can then fine-tune itself to individual patients (personalization stage). The results showed that the reinforcement learner, compared to a bang-bang controller, reduced the dose of the anesthetic agent administered by 9.4% and kept the patient closer to the target state, as measured by RMSE (4.90 compared to 8.47). It also kept the BIS error within a narrow, clinically acceptable range 93.9% of the time. Moreover, the policy was trained using only 50 simulated operations. Being able to learn a control strategy this quickly indicates that the reinforcement learner could also adapt regularly to a patient's changing responses throughout a live operation and facilitate the task of anesthesiologists by prompting them with recommended actions.

Conference paper

Abbott WW, Zucconi A, Faisal AA, 2013, Large-field study of ultra low-cost, non-invasive task level BMI, Pages: 97-100, ISSN: 1948-3546

Current BMI technology requires significant development to enable patients with severe motor disabilities to obtain vital degrees of freedom in everyday life. State-of-the-art systems are expensive, require long training times and suffer from low patient uptake. We propose a non-invasive and ultra-low cost alternative: action intention decoding from 3D gaze signals. Building on our previous work, we present here a large field study (N=176 subjects) to understand how efficient our approach is at allowing subjects, from first use, to operate our BMI on the Pong BMI benchmark task. Within the first 30 seconds of first-time use, the majority of subjects were able to play the arcade game Pong against a computer. Subjects made on average 8.5±7.2 ball returns compared to the chance level of 2.6±2.5 obtained without input (mean±SD), and almost 5% even managed to beat the computer, despite having never used their eye movements as a control input. This performance was achieved with members of the public at a scientific engagement event, not in stringent lab conditions, and with minimal system calibration (30 s) and negligible user control learning (5 s countdown before ball released). This demonstrates the intuitive nature of gaze control and thus the clinical applicability of our approach.

Conference paper

Vicente AP, Faisal AA, 2013, Calibration of kinematic body sensor networks: Kinect-based gauging of data gloves "in the wild", Pages: 1-6, ISSN: 2325-1425

The precision and agility of our hands is as yet unmatched by technology; hence the quantitative study of their daily-life kinematics is fundamental to neurology/prosthetics, robotics and the creative industries. State-of-the-art solutions for capturing hand movements "in the wild" require wearable body sensor networks: data gloves. Yet fast, accurate calibration is challenging due to variability in hand anatomy and the complexity of finger joints. We present here novel methods for calibration using streaming information from depth cameras (Microsoft Kinect). Our low-cost system calibrates the data glove by observing a user wiggling their hands while wearing data gloves. Using inverse kinematics we reconstruct the hand configuration in real time, enabling augmented reality by superimposing the virtual and real hand veridically. We achieve accuracies of ~5 degrees RMSE over all 21 joints, almost 20% more accurate than standard calibration methods, and accurately capture the touching of fingertips and thumb - our benchmark test, unmatched by other calibration methods.

Conference paper

Abbott WW, Faisal AA, 2013, Large-field study of gaze based ultra low-cost, non-invasive task level BMI, First International Workshop on Solutions for Automatic Gaze Data Analysis 2013 (SAGA 2013)

Conference paper

Mehraban Pour Behbahani F, Faisal AA, 2013, Visual categorisation is Bayesian and not Discriminative in humans, Society for Neuroscience (SfN)

Classification of objects is an innate and fundamental ability of our brain. The neuronal computations underlying human classification are, however, not well understood. From a computational perspective, the task of classification is the same for humans as for machines. In machine learning, classification algorithms fall broadly into 1. discriminative algorithms, which learn a direct mapping between input and label, and 2. generative algorithms, which first build a model of how each category was formed and then infer the category labels; these are typically Bayesian in nature. In neuroscience, the framework of Bayesian Decision Theory has emerged as a principled way to explain how the brain has to act in the face of uncertainty, and has been very successful in explaining behaviour in perceptual, motor and cognitive tasks (Ernst & Banks, 2002; Kording & Wolpert, 2002; Faisal et al., 2008). However, previous work on human categorisation shows data that is consistent with both discriminative and generative classification (e.g. Hsu and Griffiths, 2010), as the experimental design used could not capture the test proposed in our work. Here, we present a novel experiment that was specifically designed to unambiguously accept one method while simultaneously rejecting the other. We trained N=20 subjects to distinguish two classes, A and B, of visual objects in two different tasks (two Persian characters and armadillo-horse stick-drawings). The classes in each task were parameterised by two scalars; objects for each class are drawn from Gaussian parameter distributions with equal variance and different means. During the experiment, we track the shift in the discrimination boundary as a result of exposing the subjects to outliers for one category (i.e. increasing the variance of category A) after they were trained on categories of similar variance. Generative classifiers are by necessity sensitive to novel information becoming available during training, which updates beliefs

Conference paper

Abbott WW, Faisal AA, 2013, Large field study of ultra-low cost BMI using intention decoding from eye movements for closed loop control., Society for Neuroscience (SfN)

Developments in the field of brain machine interfaces (BMI) hold the hope of restoring independence to patients with severe motor disabilities via neuroprosthetics. However, current technology is either highly invasive or suffers from long training times, low information transfer rates, high latencies and high clinical costs (Tonet et al., 2008, J. Neurosci. Methods). We propose a non-invasive and ultra-low cost alternative - intention decoding from 3D gaze signals (Abbott & Faisal, 2012, J. Neural Eng.). Eye movements provide a high frequency signal directly relevant for neuroprosthetic control and are retained by patients with serious motor deficiencies, paralysis and limb amputation (Kaminski et al., 2002, Ann. N. Y. Acad. Sci.; Kaminski et al., 1992, Ann. Neurol.). Our system allows read-out bit rates of 43 bits/s, well beyond conventional BMIs (EEG 1.63 bits/s, MEA 3.3 bits/s, EMG 2.66 bits/s), making our task-level BMI suitable for closed loop control of neuroprosthetics (Tonet et al., 2008, J. Neurosci. Methods; Abbott & Faisal, 2012, J. Neural Eng.). In the present study we performed a large-scale field study (n=867) to determine if naïve subjects could use our BMI to compete in an arcade video game (Pong - computer tennis). Two arcade cabinets with our embedded BMI system were built that allowed members of the public to briefly calibrate and then play a game of Pong (first to 5 points), controlling the paddle using just their eyes. Their opponent was either an AI computer player or another human player using a conventional control pad (up-down) input. During game play, the eye position, gaze estimation, control position and game state were recorded. Following a 30 second calibration, games lasted 76±34 seconds and subjects successfully returned 6.5±6.2 shots against their opponent (mean ± standard deviation). The average score when the subjects lost was 0.7±1.1 compared to the opponent average losing score of 2.1

Conference paper

Thomik AAC, Haber D, Faisal AA, 2013, Real-time movement prediction for improved control of neuroprosthetic devices, Pages: 625-628, ISSN: 1948-3546

Replacing lost hands with prosthetic devices that offer the same functionality as natural limbs is an open challenge, as current technology is often limited to basic grasps by the low information readout. In this work, we develop a probabilistic inference-based method that allows for improved control of neuroprosthetic devices. We observe the behaviour of the undamaged limb to predict the most likely actions of lost limbs. Offline, our algorithm learns movement primitives (e.g. various types of grasps) from a database of recordings from healthy subjects performing everyday activities. Online, it performs Bayesian inference to determine the currently active movement primitive from the observed limbs and estimates the most likely movement of the missing limbs from the training data. We can demonstrate on test data that this two-stage approach yields statistically significantly higher prediction accuracy than linear regression approaches that reconstruct limb movements from their overall correlation structure.

Conference paper

Haber D, Faisal AA, 2013, Large-Scale Extraction, Recognition and Prediction of Movement Primitives

Journal article

Abbott WW, Faisal AA, 2013, Ultra low-cost 3D gaze estimation, Publisher: Pfeiffer and Essig

Conference paper

Mehraban Pour Behbahani F, Faisal AA, 2012, Human category learning is consistent with Bayesian generative but not discriminative classification strategies, Bernstein Conference

From an early age, humans have the ability to group visual entities into categories. The mechanisms corresponding to these categorization processes are typically investigated in both neuronal (Freedman, 2011) and behavioural contexts. Here we step back and design tests aimed at identifying which computational process is used by our brain for forming categories (i.e. classification). In machine learning and pattern recognition two types of classification algorithms are known: 1. generative and 2. discriminative approaches. Generative approaches solve the categorization problem by building a probabilistic model of how each category was formed and inferring category labels. In contrast, the discriminative approach learns a direct mapping between input and category labels. Recent work (Hsu and Griffiths, 2010) shows that human classification is consistent with discriminative or generative classification depending on task conditions. We hypothesize that humans employ generative mechanisms for classification when not encouraged otherwise. To test this we exploit a counterintuitive prediction for generative classification, namely how the discrimination boundary between two classes shifts if one category's distribution is revealed to be broader during learning. We trained N=17 subjects to distinguish two classes, A and B, of visual objects in two different tasks (two Persian characters and armadillo-horse stick-drawings). The classes in each task were parameterized by two scalars; objects for each class are drawn from Gaussian parameter distributions, with equal variance and different means (class "prototypes"). Next, subjects classify unlabelled examples drawn between the two classes, so we can infer their discrimination boundary. This process is then repeated but includes training data for class A which lie far away from B. Counter-intuitively, generative classification predicts a shift of the discrimination boundary closer to B. Conversely, discriminative classifiers will show either no shift of the boundary or a shift of the boundary away from class B.

Conference paper

Abbott WW, Faisal AA, 2012, Ultra-low-cost 3D gaze estimation: an intuitive high information throughput compliment to direct brain-machine interfaces, Journal of Neural Engineering, Vol: 9, ISSN: 1741-2560

Journal article

Thomik A, Faisal AA, 2012, Deriving motion primitives from naturalistic hand movements for neuroprosthetic control, Frontiers in Computational Neuroscience, Vol: 256

Journal article

Mehraban Pour Behbahani F, Faisal AA, 2012, Visual Object Classification is Consistent with Bayesian Generative Representations, Cosyne

Conference paper

Faisal AA, 2012, Noise in neurons and other constraints, Computational Systems Neurobiology, Publisher: Springer Netherlands, Pages: 227-257, ISBN: 9400738579

Book chapter

Faisal AA, 2012, How internal metabolic state changes motor coordination computations in reaching movements, Frontiers in Computational Neuroscience, Vol: 6

Journal article

Mehraban Pour Behbahani F, Faisal AA, 2012, Human category learning is consistent with Bayesian generative but not discriminative classification strategies, Frontiers in Computational Neuroscience

Journal article

Neishabouri A, Finn A, Pristera A, Okuse K, Faisal AA et al., 2012, Stochastic simulations reveal how clustering sodium ion channels in thin axons more than doubles the metabolic efficiency of action potentials, Frontiers in Computational Neuroscience

Journal article

Mehraban Pour Behbahani F, Faisal AA, 2012, Visual Object Classification is Consistent with Bayesian Generative Representations, Cosyne

The ability to learn and distinguish categories is essential for human behavior, and the underlying neural computations are actively investigated (Freedman, 2011). Taking a normative view, we can relate categorisation to the distinction between generative and discriminative classification in machine learning. Generative approaches solve the categorization problem by building a probabilistic model of how each category was formed and then inferring category labels. In contrast, the discriminative approach learns a direct mapping between input and label. Recent work (Hsu and Griffiths, 2010) shows that human classification is consistent with discriminative or generative classification depending on conditions. We hypothesize that humans employ generative mechanisms for classification when not encouraged otherwise. To test this we exploit a counterintuitive prediction for generative classification, namely how the discrimination boundary between two classes shifts if one category's distribution is revealed to be broader during learning. We trained N=17 subjects to distinguish two classes, A and B, in two tasks (two Persian characters, armadillo-horse stick-drawings). The classes in each task were parameterized by two scalars; objects for each class are drawn from Gaussian parameter distributions, with equal variance and different means (class "prototypes"). Next, subjects classify unlabelled examples drawn between the classes, so we can infer their discrimination boundary. This process is then repeated but includes training data for class A which lie far away from B. Counter-intuitively, generative classification predicts a shift of the discrimination boundary closer to B. Conversely, discriminative classifiers will show either no shift of the boundary or a shift of the boundary away from class B. Our results show that categorization in both tasks is consistent with generative and not discriminative classifiers, as classification boundaries shifted towards B

Conference paper

Abramova E, Kuhn D, Faisal A, 2011, Combining Markov Decision Processes with Linear Optimal Controllers, Pages: 3

Conference paper

Pregno G, Zamburlin P, Gambarotta G, Farcito S, Licheri V, Fregnan F, Perroteau I, Lovisolo D, Bovolin P et al., 2011, Neuregulin1/ErbB4-induced migration in ST14A striatal progenitors: calcium-dependent mechanisms and modulation by NMDA receptor activation, BMC Neuroscience, Vol: 12, ISSN: 1471-2202

Journal article

Satti R, Deakin G, Tanaka RJ, Faisal A et al., 2011, Genes for adaptation and learning spanning evolution: computational comparison between synaptic transmission and chemotactic signaling protein networks, 20th Annual Computational Neuroscience Meeting

Conference paper

Taylor S, Faisal A, 2011, Does the cost function of human motor control depend on the internal metabolic state?, BMC Neuroscience, Vol: 12, Pages: P99

Journal article

Abbott WW, Faisal AA, 2011, Ultra-low cost eyetracking as a high-information throughput alternative to BMIs, Annual Computational Neuroscience Meeting (CNS)

Advancement in brain machine interfaces (BMIs) holds the hope of restoring vital degrees of independence for patients with high-level neurological disorders, improving their quality of life while reducing their dependency on others [1]. Unfortunately these emerging rehabilitative methods come at considerable clinical and post-clinical operational costs, beyond the means of the majority of patients [1]. Here we consider an alternative: eye movements. Eye movements provide a feasible alternative BMI basis as they tend to be spared degradation by neurological disorders such as muscular dystrophy, high-level spinal injuries and multiple sclerosis, because the oculomotor system is innervated from the midbrain rather than the spinal column. Eye tracking and gaze-based interaction is a long-established field; however, cost, accuracy and effective system integration have meant these systems are not widely used. We have developed an ultra-low cost 3D gaze-tracker based on mass-market video game equipment that matches the performance of commercial systems 500 times as expensive. We developed a calibration method for 3D eye gaze that requires only a standard computer monitor as opposed to 3D equipment, needs no information on eye geometry, and allows free head movement following calibration. Our method enables us to track eye movements off the computer screen, e.g. to drive a wheelchair or the end-point of a prosthetic arm. Unlike other BMIs, training is virtually unnecessary, as the control intention can be taken from natural eye movements. By tracking both eyes, a significantly higher information rate can be obtained, both by making 2D gaze estimates more accurate and by adding another spatial dimension in which to make commands. Our ultra-low cost 3D eye tracking system operates at below 50 USD material cost with 0.5-1 degree resolution at a 120 Hz sample rate; we achieved this by reverse engineering mass-marketed video console components. This high-speed accuracy allows us to drive

Conference paper

Faisal A, Stout D, Apel J, Bradley Bet al., 2010, The Manipulative Complexity of Lower Paleolithic Stone Toolmaking, PLOS ONE, Vol: 5, ISSN: 1932-6203

Journal article

Faisal AA, 2010, Stochastic Simulation of Neurons, Axons, and Action Potentials, Stochastic Methods in Neuroscience, ISBN: 9780199235070

Variability is inherent in neurons. To account for variability we have to make use of stochastic models. We will take a look at this biologically more rigorous approach by studying the fundamental signal of our brain's neurons: the action potential and the voltage-gated ion channels mediating it. We will discuss how to model and simulate the action potential stochastically. We review the methods and show that classic stochastic approximation methods fail at capturing important properties of the highly nonlinear action potential mechanism, making the use of accurate models and simulation methods essential for understanding the neural code. We will review what stochastic modelling has taught us about the function, structure, and limits of action potential signalling in neurons, the most surprising insight being that stochastic effects of individual signalling molecules become relevant for whole-cell behaviour. We suggest that most of the experimentally observed neuronal variability can be explained from the bottom-up as generated by molecular sources of thermodynamic noise.

Book chapter

