Imperial College London

Dr A. Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 | a.faisal | Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus


 

Publications


Stout D, Chaminade T, Apel J, Shafti A, Faisal AA et al., 2021, The measurement, evolution, and neural representation of action grammars of human behavior, Scientific Reports, Vol: 11

Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.

Journal article
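
The grammar-extraction pipeline is not spelled out in the abstract above; as a rough illustration of how the structural complexity of an action sequence over a fixed "alphabet" can be scored, the sketch below (Python) uses a Lempel-Ziv-style parse as a stand-in complexity measure. The action symbols and example sequences are hypothetical, not the paper's method.

    def lz_complexity(seq):
        """Number of distinct phrases in a simple Lempel-Ziv-style parse."""
        phrases, phrase = set(), ""
        for symbol in seq:
            phrase += symbol
            if phrase not in phrases:
                phrases.add(phrase)   # new phrase found, start a fresh one
                phrase = ""
        return len(phrases) + (1 if phrase else 0)

    # Hypothetical elementary actions: P = percussion, R = rotate core, I = inspect.
    oldowan = "PRPRPRPRPRPRPRPR"      # highly repetitive action sequence
    acheulean = "PRIPPRIRPIPRRIPI"    # more varied sub-sequences
    print(lz_complexity(oldowan), lz_complexity(acheulean))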

Ortega P, Faisal A, 2021, Deep Learning multimodal fNIRS & EEG signals for bimanual grip force decoding, Journal of Neural Engineering, ISSN: 1741-2560

Journal article

Festor P, Luise G, Komorowski M, Faisal A et al., 2021, Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition, ICML2021 workshop on Interpretable Machine Learning in Healthcare

Conference paper


Wu Y, Haar S, Faisal A, 2021, Reproducing Human Motor Adaptation in Spiking Neural Simulation and known Synaptic Learning Rules

Sensorimotor adaptation enables us to adjust our goal-oriented movements in response to external perturbations. These phenomena have been studied experimentally and computationally at the level of human and animal reaching movements, and have clear links to the cerebellum, as evidenced by cerebellar lesions and neurodegeneration. Yet, despite our macroscopic understanding of the high-level computational mechanisms, it is unclear how these are mapped and implemented in the neural substrates of the cerebellum at a cellular-computational level. We present here a novel spiking neural circuit model of the sensorimotor system, including a cerebellum, which controls physiological muscle models to reproduce behavioural experiments. Our cerebellar model is composed of spiking neuron populations reflecting cells in the cerebellar cortex and deep cerebellar nuclei, which generate motor corrections to change behaviour in response to perturbations. The model proposes two learning mechanisms for adaptation, predictive learning and memory formation, which are implemented with synaptic updating rules. Our model is tested in a force-field sensorimotor adaptation task and successfully reproduces several phenomena arising from human adaptation, including well-known learning curves, aftereffects, savings and other multi-rate learning effects. This reveals the capability of our model to learn from perturbations and generate motor corrections while providing a bottom-up view of the neural basis of adaptation. It also shows the potential to predict how patients with specific types of cerebellar damage will perform in behavioural experiments. We explore this in in silico experiments where we selectively incapacitate selected cerebellar circuits of the model to generate and reproduce defined motor learning deficits.

Journal article
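
The spiking circuit itself is not given in the abstract above, but the multi-rate phenomena it reproduces (learning curves, aftereffects, savings) are classically described by a two-state fast/slow learner; the sketch below (Python) simulates that textbook model, with retention and learning rates chosen as plausible assumptions rather than the paper's fitted values.

    import numpy as np

    A_f, B_f = 0.92, 0.03    # fast process: forgets quickly, learns quickly
    A_s, B_s = 0.996, 0.004  # slow process: retains well, learns slowly

    def simulate(perturbation):
        x_f = x_s = 0.0
        adaptation = []
        for p in perturbation:
            error = p - (x_f + x_s)        # motor error on this trial
            x_f = A_f * x_f + B_f * error  # each state learns from the error
            x_s = A_s * x_s + B_s * error
            adaptation.append(x_f + x_s)
        return np.array(adaptation)

    # Force field on for 300 trials, then off: the non-zero adaptation after
    # trial 300 is the aftereffect; relearning on re-exposure shows savings.
    schedule = np.concatenate([np.ones(300), np.zeros(100)])
    print(simulate(schedule)[[0, 299, 300]])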

Festor P, Habil I, Jia Y, Gordon A, Faisal A, Komorowski M et al., 2021, Levels of Autonomy & Safety Assurance for AI-based Clinical Decision Systems, WAISE 2021: 4th International Workshop on Artificial Intelligence Safety Engineering

Conference paper

Faisal AA, 2021, Putting touch into action, Science, Vol: 372, Pages: 791-792, ISSN: 0036-8075

Journal article

Patel BV, Haar S, Handslip R, Auepanwiriyakul C, Lee TM-L, Patel S, Harston JA, Hosking-Jervis F, Kelly D, Sanderson B, Borgatta B, Tatham K, Welters I, Camporota L, Gordon AC, Komorowski M, Antcliffe D, Prowle JR, Puthucheary Z, Faisal AA et al., 2021, Natural history, trajectory, and management of mechanically ventilated COVID-19 patients in the United Kingdom, Intensive Care Medicine, Vol: 47, Pages: 549-565, ISSN: 0342-4642

Purpose: The trajectory of mechanically ventilated patients with coronavirus disease 2019 (COVID-19) is essential for clinical decisions, yet the focus so far has been on admission characteristics without consideration of the dynamic course of the disease in the context of applied therapeutic interventions.

Methods: We included adult patients undergoing invasive mechanical ventilation (IMV) within 48 h of intensive care unit (ICU) admission with complete clinical data until ICU death or discharge. We examined the importance of factors associated with disease progression over the first week, implementation of and responsiveness to interventions used in acute respiratory distress syndrome (ARDS), and ICU outcome. We used machine learning (ML) and Explainable Artificial Intelligence (XAI) methods to characterise the evolution of clinical parameters; our ICU data visualisation tool is available as a web-based widget (https://www.CovidUK.ICU).

Results: Data for 633 adults with COVID-19 who underwent IMV between 01 March 2020 and 31 August 2020 were analysed. Overall mortality was 43.3% and highest with non-resolution of hypoxaemia (60.4% vs 17.6%; P < 0.001; median PaO2/FiO2 on the day of death was 12.3 (8.9–18.4) kPa) and non-response to proning (69.5% vs 31.1%; P < 0.001). Two ML models using weeklong data demonstrated an increased predictive accuracy for mortality compared to admission data (74.5% and 76.3% vs 60%, respectively). XAI models highlighted the increasing importance, over the first week, of PaO2/FiO2 in predicting mortality. Prone positioning improved oxygenation only in 45% of patients. A higher peak pressure (OR 1.42 [1.06–1.91]; P < 0.05), raised respiratory component (OR 1.71 [1.17–2.5]; P < 0.01) and cardiovascular component (OR 1.36 [1.04–1.75]; P < 0.05) of the sequential organ failure assessment (SOFA) score and raised lactate (OR 1.33 [0.99–1.79

Journal article
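
As a loose illustration of the modelling pattern described above (predict ICU mortality from a week of physiology, then ask which days matter), the sketch below (Python) trains a gradient-boosted classifier on synthetic day-wise PaO2/FiO2 features and ranks them by permutation importance. The feature layout and data are invented; the study's actual models and XAI pipeline are not reproduced.

    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.inspection import permutation_importance

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 7))  # synthetic PaO2/FiO2 on days 1..7
    # Toy ground truth in which day 7 drives mortality the most.
    y = (X[:, 6] + 0.3 * rng.normal(size=500) < 0).astype(int)

    model = GradientBoostingClassifier().fit(X, y)
    imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
    for day, score in enumerate(imp.importances_mean, start=1):
        print(f"day {day}: importance {score:.3f}")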

Pieritz S, Khwaja M, Faisal A, Matic A et al., 2021, Personalised recommendations in mental health Apps: the impact of autonomy and data sharing, ACM Conference on Human Factors in Computing Systems (CHI), Publisher: ACM

The recent growth of digital interventions for mental well-being prompts a call to arms to explore the delivery of personalised recommendations from a user's perspective. In a randomised placebo study with a two-way factorial design, we analysed the difference between an autonomous user experience and personalised guidance, with respect to both users' preference and their actual usage of a mental well-being app. Furthermore, we explored users' preferences for sharing their data in exchange for personalised recommendations, by juxtaposing questionnaires and mobile sensor data. Interestingly, self-reported results indicate a preference for personalised guidance, whereas behavioural data suggest that a blend of autonomous choice and recommended activities results in higher engagement. Additionally, although users reported a strong preference for filling out questionnaires instead of sharing their mobile data, the data source did not have any impact on actual app use. We discuss the implications of our findings and provide takeaways for designers of mental well-being applications.

Conference paper

Kadirvelu B, Burcea G, Quint JK, Costelloe CE, Faisal AA et al., 2021, Covid-19 does not look like what you are looking for: Clustering symptoms by nation and multi-morbidities reveal substantial differences to the classical symptom triad

COVID-19 is by convention characterised by a triad of symptoms: cough, fever and loss of taste/smell. The aim of this study was to examine clustering of COVID-19 symptoms based on underlying chronic disease and geographical location. Using a large global symptom survey of 78,299 responders in 190 different countries, we examined symptom profiles in relation to geolocation (grouped by country) and underlying chronic disease (single, co- or multi-morbidities) associated with a positive COVID-19 test result, using statistical and machine learning methods to group populations by underlying disease, countries, and symptoms. Taking the responses of 7980 responders with a positive COVID-19 test in the top 5 contributing countries, we find that the most frequently reported symptoms differ across the globe: for example, fatigue 4108 (51.5%), headache 3640 (45.6%) and loss of smell and taste 3563 (44.6%) are the most reported symptoms globally. However, symptom patterns differ by continent; India reported a significantly lower proportion of headache (22.8% vs 45.6%, p<0.05) and itchy eyes (7.0% vs 15.3%, p<0.05) than other countries, as does Pakistan (33.6% vs 45.6%, p<0.05 and 8.6% vs 15.3%, p<0.05). Mexico and Brazil report significantly less of these symptoms. As with geographic location, we find people differed in their reported symptoms if they suffered from specific underlying diseases. For example, COVID-19-positive responders with asthma or other lung disease were more likely to report shortness of breath as a symptom, compared with COVID-19-positive responders who had no underlying disease (25.3% vs 13.7%, p<0.05, and 24.2% vs 13.7%, p<0.05). Responders with no underlying chronic diseases were more likely to report loss of smell and taste as a symptom (46%), compared with responders with type 1 diabetes (21.3%), type 2 diabetes (33.5%), lung disease (29.3%), or hypertension

Journal article

Lannou EL, Post B, Haar S, Brett SJ, Kadirvelu B, Faisal AA et al., 2021, Clustering of patient comorbidities within electronic medical records enables high-precision COVID-19 mortality prediction

We present an explainable AI framework to predict mortality after a positive COVID-19 diagnosis based solely on data routinely collected in electronic healthcare records (EHRs) obtained prior to diagnosis. We grounded our analysis on the ½-million-person UK Biobank and linked NHS COVID-19 records. We developed a method to capture the complexities and large variety of clinical codes present in EHRs, and we show that these have a larger impact on risk than all other patient data but age. We use a form of clustering for natural language processing of the clinical codes, specifically topic modelling by Latent Dirichlet Allocation (LDA), to generate a succinct digital fingerprint of a patient's full secondary-care clinical history, i.e. their comorbidities and past interventions. These digital comorbidity fingerprints offer immediately interpretable clinical descriptions that are meaningful, e.g. grouping cardiovascular disorders with common risk factors, but also novel groupings that are not obvious. The comorbidity fingerprints differ in both their breadth and depth from existing observational disease associations in the COVID-19 literature. Taking this data-driven approach allows us to avoid human induction bias and confirmation bias during selection of what are important potential predictors of COVID-19 mortality. Together with age, these digital fingerprints are the single most important factor in our predictor. This holds the potential for improving individual risk profiling for clinical decisions and the identification of groups for public health interventions such as vaccine programmes. Combining our digital precondition fingerprints with demographic characteristics allows us to match or exceed the performance of existing state-of-the-art COVID-19 mortality predictors (EHCF) which have been developed through expert consensus. Our precondition fingerprinting and entire mortality prediction analysis

Journal article
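
A minimal sketch (Python) of the comorbidity-fingerprint idea described above: treat each patient's clinical codes as a "document" and reduce it to a topic mixture with LDA. The example codes, patients and topic count are hypothetical; this is not the paper's pipeline.

    from sklearn.decomposition import LatentDirichletAllocation
    from sklearn.feature_extraction.text import CountVectorizer

    # Each string is one patient's secondary-care code history (made up).
    patients = [
        "I10 E11 I25 I48",   # cardiometabolic-looking codes
        "J44 J45 J18",       # respiratory-looking codes
        "I10 I25 N18 E11",
    ]
    counts = CountVectorizer().fit_transform(patients)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    fingerprints = lda.fit_transform(counts)  # one topic mixture per patient
    print(fingerprints.round(2))              # the "digital fingerprint"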

Dudley WL, Faisal A, Shafti SA, 2021, Real-world to virtual - flexible and scalable investigations of human-agent collaboration, CHI 2021, Publisher: ACM

Conference paper

Wei X, Ortega P, Faisal A, 2021, Inter-subject deep transfer learning for motor imagery EEG decoding, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

Convolutional neural networks (CNNs) have become a powerful technique to decode EEG and are the benchmark for motor imagery EEG brain-computer interface (BCI) decoding. However, it is still challenging to train CNNs on multiple subjects' EEG without decreasing individual performance. This is known as the negative transfer problem: learning from dissimilar distributions causes CNNs to misrepresent each of them instead of learning a richer representation. As a result, CNNs cannot directly use multiple subjects' EEG to enhance model performance. To address this problem, we extend deep transfer learning techniques to the EEG multi-subject training case. We propose a multi-branch deep transfer network, the Separate-Common-Separate Network (SCSN), based on splitting the network's feature extractors for individual subjects. We also explore the possibility of applying maximum mean discrepancy (MMD) to the SCSN (SCSN-MMD) to better align distributions of features from individual feature extractors. The proposed network is evaluated on the BCI Competition IV 2a dataset and our online recorded dataset. Results show that the proposed SCSN (81.8%, 53.2%) and SCSN-MMD (81.8%, 54.8%) outperformed the benchmark CNN (73.4%, 48.8%) on both datasets using multiple subjects. Our proposed networks show the potential to utilise larger multi-subject datasets to train an EEG decoder without being influenced by negative transfer.

Conference paper
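
The SCSN architecture is not reproduced here, but the MMD term used to align feature distributions across subject-specific branches can be sketched compactly in Python/PyTorch. The Gaussian-kernel bandwidth, tensor shapes and biased estimator are illustrative assumptions.

    import torch

    def rbf_mmd2(x, y, sigma=1.0):
        """Biased MMD^2 estimate between two batches of feature vectors."""
        def kernel(a, b):
            d2 = torch.cdist(a, b) ** 2
            return torch.exp(-d2 / (2 * sigma ** 2))
        return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

    feats_a = torch.randn(32, 64)  # features from one subject's branch
    feats_b = torch.randn(32, 64)  # features from another subject's branch
    loss_mmd = rbf_mmd2(feats_a, feats_b)  # added to the classification loss
    print(loss_mmd.item())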

Shafti SA, Faisal A, 2021, Non-invasive cognitive-level human interfacing for the robotic restoration of reaching & grasping, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

Assistive and wearable robotics have the potential to support humans with different types of motor impairments to become independent and fulfil their activities of daily living successfully. The success of these robot systems, however, relies on the ability to meaningfully decode human action intentions and carry them out appropriately. Neural interfaces have been explored for use in such systems with several successes; however, they tend to be invasive and require training periods in the order of months. We present a robotic system for human augmentation, capable of actuating the user's arm and fingers for them, effectively restoring the capability of reaching, grasping and manipulating objects, controlled solely through the user's eye movements. We combine wearable eye tracking, the visual context of the environment and the structural grammar of human actions to create a cognitive-level assistive robotic setup that enables users to fulfil activities of daily living, while conserving interpretability and the agency of the user. The interface is worn, calibrated and ready to use within 5 minutes. Users learn to control and make successful use of the system with an additional 5 minutes of interaction. The system is tested with 5 healthy participants, showing an average success rate of 96.6% on first attempt across 6 tasks.

Conference paper

Ortega San Miguel P, Zhao T, Faisal AA, 2021, Deep real-time decoding of bimanual grip force from EEG & fNIRS, 10th International IEEE EMBS Conference on Neural Engineering (NER 2021), Publisher: IEEE

Non-invasive cortical neural interfaces have only achieved modest performance in cortical decoding of limb movements and their forces, compared to invasive brain-computer interfaces (BCIs). While non-invasive methodologies are safer, cheaper and vastly more accessible technologies, their signals suffer from poor resolution in either the spatial domain (EEG) or the temporal domain (the BOLD signal of functional near-infrared spectroscopy, fNIRS). Non-invasive BCI decoding of bimanual force generation and of the continuous force signal has not been realised before, and so we introduce an isometric grip force tracking task to evaluate the decoding. We find that combining EEG and fNIRS using deep neural networks works better than linear models to decode continuous grip force modulations produced by the left and the right hand. Our multi-modal deep learning decoder achieves 55.2 FVAF[%] in force reconstruction and improves decoding performance by at least 15% over each individual modality. Our results show a way to achieve continuous hand force decoding using cortical signals obtained with non-invasive mobile brain imaging, with immediate impact for rehabilitation, restoration and consumer applications.

Conference paper

Wannawas N, Subramanian M, Faisal A, 2021, Neuromechanics-based deep reinforcement learning of neurostimulation control in FES cycling, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

Functional Electrical Stimulation (FES) can restore motion to a paralysed person's muscles. Yet controlling the stimulation of many muscles to restore the practical function of entire limbs is an unsolved problem. Current neurostimulation engineering still relies on 20th-century control approaches and correspondingly shows only modest results that require daily tinkering to operate at all. Here, we present our state-of-the-art deep reinforcement learning approach, developed for real-time adaptive neurostimulation of paralysed legs for FES cycling. Core to our approach is the integration of a personalised neuromechanical component into our reinforcement learning (RL) framework, which allows us to train the model efficiently, without demanding extended training sessions with the patient, and to work out-of-the-box. Our neuromechanical component merges musculoskeletal models of muscle and tendon function with a multi-state model of muscle fatigue, to render the neurostimulation responsive to a paraplegic cyclist's instantaneous muscle capacity. Our RL approach outperforms PID and fuzzy logic controllers in accuracy and performance. Crucially, our system learned to stimulate a cyclist's legs from ramping up speed at the start to maintaining a high cadence in steady-state racing as the muscles fatigue. A part of our RL neurostimulation system was successfully deployed at the Cybathlon 2020 bionic Olympics in the FES discipline, with our paraplegic cyclist winning the Silver medal among 9 competing teams.

Conference paper
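
The personalised neuromechanical component is only named in the abstract above; as an indication of how a multi-state muscle-fatigue model can sit inside a control loop, the sketch below (Python) implements a generic three-compartment (rested/active/fatigued) model. The rate constants and formulation are assumptions, not the paper's fitted model.

    F, R, DT = 0.01, 0.002, 0.01  # fatigue rate, recovery rate, step [s] (assumed)

    def step(m_rest, m_act, m_fat, drive):
        """Advance compartment fractions one Euler step under stimulation drive."""
        activation = drive * m_rest            # recruit from the rested pool
        dm_act = activation - F * m_act        # active fibres fatigue at rate F
        dm_fat = F * m_act - R * m_fat         # fatigued fibres recover at rate R
        dm_rest = R * m_fat - activation
        return m_rest + DT * dm_rest, m_act + DT * dm_act, m_fat + DT * dm_fat

    state = (1.0, 0.0, 0.0)            # fully rested muscle
    for _ in range(60_000):            # 10 minutes of constant FES drive
        state = step(*state, drive=0.5)
    print("rested=%.2f active=%.2f fatigued=%.2f" % state)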

Subramanian M, Park S, Orlov P, Shafti A, Faisal A et al., 2021, Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform, 10th International IEEE EMBS Conference on Neural Engineering, Publisher: IEEE

We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms by decoding how the user looks at the environment to understand where they want to navigate their mobility device. However, many natural eye movements are not relevant for action intention decoding; only some are, which places a challenge on decoding, the so-called Midas Touch Problem. Here, we present a new solution consisting of 1. deep computer vision to understand what object a user is looking at in their field of view, with 2. an analysis of where on the object's bounding box the user is looking, to 3. use a simple machine learning classifier to determine whether the overt visual attention on the object is predictive of a navigation intention to that object. Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or just looks at it. Crucially, we find that when users look at an object and imagine they were moving towards it, the resulting eye movements from this motor imagery (akin to neural interfaces) remain decodable. Once a driving intention, and thus also the location, is detected, our system instructs our autonomous wheelchair platform, the A.Eye-Drive, to navigate to the desired object while avoiding static and moving obstacles. Thus, for navigation purposes, we have realised a cognitive-level human interface, as it requires the user only to cognitively interact with the desired goal, not to continuously steer their wheelchair to the target (low-level human interfacing).

Conference paper
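
As an illustration of the final decoding stage described above (a simple classifier deciding whether gaze on an object's bounding box reflects a navigation intention), the sketch below (Python) trains logistic regression on invented fixation features; the feature choice, toy labelling rule and data are all hypothetical.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    # Per-fixation features: dwell time, gaze offset from the bounding-box
    # centre (x, y, normalised), and fixation dispersion (all synthetic).
    X = rng.normal(size=(200, 4))
    y = (X[:, 0] > 0).astype(int)  # toy rule: long dwells signal intent

    clf = LogisticRegression().fit(X, y)
    new_fixation = [[1.2, 0.1, -0.05, 0.3]]
    print("drive to object" if clf.predict(new_fixation)[0] else "just looking")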

Denghao L, Ortega San Miguel P, Wei X, Faisal AA et al., 2021, Model-agnostic meta-learning for EEG motor imagery decoding in brain-computer-interfacing, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

We introduce here the idea of meta-learning for training EEG BCI decoders. Meta-learning is a way of training machine learning systems so they learn to learn. We apply meta-learning to a simple deep learning BCI architecture and compare it to transfer learning on the same architecture. Our meta-learning strategy operates by finding optimal parameters for the BCI decoder so that it can quickly generalise between different users and recording sessions, thereby also generalising to new users or new sessions quickly. We tested our algorithm on the Physionet EEG motor imagery dataset. Our approach increased motor imagery classification accuracy from 60% to 80%, outperforming other algorithms under the little-data condition. We believe that establishing the meta-learning or learning-to-learn approach will help neural engineering and human interfacing address the challenge of quickly setting up decoders of neural signals, making them more suitable for daily life.

Conference paper
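
A minimal sketch (Python/PyTorch) of the meta-learning idea referenced above, in the MAML style: an inner loop adapts a copy of the decoder to one subject with a gradient step, and an outer loop updates the shared initialisation so that adaptation becomes fast. The tiny linear "decoder" and synthetic per-subject batches stand in for an EEG CNN; this is not the paper's exact algorithm.

    import torch

    decoder = torch.nn.Linear(16, 2)  # shared initialisation to be meta-learned
    meta_opt = torch.optim.Adam(decoder.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()

    for meta_step in range(100):
        # Support data from one synthetic "subject".
        x, y = torch.randn(32, 16), torch.randint(0, 2, (32,))
        w, b = decoder.weight, decoder.bias
        inner_loss = loss_fn(x @ w.T + b, y)
        # One inner adaptation step, keeping the graph for second-order grads.
        gw, gb = torch.autograd.grad(inner_loss, (w, b), create_graph=True)
        w2, b2 = w - 0.1 * gw, b - 0.1 * gb
        # Outer loss: adapted weights evaluated on the subject's query data.
        xq, yq = torch.randn(32, 16), torch.randint(0, 2, (32,))
        meta_loss = loss_fn(xq @ w2.T + b2, yq)
        meta_opt.zero_grad()
        meta_loss.backward()
        meta_opt.step()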

Ortega San Miguel P, Faisal AA, 2021, HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE

We solve the fNIRS left/right hand force decoding problem using a data-driven approach with a convolutional neural network architecture, the HemCNN. We test HemCNN's capability to decode, in a streaming fashion, which hand, left or right, acted, from fNIRS data. HemCNN learned to detect which hand executed a grasp at a naturalistic hand action speed of 1 Hz, outperforming standard methods. Since HemCNN does not require baseline correction and the convolution operation is invariant to time translations, our method can help to unlock fNIRS for a variety of real-time tasks. Mobile brain imaging and mobile brain machine interfacing can benefit from this to develop real-world neuroscience and practical human neural interfacing based on BOLD-like signals, for the evaluation, assistance and rehabilitation of force generation, such as the fusion of fNIRS with EEG signals.

Conference paper

Haar Millo S, Sundar G, Faisal A, 2021, Embodied virtual reality for the study of real-world motor learning, PLoS One, Vol: 16, ISSN: 1932-6203

The motor-learning literature focuses on simple laboratory tasks due to their controlled manner and the ease of applying manipulations to induce learning and adaptation. Recently, we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining a sense of embodiment. The setup was validated by comparing real-world ball trajectories with the trajectories of the virtual balls, calculated by the physics engine. We then ran our short-term motor learning protocol in the embodied VR. Subjects played billiard shots while holding the physical cue and hitting a physical ball on the table, seeing it all in VR. We found short-term motor learning trends in the embodied VR comparable to those we previously reported in the physical real-world task. Embodied VR can be used for learning real-world tasks in a highly controlled environment, enabling the visual manipulations common in laboratory tasks and rehabilitation to be applied to a real-world full-body task. Embodied VR enables us to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning components, thus enabling us to address current questions of motor learning in real-world tasks. Such a setup can potentially be used for rehabilitation, where VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment.

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and acceptability of wearable motion tracking for inpatient monitoring using smartwatches, Sensors, Vol: 20, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), and optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and surveyed the experiences and attitudes of hospital patients (N = 44) and staff (N = 15) following a clinical test in which patients wore smartwatches for 1.5–24 h in the second study. Results indicate that for acceleration, Xsens is more accurate than the Apple Series 5 and 3 smartwatches and Axivity AX3 (RMSE 1.66 ± 0.12 m·s⁻², R² 0.78 ± 0.02; RMSE 2.29 ± 0.09 m·s⁻², R² 0.56 ± 0.01; RMSE 2.14 ± 0.09 m·s⁻², R² 0.49 ± 0.02; RMSE 4.12 ± 0.18 m·s⁻², R² 0.34 ± 0.01, respectively). For angular velocity, the Series 5 and 3 smartwatches achieved performance similar to Xsens, with RMSE 0.22 ± 0.02 rad·s⁻¹, R² 0.99 ± 0.00 and RMSE 0.18 ± 0.01 rad·s⁻¹, R² 1.00 ± 0.00, respectively. Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that consumer smartwatches achieved moderate to strong levels of accuracy compared to laboratory gold standard and are acceptable for pervasive monitoring of motion/behaviour within hospital settings.

Journal article
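
The accuracy comparison above reduces to computing RMSE and R² between time-aligned sensor and reference traces; a minimal Python sketch with synthetic signals follows (the sinusoid and noise level are placeholders for real recordings).

    import numpy as np
    from sklearn.metrics import mean_squared_error, r2_score

    t = np.linspace(0, 10, 1000)
    optical = np.sin(t)  # gold-standard trace (synthetic stand-in)
    watch = optical + 0.2 * np.random.default_rng(0).normal(size=t.size)

    rmse = np.sqrt(mean_squared_error(optical, watch))
    print(f"RMSE {rmse:.2f}, R² {r2_score(optical, watch):.2f}")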

Li L, Faisal A, 2020, Bayesian distributional policy gradients, AAAI Conference on Artificial Intelligence, Publisher: AAAI

Distributional reinforcement learning (distributional RL) maintains the entire probability distribution of the reward-to-go, i.e. the return, providing a more principled approach to account for the uncertainty associated with policy performance, which may be beneficial for trading off exploration and exploitation and for policy learning in general. Previous work in distributional RL focused mainly on computing state-action return distributions; here we model the state-return distributions. This enables us to translate successful conventional RL algorithms that are based on state values into distributional RL. We formulate the distributional Bellman operation as an inference-based auto-encoding process that minimises Wasserstein metrics between target and model return distributions. Our algorithm, BDPG (Bayesian Distributional Policy Gradients), uses adversarial training in joint-contrastive learning to learn a variational posterior from the returns. Moreover, we can now interpret the return prediction uncertainty as an information gain, which allows us to obtain a new curiosity measure that helps BDPG steer exploration actively and efficiently. In our experiments on Atari 2600 games and MuJoCo tasks, we demonstrate how BDPG learns generally faster and with higher asymptotic performance than reference distributional RL algorithms, including on well-known hard-exploration tasks.

Conference paper

Gallego-Delgado P, James R, Browne E, Meng J, Umashankar S, Tan L, Picon C, Mazarakis ND, Faisal AA, Howell OW, Reynolds R et al., 2020, Neuroinflammation in the normal-appearing white matter (NAWM) of the multiple sclerosis brain causes abnormalities at the nodes of Ranvier, PLoS Biology, Vol: 18, Pages: 1-36, ISSN: 1544-9173

Changes to the structure of nodes of Ranvier in the normal-appearing white matter (NAWM) of multiple sclerosis (MS) brains are associated with chronic inflammation. We show that the paranodal domains in MS NAWM are longer on average than in controls, with Kv1.2 channels dislocated into the paranode. These pathological features are reproduced in a model of chronic meningeal inflammation generated by the injection of lentiviral vectors for the lymphotoxin-α (LTα) and interferon-γ (IFNγ) genes. We show that tumour necrosis factor (TNF), IFNγ, and glutamate can provoke paranodal elongation in cerebellar slice cultures, which could be reversed by an N-methyl-D-aspartate (NMDA) receptor blocker. When these changes were inserted into a computational model to simulate axonal conduction, a rapid decrease in velocity was observed, reaching conduction failure in small-diameter axons. We suggest that glial cells activated by pro-inflammatory cytokines can produce high levels of glutamate, which triggers paranodal pathology, contributing to axonal damage and conduction deficits.

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and Acceptability of Wearable Motion Tracking Smartwatches for Inpatient Monitoring, Sensors, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), relative to gold-standard optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and surveyed the experiences and attitudes of hospital patients (N=44) and staff (N=15) following a clinical test in which patients wore smartwatches for 1.5-24 hours in the second study. Results indicate that for acceleration, Xsens is more accurate than the Apple smartwatches and Axivity AX3 (RMSE 0.17±0.01 g, R² 0.88±0.01; RMSE 0.22±0.01 g, R² 0.64±0.01; RMSE 0.42±0.01 g, R² 0.43±0.01, respectively). However, for angular velocity, the smartwatches are marginally more accurate than Xsens (RMSE 1.28±0.01 rad/s, R² 0.85±0.00; RMSE 1.37±0.01 rad/s, R² 0.82±0.01, respectively). Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that smartwatches achieved moderate to strong levels of accuracy compared to a gold-standard reference and are likely to be accepted as a pervasive measure of motion/behaviour within hospitals.

Journal article

Haar Millo S, van Assel C, Faisal A, 2020, Motor learning in real-world pool billiards, Scientific Reports, Vol: 10, Pages: 1-13, ISSN: 2045-2322

The neurobehavioral mechanisms of human motor control and learning evolved in free-behaving, real-life settings, yet they are studied mostly in reductionistic lab-based experiments. Here we take a step towards a more real-world motor neuroscience, using wearables for naturalistic full-body motion tracking and the sport of pool billiards to frame a real-world skill-learning experiment. First, we asked if well-known features of motor learning in lab-based experiments generalize to a real-world task. We found similarities in many features, such as multiple learning rates and the relationship between task-related variability and motor learning. Our data-driven approach reveals the structure and complexity of movement, variability, and motor learning, enabling an in-depth understanding of the structure of motor learning in three ways: First, while expecting most of the movement learning to be done by the cue-wielding arm, we find that motor learning affects the whole body, changing motor control from head to toe. Second, during learning, all subjects decreased their movement variability and their variability in the outcome. Subjects who were initially more variable were also more variable after learning. Lastly, when screening the link across subjects between initial variability in individual joints and learning, we found that only the initial variability in the right forearm supination shows a significant correlation to the subjects' learning rates. This is in line with the relationship between learning and variability: while learning leads to an overall reduction in movement variability, only initial variability in specific task-relevant dimensions can facilitate faster learning.

Journal article

Patel BV, Haar S, Handslip R, Lee TM-L, Patel S, Harston JA, Hosking-Jervis F, Kelly D, Sanderson B, Bogatta B, Tatham K, Welters I, Camporota L, Gordon AC, Komorowski M, Antcliffe D, Prowle JR, Puthucheary Z, Faisal AA et al., 2020, Natural history, trajectory, and management of mechanically ventilated COVID-19 patients in the United Kingdom, Publisher: Cold Spring Harbor Laboratory

Background: To date the description of mechanically ventilated patients with Coronavirus Disease 2019 (COVID-19) has focussed on admission characteristics, with no consideration of the dynamic course of the disease. Here, we present a data-driven analysis of granular, daily data from a representative proportion of patients undergoing invasive mechanical ventilation (IMV) within the United Kingdom (UK) to evaluate the complete natural history of COVID-19.

Methods: We included adult patients undergoing IMV within 48 hours of ICU admission with complete clinical data until death or ICU discharge. We examined factors and trajectories that determined disease progression and responsiveness to ARDS interventions. Our data visualisation tool is available as a web-based widget (https://www.CovidUK.ICU).

Findings: Data for 623 adults with COVID-19 who were mechanically ventilated between 01 March 2020 and 31 August 2020 were analysed. Mortality, intensity of mechanical ventilation and severity of organ injury increased with severity of hypoxaemia. Median tidal volume per kg across all mandatory breaths was 5.6 [IQR 4.7-6.6] mL/kg based on reported body weight, but 7.0 [IQR 6.0-8.4] mL/kg based on calculated ideal body weight. Non-resolution of hypoxaemia over the first week of IMV was associated with higher ICU mortality (59.4% versus 16.3%; P<0.001). Of patients ventilated in prone position, only 44% showed a positive oxygenation response. Non-responders to prone position showed higher D-dimers, troponin, cardiovascular SOFA, and higher ICU mortality (68.9% versus 29.7%; P<0.001). Multivariate analysis showed prone non-responsiveness being independently associated with higher lactate (hazard ratio 1.41, 95% CI 1.03–1.93), respiratory SOFA (hazard ratio 3.59, 95% CI 1.83–7.04) and cardiovascular SOFA score (hazard ratio 1.37, 95% CI 1.05–1.80).

Interpretation: A sizeable proportion of patients with progressive worsening of hypoxaemia were also refractory to evid

Working paper
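
The gap between 5.6 mL/kg by reported weight and 7.0 mL/kg by ideal body weight comes down to which weight enters the denominator; the sketch below (Python) uses the widely used ARDSnet ideal-body-weight formula, which is an assumption about the study's exact convention, with hypothetical patient numbers.

    def ideal_body_weight_kg(height_cm, male=True):
        """ARDSnet/ARMA predicted body weight (assumed convention)."""
        base = 50.0 if male else 45.5
        return base + 0.91 * (height_cm - 152.4)

    tidal_volume_ml = 480.0  # hypothetical delivered tidal volume
    reported_kg = 86.0       # hypothetical actual body weight
    ibw = ideal_body_weight_kg(175.0, male=True)

    print(f"{tidal_volume_ml / reported_kg:.1f} mL/kg of reported weight")
    print(f"{tidal_volume_ml / ibw:.1f} mL/kg of ideal body weight ({ibw:.1f} kg)")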

Ortega San Miguel P, Zhao T, Faisal AA, 2020, HYGRIP: Full-stack characterisation of neurobehavioural signals (fNIRS, EEG, EMG, force and breathing) during a bimanual grip force control task, Frontiers in Neuroscience, Vol: 14, Pages: 1-10, ISSN: 1662-453X

Brain-computer interfaces (BCIs) have achieved important milestones in recent years, but most breakthroughs in the continuous control of movement have focused on invasive neural interfaces with motor cortex or peripheral nerves. In contrast, non-invasive BCIs have made progress primarily in continuous decoding using event-related data, while the direct decoding of movement commands or muscle force from brain data remains an open challenge. Multi-modal signals from the human cortex, obtained from mobile brain imaging that combines oxygenation and electrical neuronal signals, do not yet exploit their full potential, due to the lack of computational techniques able to fuse and decode these hybrid measurements. To stimulate the research community and bring machine learning techniques closer to the state of the art in artificial intelligence, we release herewith a holistic data set of hybrid non-invasive measures for continuous force decoding: the Hybrid Dynamic Grip (HYGRIP) data set. We aim to provide a complete data set that comprises the target force for the left/right hand; cortical brain signals in the form of electroencephalography (EEG) with high temporal resolution and functional near-infrared spectroscopy (fNIRS), which captures in higher spatial resolution a BOLD-like cortical brain response; the muscle activity (EMG) of the grip muscles; the force generated at the grip sensor (force); as well as confounding noise sources, such as breathing and eye-movement activity during the task. In total, 14 right-handed subjects performed a uni-manual dynamic grip force task within 25-50% of each hand's maximum voluntary contraction. HYGRIP is intended as a benchmark with two open challenges and research questions for grip-force decoding. First, the exploitation and fusion of data from brain signals spanning very different time scales, as EEG changes about three orders of magnitude faster than fNIRS. Second, the decoding of whole-brain signals associated with the use of

Journal article
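
A minimal sketch (Python) of the first HYGRIP challenge noted above, fusing signals whose time scales differ by orders of magnitude: decimate an EEG-rate signal onto an fNIRS-rate time base before fusion. The sampling rates are illustrative, not the data set's specification.

    import numpy as np
    from scipy.signal import resample_poly

    fs_eeg, fs_fnirs = 500, 10  # Hz (assumed rates)
    eeg = np.random.default_rng(0).normal(size=fs_eeg * 30)    # 30 s of "EEG"
    fnirs = np.random.default_rng(1).normal(size=fs_fnirs * 30)

    eeg_slow = resample_poly(eeg, up=fs_fnirs, down=fs_eeg)  # anti-aliased
    fused = np.stack([eeg_slow, fnirs])  # common 10 Hz time base for a decoder
    print(fused.shape)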

Haar Millo S, Faisal A, 2020, Brain activity reveals multiple motor-learning mechanisms in a real-world task, Frontiers in Human Neuroscience, Vol: 14, ISSN: 1662-5161

Many recent studies found signatures of motor learning in neural beta oscillations (13–30 Hz), and specifically in the post-movement beta rebound (PMBR). All these studies were in controlled laboratory tasks in which the task was designed to induce the studied learning mechanism. Interestingly, these studies reported opposing dynamics of the PMBR magnitude over learning for error-based and reward-based tasks (increase versus decrease, respectively). Here we explored the PMBR dynamics during real-world motor-skill learning in a billiards task using mobile brain imaging. Our EEG recordings highlight opposing dynamics of PMBR magnitudes (increase versus decrease) between different subjects performing the same task. The groups of subjects, defined by their neural dynamics, also showed the behavioural differences expected for different learning mechanisms. Our results suggest that, when faced with the complexity of the real world, different subjects might use different learning mechanisms for the same complex task. We speculate that all subjects combine multi-modal mechanisms of learning, but different subjects have different predominant learning mechanisms.

Journal article
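
A minimal sketch (Python) of how a PMBR magnitude can be quantified, as studied above: band-pass the EEG to the beta band (13-30 Hz), take the Hilbert envelope, and average it in a post-movement window. The synthetic signal, sampling rate and window are placeholders, not the study's pipeline.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250  # Hz (assumed)
    t = np.arange(0, 4, 1 / fs)  # one 4-second trial
    eeg = np.random.default_rng(0).normal(size=t.size)

    b, a = butter(4, [13, 30], btype="bandpass", fs=fs)
    beta = filtfilt(b, a, eeg)
    envelope = np.abs(hilbert(beta))  # instantaneous beta amplitude

    post_move = envelope[(t > 2.0) & (t < 3.0)]  # window after movement ends
    print(f"PMBR magnitude (mean beta envelope): {post_move.mean():.3f}")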

Rito Lima I, Haar Millo S, Di Grassi L, Faisal A et al., 2020, Neurobehavioural signatures in race car driving: a case study, Scientific Reports, Vol: 10, Pages: 1-9, ISSN: 2045-2322

Recent technological developments in mobile brain and body imaging are enabling new frontiers of real-world neuroscience. Simultaneous recordings of body movement and brain activity from highly skilled individuals as they demonstrate their exceptional skills in real-world settings can shed new light on the neurobehavioural structure of human expertise. Driving is a real-world skill which many of us acquire to different levels of expertise. Here we ran a case study on a subject with the highest level of driving expertise: a Formula E Champion. We studied the driver's neural and motor patterns while he drove a sports car on the "Top Gear" race track under extreme conditions (high speed, low visibility, low temperature, wet track). His brain activity, eye movements and hand/foot movements were recorded. Brain activity in the delta, alpha, and beta frequency bands showed causal relation to hand movements. We herein demonstrate the feasibility of using mobile brain and body imaging even in very extreme conditions (race car driving) to study the sensory inputs, motor outputs, and brain states which characterise complex human skills.

Journal article
