Imperial College London

Dr A. Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373
a.faisal
Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



Publications


190 results found

Pieritz S, Khwaja M, Faisal A, Matic A et al., 2021, Personalised Recommendations in Mental Health Apps: The Impact of Autonomy and Data Sharing, ACM Conference on Human Factors in Computing Systems (CHI), Publisher: ACM

The recent growth of digital interventions for mental well-being prompts a call-to-arms to explore the delivery of personalised recommendations from a user's perspective. In a randomised placebo study with a two-way factorial design, we analysed the difference between an autonomous user experience as opposed to personalised guidance, with respect to both users' preference and their actual usage of a mental well-being app. Furthermore, we explored users' preference in sharing their data for receiving personalised recommendations, by juxtaposing questionnaires and mobile sensor data. Interestingly, self-reported results indicate a preference for personalised guidance, whereas behavioural data suggests that a blend of autonomous choice and recommended activities results in higher engagement. Additionally, although users reported a strong preference for filling out questionnaires instead of sharing their mobile data, the data source did not have any impact on the actual app use. We discuss the implications of our findings and provide takeaways for designers of mental well-being applications.

Conference paper

Ortega San Miguel P, Faisal AA, 2021, HemCNN: Deep Learning enables decoding of fNIRS cortical signals in hand grip motor tasks, IEEE NER

Conference paper

Wei X, Ortega P, Faisal A, 2021, Inter-subject Deep Transfer Learning for Motor Imagery EEG Decoding, IEEE Neural Engineering (NER), Publisher: IEEE

Conference paper

Ortega San Miguel P, Zhao T, Faisal AA, 2021, Deep Real-Time Decoding of bimanual grip force from EEG & fNIRS, IEEE NER

Conference paper

Denghao L, Ortega San Miguel P, Faisal AA, 2021, Model-Agnostic Meta-Learning for EEG Motor Imagery Decoding in Brain-Computer-Interfacing, IEEE NER

Conference paper

Shafti SA, Faisal A, 2021, Non-invasive Cognitive-level Human Interfacing for the Robotic Restoration of Reaching & Grasping, 10th International IEEE EMBS Conference on Neural Engineering

Conference paper

Wannawas N, Subramanian M, Faisal A, 2021, Neuromechanics-based Deep Reinforcement Learning of Neurostimulation Control in FES cycling

Conference paper

Subramanian M, Park S, Orlov P, Shafti A, Faisal A et al., 2021, Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform, 10th International IEEE EMBS Conference on Neural Engineering

Conference paper

Haar Millo S, Sundar G, Faisal A, 2021, Embodied virtual reality for the study of real-world motor learning, PLoS One, ISSN: 1932-6203

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and acceptability of wearable motion tracking for inpatient monitoring using smartwatches, Sensors, Vol: 20, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end, we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), and optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and, in the second study, surveyed the experiences and attitudes of hospital patients (N = 44) and staff (N = 15) following a clinical test in which patients wore smartwatches for 1.5–24 h. Results indicate that for acceleration, Xsens is more accurate than the Apple Series 5 and 3 smartwatches and Axivity AX3 (RMSE 1.66 ± 0.12 m·s−2, R2 0.78 ± 0.02; RMSE 2.29 ± 0.09 m·s−2, R2 0.56 ± 0.01; RMSE 2.14 ± 0.09 m·s−2, R2 0.49 ± 0.02; RMSE 4.12 ± 0.18 m·s−2, R2 0.34 ± 0.01, respectively). For angular velocity, the Series 5 and 3 smartwatches achieved similar performance to Xsens, with RMSE 0.22 ± 0.02 rad·s−1, R2 0.99 ± 0.00, and RMSE 0.18 ± 0.01 rad·s−1, R2 1.00 ± 0.00, respectively. Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that consumer smartwatches achieved moderate to strong levels of accuracy compared to the laboratory gold standard and are acceptable for pervasive monitoring of motion/behaviour within hospital settings.

Journal article
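The abstract above compares sensors by RMSE and R2 between each wearable's signal and the gold-standard reference. As a minimal illustrative sketch (not the authors' analysis code; the signals here are synthetic), the two metrics can be computed as:

```python
import numpy as np

def rmse(reference: np.ndarray, measured: np.ndarray) -> float:
    """Root-mean-square error between a gold-standard and a sensor signal."""
    return float(np.sqrt(np.mean((measured - reference) ** 2)))

def r_squared(reference: np.ndarray, measured: np.ndarray) -> float:
    """Coefficient of determination of the sensor signal against the reference."""
    ss_res = np.sum((reference - measured) ** 2)
    ss_tot = np.sum((reference - np.mean(reference)) ** 2)
    return float(1.0 - ss_res / ss_tot)

# Toy example: a hypothetical noiseless sensor reproduces the reference exactly.
t = np.linspace(0, 1, 100)
accel_ref = np.sin(2 * np.pi * t)   # synthetic reference acceleration trace
accel_meas = accel_ref.copy()       # hypothetical perfect sensor reading
assert rmse(accel_ref, accel_meas) == 0.0
assert r_squared(accel_ref, accel_meas) == 1.0
```

A lower RMSE and an R2 closer to 1 indicate closer agreement with the reference, which is how the Xsens and smartwatch figures in the abstract are ranked.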

Li L, Faisal A, 2020, Bayesian distributional policy gradients, AAAI Conference on Artificial Intelligence, Publisher: AAAI

Distributional reinforcement learning (distributional RL) maintains the entire probability distribution of the reward-to-go, i.e. the return, providing a more principled approach to account for the uncertainty associated with policy performance, which may be beneficial for trading off exploration and exploitation and for policy learning in general. Previous work in distributional RL focused mainly on computing state-action-return distributions; here we model state-return distributions. This enables us to translate successful conventional RL algorithms that are based on state values into distributional RL. We formulate the distributional Bellman operation as an inference-based auto-encoding process that minimises Wasserstein metrics between target and model return distributions. Our algorithm, BDPG (Bayesian Distributional Policy Gradients), uses adversarial training in joint-contrastive learning to learn a variational posterior from the returns. Moreover, we can now interpret the return-prediction uncertainty as an information gain, which allows us to obtain a new curiosity measure that helps BDPG steer exploration actively and efficiently. In our experiments, on Atari 2600 games and MuJoCo tasks, including well-known hard-exploration tasks, we demonstrate how BDPG learns generally faster and with higher asymptotic performance than reference distributional RL algorithms.

Conference paper
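The BDPG abstract describes minimising Wasserstein metrics between target and model return distributions. This is not the paper's implementation, but as a hedged sketch: for one-dimensional empirical distributions of equal sample size, the 1-Wasserstein distance reduces to the mean absolute difference of the sorted samples.

```python
import numpy as np

def wasserstein_1(samples_p: np.ndarray, samples_q: np.ndarray) -> float:
    """Empirical 1-Wasserstein distance between two equal-size 1-D sample
    sets: mean absolute difference between the sorted samples (quantiles)."""
    return float(np.mean(np.abs(np.sort(samples_p) - np.sort(samples_q))))

rng = np.random.default_rng(0)
model_returns = rng.normal(loc=0.0, scale=1.0, size=1000)   # model's return samples
target_returns = rng.normal(loc=0.5, scale=1.0, size=1000)  # bootstrapped target samples
d = wasserstein_1(model_returns, target_returns)
# For two equal-variance Gaussians the distance is roughly the mean shift (0.5).
assert 0.3 < d < 0.7
```

A distributional Bellman update of the kind the abstract sketches would drive a loss like this toward zero, so the model's return distribution matches the bootstrapped target rather than just its mean.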

Gallego-Delgado P, James R, Browne E, Meng J, Umashankar S, Tan L, Picon C, Mazarakis ND, Faisal AA, Howell OW, Reynolds R et al., 2020, Neuroinflammation in the normal-appearing white matter (NAWM) of the multiple sclerosis brain causes abnormalities at the nodes of Ranvier., PLoS Biology, Vol: 18, Pages: 1-36, ISSN: 1544-9173

Changes to the structure of nodes of Ranvier in the normal-appearing white matter (NAWM) of multiple sclerosis (MS) brains are associated with chronic inflammation. We show that the paranodal domains in MS NAWM are longer on average than control, with Kv1.2 channels dislocated into the paranode. These pathological features are reproduced in a model of chronic meningeal inflammation generated by the injection of lentiviral vectors for the lymphotoxin-α (LTα) and interferon-γ (IFNγ) genes. We show that tumour necrosis factor (TNF), IFNγ, and glutamate can provoke paranodal elongation in cerebellar slice cultures, which could be reversed by an N-methyl-D-aspartate (NMDA) receptor blocker. When these changes were inserted into a computational model to simulate axonal conduction, a rapid decrease in velocity was observed, reaching conduction failure in small diameter axons. We suggest that glial cells activated by pro-inflammatory cytokines can produce high levels of glutamate, which triggers paranodal pathology, contributing to axonal damage and conduction deficits.

Journal article

Auepanwiriyakul C, Waibel S, Songa J, Bentley P, Faisal AA et al., 2020, Accuracy and Acceptability of Wearable Motion Tracking Smartwatches for Inpatient Monitoring, Sensors, ISSN: 1424-8220

Inertial Measurement Units (IMUs) within an everyday consumer smartwatch offer a convenient and low-cost method to monitor the natural behaviour of hospital patients. However, their accuracy at quantifying limb motion, and clinical acceptability, have not yet been demonstrated. To this end, we conducted a two-stage study: First, we compared the inertial accuracy of wrist-worn IMUs, both research-grade (Xsens MTw Awinda, and Axivity AX3) and consumer-grade (Apple Watch Series 3 and 5), relative to gold-standard optical motion tracking (OptiTrack). Given the moderate to strong performance of the consumer-grade sensors, we then evaluated this sensor and, in the second study, surveyed the experiences and attitudes of hospital patients (N=44) and staff (N=15) following a clinical test in which patients wore smartwatches for 1.5-24 hours. Results indicate that for acceleration, Xsens is more accurate than the Apple smartwatches and Axivity AX3 (RMSE 0.17+/-0.01 g, R2 0.88+/-0.01; RMSE 0.22+/-0.01 g, R2 0.64+/-0.01; RMSE 0.42+/-0.01 g, R2 0.43+/-0.01, respectively). However, for angular velocity, the smartwatches are marginally more accurate than Xsens (RMSE 1.28+/-0.01 rad/s, R2 0.85+/-0.00; RMSE 1.37+/-0.01 rad/s, R2 0.82+/-0.01, respectively). Surveys indicated that in-patients and healthcare professionals strongly agreed that wearable motion sensors are easy to use, comfortable, unobtrusive, suitable for long-term use, and do not cause anxiety or limit daily activities. Our results suggest that smartwatches achieved moderate to strong levels of accuracy compared to a gold-standard reference and are likely to be accepted as a pervasive measure of motion/behaviour within hospitals.

Journal article

Haar Millo S, van Assel C, Faisal A, 2020, Motor learning in real-world pool billiards, Scientific Reports, Vol: 10, Pages: 1-13, ISSN: 2045-2322

The neurobehavioral mechanisms of human motor-control and learning evolved in free-behaving, real-life settings, yet they are studied mostly in reductionistic lab-based experiments. Here we take a step towards a more real-world motor neuroscience, using wearables for naturalistic full-body motion-tracking and the sport of pool billiards to frame a real-world skill-learning experiment. First, we asked if well-known features of motor learning in lab-based experiments generalize to a real-world task. We found similarities in many features, such as multiple learning rates and the relationship between task-related variability and motor learning. Our data-driven approach reveals the structure and complexity of movement, variability, and motor learning, enabling an in-depth understanding of the structure of motor learning in three ways: First, while we expected most of the movement learning to be done by the cue-wielding arm, we find that motor learning affects the whole body, changing motor-control from head to toe. Second, during learning, all subjects decreased their movement variability and their variability in the outcome. Subjects who were initially more variable were also more variable after learning. Lastly, when screening the link across subjects between initial variability in individual joints and learning, we found that only the initial variability in the right forearm supination shows a significant correlation to the subjects' learning rates. This is in line with the relationship between learning and variability: while learning leads to an overall reduction in movement variability, only initial variability in specific task-relevant dimensions can facilitate faster learning.

Journal article

Patel BV, Haar S, Handslip R, Lee TM-L, Patel S, Harston JA, Hosking-Jervis F, Kelly D, Sanderson B, Bogatta B, Tatham K, Welters I, Camporota L, Gordon AC, Komorowski M, Antcliffe D, Prowle JR, Puthucheary Z, Faisal AA et al., 2020, Natural history, trajectory, and management of mechanically ventilated COVID-19 patients in the United Kingdom, Publisher: Cold Spring Harbor Laboratory

Background: To date, the description of mechanically ventilated patients with Coronavirus Disease 2019 (COVID-19) has focussed on admission characteristics, with no consideration of the dynamic course of the disease. Here, we present a data-driven analysis of granular, daily data from a representative proportion of patients undergoing invasive mechanical ventilation (IMV) within the United Kingdom (UK) to evaluate the complete natural history of COVID-19.

Methods: We included adult patients undergoing IMV within 48 hours of ICU admission with complete clinical data until death or ICU discharge. We examined factors and trajectories that determined disease progression and responsiveness to ARDS interventions. Our data visualisation tool is available as a web-based widget (https://www.CovidUK.ICU).

Findings: Data for 623 adults with COVID-19 who were mechanically ventilated between 01 March 2020 and 31 August 2020 were analysed. Mortality, intensity of mechanical ventilation and severity of organ injury increased with severity of hypoxaemia. Median tidal volume per kg across all mandatory breaths was 5.6 [IQR 4.7-6.6] mL/kg based on reported body weight, but 7.0 [IQR 6.0-8.4] mL/kg based on calculated ideal body weight. Non-resolution of hypoxaemia over the first week of IMV was associated with higher ICU mortality (59.4% versus 16.3%; P<0.001). Of patients ventilated in prone position, only 44% showed a positive oxygenation response. Non-responders to prone position showed higher D-Dimers, troponin, cardiovascular SOFA, and higher ICU mortality (68.9% versus 29.7%; P<0.001). Multivariate analysis showed prone non-responsiveness being independently associated with higher lactate (hazard ratio 1.41, 95% CI 1.03–1.93), respiratory SOFA (hazard ratio 3.59, 95% CI 1.83–7.04), and cardiovascular SOFA score (hazard ratio 1.37, 95% CI 1.05–1.80).

Interpretation: A sizeable proportion of patients with progressive worsening of hypoxaemia were also refractory to evid…

Working paper

Ortega San Miguel P, Zhao T, Faisal AA, 2020, HYGRIP: Full-stack characterisation of neurobehavioural signals (fNIRS, EEG, EMG, force and breathing) during a bimanual grip force control task, Frontiers in Neuroscience, Vol: 14, Pages: 1-10, ISSN: 1662-453X

Brain-computer interfaces (BCIs) have achieved important milestones in recent years, but the major number of breakthroughs in the continuous control of movement have focused on invasive neural interfaces with motor cortex or peripheral nerves. In contrast, non-invasive BCIs have made progress primarily in continuous decoding using event-related data, while the direct decoding of movement commands or muscle force from brain data remains an open challenge. Multi-modal signals from human cortex, obtained from mobile brain imaging that combines oxygenation and electrical neuronal signals, do not yet exploit their full potential, due to the lack of computational techniques able to fuse and decode these hybrid measurements. To stimulate the research community and bring machine learning techniques closer to the state-of-the-art in artificial intelligence, we release herewith a holistic data set of hybrid non-invasive measures for continuous force decoding: the Hybrid Dynamic Grip (HYGRIP) data set. We aim to provide a complete data set that comprises the target force for the left/right hand; cortical brain signals in the form of electroencephalography (EEG) with high temporal resolution and functional near-infrared spectroscopy (fNIRS), which captures a BOLD-like cortical brain response at higher spatial resolution; the muscle activity (EMG) of the grip muscles; the force generated at the grip sensor; as well as confounding noise sources, such as breathing and eye-movement activity during the task. In total, 14 right-handed subjects performed a uni-manual dynamic grip-force task within 25–50% of each hand's maximum voluntary contraction. HYGRIP is intended as a benchmark with two open challenges and research questions for grip-force decoding. First, the exploitation and fusion of data from brain signals spanning very different time-scales, as EEG changes about three orders of magnitude faster than fNIRS. Second, the decoding of whole-brain signals associated with the use of…

Journal article

Haar Millo S, Faisal A, 2020, Brain activity reveals multiple motor-learning mechanisms in a real-world task, Frontiers in Human Neuroscience, Vol: 14, ISSN: 1662-5161

Many recent studies found signatures of motor learning in neural beta oscillations (13–30 Hz), and specifically in the post-movement beta rebound (PMBR). All these studies were in controlled laboratory tasks in which the task was designed to induce the studied learning mechanism. Interestingly, these studies reported opposing dynamics of the PMBR magnitude over learning for the error-based and reward-based tasks (increase versus decrease, respectively). Here we explored the PMBR dynamics during real-world motor-skill learning in a billiards task using mobile brain imaging. Our EEG recordings highlight the opposing dynamics of PMBR magnitudes (increase versus decrease) between different subjects performing the same task. The groups of subjects, defined by their neural dynamics, also showed behavioural differences expected for different learning mechanisms. Our results suggest that, when faced with the complexity of the real world, different subjects might use different learning mechanisms for the same complex task. We speculate that all subjects combine multi-modal mechanisms of learning, but different subjects have different predominant learning mechanisms.

Journal article

Rito Lima I, Haar Millo S, Di Grassi L, Faisal A et al., 2020, Neurobehavioural signatures in race car driving: a case study, Scientific Reports, Vol: 10, Pages: 1-9, ISSN: 2045-2322

Recent technological developments in mobile brain and body imaging are enabling new frontiers of real-world neuroscience. Simultaneous recordings of body movement and brain activity from highly skilled individuals as they demonstrate their exceptional skills in real-world settings, can shed new light on the neurobehavioural structure of human expertise. Driving is a real-world skill which many of us acquire to different levels of expertise. Here we ran a case-study on a subject with the highest level of driving expertise—a Formula E Champion. We studied the driver’s neural and motor patterns while he drove a sports car on the “Top Gear” race track under extreme conditions (high speed, low visibility, low temperature, wet track). His brain activity, eye movements and hand/foot movements were recorded. Brain activity in the delta, alpha, and beta frequency bands showed causal relation to hand movements. We herein demonstrate the feasibility of using mobile brain and body imaging even in very extreme conditions (race car driving) to study the sensory inputs, motor outputs, and brain states which characterise complex human skills.

Journal article

Shafti SA, Tjomsland J, Dudley W, Faisal A et al., 2020, Real-world human-robot collaborative reinforcement learning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, ISSN: 2153-0866

The intuitive collaboration of humans and intelligent robots (embodied AI) in the real world is an essential objective for many desirable applications of robotics. Whilst there is much research regarding explicit communication, we focus on how humans and robots interact implicitly, at the motor-adaptation level. We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and only solvable through collaboration, by limiting the actions to rotations of two orthogonal axes and assigning each axis to one player. This results in neither the human nor the agent being able to solve the game on their own. We use deep reinforcement learning for the control of the robotic agent, and achieve results within 30 minutes of real-world play, without any type of pre-training. We then use this setup to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game. We present results on how co-policy learning occurs over time between the human and the robotic agent, resulting in each participant's agent serving as a representation of how they would play the game. This allows us to relate a person's success when playing with agents other than their own, by comparing the policy of the agent with that of their own agent.

Conference paper

Shafti A, Haar S, Zaldivar RM, Guilleminot P, Faisal AA et al., 2020, Learning to play the piano with the Supernumerary Robotic 3rd Thumb, Publisher: Cold Spring Harbor Laboratory

We wanted to study the ability of our brains and bodies to be augmented by supernumerary robot limbs, here extra fingers. We developed a mechanically highly functional supernumerary robotic 3rd thumb actuator, the SR3T, and interfaced it with human users enabling them to play the piano with 11 fingers. We devised a set of measurement protocols and behavioural “biomarkers”, the Human Augmentation Motor Coordination Assessment (HAMCA), which allowed us a priori to predict how well each individual human user could, after training, play the piano with a two-thumbs-hand. To evaluate augmented music playing ability we devised a simple musical score, as well as metrics for assessing the accuracy of playing the score. We evaluated the SR3T (supernumerary robotic 3rd thumb) on 12 human subjects including 6 naïve and 6 experienced piano players. We demonstrated that humans can learn to play the piano with a 6-fingered hand within one hour of training. For each subject we could predict individually, based solely on their HAMCA performance before training, how well they were able to perform with the extra robotic thumb, after training (training end-point performance). Our work demonstrates the feasibility of robotic human augmentation with supernumerary robotic limbs within short time scales. We show how linking the neuroscience of motor learning with dexterous robotics and human-robot interfacing can be used to inform a priori how far individual motor impaired patients or healthy manual workers could benefit from robotic augmentation solutions.

Working paper

Albert-Smet I, McPherson D, Navaie W, Stocker T, Faisal AA et al., 2020, Regulations & exemptions during the COVID-19 pandemic for new medical technology, health services & data

The rapid evolution of the COVID-19 pandemic has sparked a large unmet need for new or additional medical technology and healthcare services to be made available urgently. Healthcare, Academic, Government and Industry organizations and individuals have risen to this challenge by designing, developing, manufacturing or implementing innovation. However, both they and healthcare stakeholders are hampered, as it is unclear how to introduce and deploy the products of this innovation quickly and legally within the healthcare system. Our paper outlines the key regulations and processes innovators need to comply with, and how these change during a public health emergency via dedicated exemptions. Our work includes references to the formal documents regarding UK healthcare regulation and governance, and is meant to serve as a guide for those who wish to act quickly but are uncertain of the legal and regulatory pathways that allow a new device or service to be fast-tracked.

Report

Haar S, Sundar G, Faisal A, 2020, Embodied virtual reality for the study of real-world motor learning, Publisher: bioRxiv

Background: The motor learning literature focuses on relatively simple laboratory tasks, due to their highly controlled manner and the ease of applying different manipulations to induce learning and adaptation. In recent work we introduced a billiards paradigm and demonstrated the feasibility of real-world neuroscience using wearables for naturalistic full-body motion tracking and mobile brain imaging. Here we developed an embodied virtual reality (VR) environment for our real-world billiards paradigm, which allows us to control the visual feedback for this complex real-world task while maintaining the sense of embodiment.

Methods: The setup was validated by comparing real-world ball trajectories with the embodied VR trajectories calculated by the physics engine. We then ran our real-world learning protocol in the embodied VR. 10 healthy human subjects played repeated trials of the same billiard shot while they held the physical cue and hit a physical ball on the table while seeing it all in VR.

Results: We found comparable learning trends in the embodied VR to those we previously reported in the real-world task.

Conclusions: Embodied VR can be used for learning real-world tasks in a highly controlled VR environment, which enables applying visual manipulations, common in laboratory tasks and in rehabilitation, to a real-world full-body task. Such a setup can be used for rehabilitation, where the use of VR is gaining popularity but transfer to the real world is currently limited, presumably due to the lack of embodiment. The embodied VR enables us to manipulate feedback and apply perturbations to isolate and assess interactions between specific motor-learning component mechanisms, thus enabling us to address current questions of motor learning in real-world tasks.

Working paper

Haar S, Faisal A, 2020, Neural biomarkers of multiple motor-learning mechanisms in a real-world task, Publisher: bioRxiv

Many recent studies found signatures of motor learning in neural beta oscillations (13–30 Hz), and specifically in the post-movement beta rebound (PMBR). All these studies were in simplified laboratory tasks in which learning was either error-based or reward-based. Interestingly, these studies reported opposing dynamics of the PMBR magnitude over learning for the error-based and reward-based tasks (increase versus decrease, respectively). Here we explored the PMBR dynamics during real-world motor-skill learning in a billiards task using mobile brain imaging. Our EEG recordings highlight opposing dynamics of PMBR magnitudes between different subjects performing the same task. The groups of subjects, defined by their neural dynamics, also showed behavioral differences expected for error-based versus reward-based learning. Our results suggest that, when faced with the complexity of the real world, different subjects might use different learning mechanisms for the same complex task. We speculate that all subjects combine multi-modal mechanisms of learning, but different subjects have different predominant learning mechanisms.

Working paper

Shafti A, Tjomsland J, Dudley W, Faisal AA et al., 2020, Real-world human-robot collaborative reinforcement learning, Publisher: arXiv

The intuitive collaboration of humans and intelligent robots (embodied AI) in the real world is an essential objective for many desirable applications of robotics. Whilst there is much research regarding explicit communication, we focus on how humans and robots interact implicitly, at the motor-adaptation level. We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and only solvable through collaboration, by limiting the actions to rotations of two orthogonal axes and assigning each axis to one player. This results in neither the human nor the agent being able to solve the game on their own. We use a state-of-the-art reinforcement learning algorithm for the robotic agent, and achieve results within 30 minutes of real-world play, without any type of pre-training. We then use this system to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game. We present results on how co-policy learning occurs over time between the human and the robotic agent, resulting in each participant's agent serving as a representation of how they would play the game. This allows us to relate a person's success when playing with agents other than their own, by comparing the policy of the agent with that of their own agent.

Working paper

Bachtiger P, Plymen CM, Pabari PA, Howard JP, Whinnett ZI, Opoku F, Janering S, Faisal AA, Francis DP, Peters NSet al., 2020, Artificial intelligence, data sensors and interconnectivity: future Opportunities for heart failure, Cardiac Failure Review, Vol: 6, Pages: e11-e11, ISSN: 2057-7540

A higher proportion of patients with heart failure have benefitted from a wide and expanding variety of sensor-enabled implantable devices than any other patient group. These patients can now also take advantage of the ever-increasing availability and affordability of consumer electronics. Wearable, on- and near-body sensor technologies, much like implantable devices, generate massive amounts of data. The connectivity of all these devices has created opportunities for pooling data from multiple sensors - so-called interconnectivity - and for artificial intelligence to provide new diagnostic, triage, risk-stratification and disease management insights for the delivery of better, more personalised and cost-effective healthcare. Artificial intelligence is also bringing important and previously inaccessible insights from our conventional cardiac investigations. The aim of this article is to review the convergence of artificial intelligence, sensor technologies and interconnectivity and the way in which this combination is set to change the care of patients with heart failure.

Journal article

Abbott W, Harston J, Faisal A, 2020, Linear Embodied Saliency: a Model of Full-Body Kinematics-based Visual Attention, bioRxiv


Journal article

Deisenroth MP, Faisal AA, Ong CS, 2020, Mathematics for Machine Learning, Publisher: Cambridge University Press, ISBN: 9781108455145

Book

Beyret B, Shafti A, Faisal AA, 2020, Dot-to-Dot: Explainable Hierarchical Reinforcement Learning for Robotic Manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 5014-5019, ISSN: 2153-0858

Conference paper

Beyret B, Shafti SA, Faisal A, 2020, Dot-to-dot: explainable hierarchical reinforcement learning for robotic manipulation, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 1-6, ISSN: 2153-0866

Robotic systems are ever more capable of automation and fulfilment of complex tasks, particularly with reliance on recent advances in intelligent systems, deep learning and artificial intelligence in general. However, as robots and humans come closer together in their interactions, the matter of interpretability, or explainability, of robot decision-making processes for the human grows in importance. A successful interaction and collaboration would only be possible through mutual understanding of underlying representations of the environment and the task at hand. This is currently a challenge in deep learning systems. We present a hierarchical deep reinforcement learning system, consisting of a low-level agent handling the large actions/states space of a robotic system efficiently, by following the directives of a high-level agent which is learning the high-level dynamics of the environment and task. This high-level agent forms a representation of the world and task at hand that is interpretable for a human operator. The method, which we call Dot-to-Dot, is tested on a MuJoCo-based model of the Fetch Robotics Manipulator, as well as a Shadow Hand, to test its performance. Results show efficient learning of complex actions/states spaces by the low-level agent, and an interpretable representation of the task and decision-making process learned by the high-level agent.

Conference paper
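The Dot-to-Dot abstract describes a high-level agent issuing interpretable directives ("dots") that a low-level agent follows. This is not the paper's implementation; as a hedged toy sketch under invented names (`high_level_subgoal`, `low_level_action`, a 1-D state), the two-level control loop looks like:

```python
import numpy as np

def high_level_subgoal(state: float, goal: float, step: float = 1.0) -> float:
    """Hypothetical high-level agent: emit an interpretable intermediate
    waypoint ('dot') between the current state and the final goal."""
    return state + float(np.clip(goal - state, -step, step))

def low_level_action(state: float, subgoal: float) -> float:
    """Hypothetical low-level controller: take a small action toward the
    subgoal, standing in for the learned low-level policy."""
    return float(np.clip(subgoal - state, -0.25, 0.25))

state, goal = 0.0, 3.0
trajectory = [state]
for _ in range(40):  # roll out: waypoints from the top level, steps from the bottom
    subgoal = high_level_subgoal(state, goal)
    state += low_level_action(state, subgoal)
    trajectory.append(state)
assert abs(trajectory[-1] - goal) < 1e-6
```

The interpretability claim rests on the subgoals: a human operator can read the sequence of waypoints, whereas the low-level actions remain an opaque learned mapping.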

Lima IR, Haar S, Di Grassi L, Faisal A et al., 2019, Neurobehavioural signatures in race car driving

Recent technological developments in mobile brain and body imaging are enabling new frontiers of real-world neuroscience. Simultaneous recordings of body movement and brain activity from highly skilful individuals, as they demonstrate their exceptional skills in real-world settings, can shed new light on the neurobehavioural structure of human expertise. Driving is a real-world skill which many of us acquire to different levels of expertise. Here we ran a case study on a subject with the highest level of driving expertise: a Formula E Champion. We studied the expert driver's neural and motor patterns while he drove a sports car on the "Top Gear" race track under extreme conditions (high speed, low visibility, low temperature, wet track). His brain activity, eye movements and hand/foot movements were recorded. Brain activity in the delta, alpha, and beta frequency bands showed causal relation to hand movements. We demonstrate, in summary, a method for collecting human ethomic (Ethology + Omics) data, even in extreme situations (race-track driving), that encompasses information on the sensory inputs and motor outputs of the brain as well as brain state, to characterise complex human skills.

Working paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
