Imperial College London

Dr A. Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 | a.faisal | Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



Publications


222 results found

Festor P, Jia Y, Gordon A, Faisal A, Habil I, Komorowski M, et al., 2022, Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment, BMJ Health & Care Informatics, ISSN: 2632-1009

Study objectives: Establishing confidence in the safety of AI-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. Methods: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios and created safety constraints, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. Results: Using a subset of the MIMIC-III database, we demonstrated that our previously published "AI Clinician" recommended fewer hazardous decisions than human clinicians in three out of our four pre-defined clinical scenarios, while the difference was not statistically significant in the fourth scenario. Then, we modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance. Discussion: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data was curated to limit the impact of this confounder. Conclusion: These advances provide a use case for the systematic safety assurance of AI-based clinical systems, towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.

Journal article

Currie SP, Ammer JJ, Premchand B, Dacre J, Wu Y, Eleftheriou C, Colligan M, Clarke T, Mitchell L, Faisal AA, Hennig MH, Duguid I, et al., 2022, Movement-specific signaling is differentially distributed across motor cortex layer 5 projection neuron classes, Cell Reports, Vol: 39, ISSN: 2211-1247

Journal article

Liu Y, Leib R, Dudley W, Shafti A, Faisal AA, Franklin DW, et al., 2022, The role of haptic communication in dyadic collaborative object manipulation tasks, Publisher: arXiv

Intuitive and efficient physical human-robot collaboration relies on the mutual observability of the human and the robot, i.e. the two entities being able to interpret each other's intentions and actions. This is remedied by a myriad of methods involving human sensing or intention decoding, as well as human-robot turn-taking and sequential task planning. However, the physical interaction establishes a rich channel of communication through forces, torques and haptics in general, which is often overlooked in industrial implementations of human-robot interaction. In this work, we investigate the role of haptics in human collaborative physical tasks, to identify how to integrate physical communication in human-robot teams. We present a task to balance a ball at a target position on a board either bimanually by one participant, or dyadically by two participants, with and without haptic information. The task requires that the two sides coordinate with each other, in real-time, to balance the ball at the target. We found that with training the completion time and number of velocity peaks of the ball decreased, and that participants gradually became consistent in their braking strategy. Moreover, we found that the presence of haptic information improved the performance (decreased completion time) and led to an increase in overall cooperative movements. Overall, our results show that humans can better coordinate with one another when haptic feedback is available. These results also highlight the likely importance of haptic communication in human-robot physical interaction, both as a tool to infer human intentions and to make the robot behaviour interpretable to humans.

Working paper

Kadirvelu B, Burcea G, Costello C, Quint J, Faisal A, et al., 2022, Variation in global COVID-19 symptoms by geography and by chronic disease: a global survey using the COVID-19 Symptom Mapper, EClinicalMedicine, Vol: 45, Pages: 1-15, ISSN: 2589-5370

Background: COVID-19 is typically characterised by a triad of symptoms: cough, fever and loss of taste and smell; however, this varies globally. This study examines variations in COVID-19 symptom profiles based on underlying chronic disease and geographical location. Methods: Using a global online symptom survey of 78,299 responders in 190 countries between 09/04/2020 and 22/09/2020, we conducted an exploratory study to examine symptom profiles associated with a positive COVID-19 test result by country and underlying chronic disease (single, co- or multi-morbidities) using statistical and machine learning methods. Findings: From the results of 7980 responders who tested positive for COVID-19, we find that symptom patterns differ by country. For example, India reported a lower proportion of headache (22.8% vs 47.8%, p<1e-13) and itchy eyes (7.3% vs 16.5%, p=2e-8) than other countries. As with geographic location, we find people differed in their reported symptoms if they suffered from specific chronic diseases. For example, COVID-19 positive responders with asthma (25.3% vs 13.7%, p=7e-6) were more likely to report shortness of breath compared to those with no underlying chronic disease. Interpretation: We have identified variation in COVID-19 symptom profiles depending on geographic location and underlying chronic disease. Failure to reflect this symptom variation in public health messaging may contribute to asymptomatic COVID-19 spread and put patients with chronic diseases at a greater risk of infection. Future work should focus on symptom profile variation in the emerging variants of the SARS-CoV-2 virus. This is crucial to speed up clinical diagnosis, predict prognostic outcomes and target treatment.

Journal article

Shafti A, Derks V, Kay H, Faisal AA, et al., 2022, The response shift paradigm to quantify human trust in AI recommendations, Publisher: arXiv

Explainability, interpretability and how much they affect human trust in AI systems are ultimately problems of human cognition as much as machine learning, yet the effectiveness of AI recommendations and the trust afforded by end-users are typically not evaluated quantitatively. We developed and validated a general-purpose human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions. In our paradigm we confronted human users with quantitative prediction tasks: asking them for a first response, before confronting them with an AI's recommendations (and explanation), and then asking the human user to provide an updated final response. The difference between final and first responses constitutes the shift or sway in the human decision, which we use as a metric of the AI recommendation's impact on the human, representing the trust they place in the AI. We evaluated this paradigm on hundreds of users through Amazon Mechanical Turk using a multi-branched experiment confronting users with good/poor AI systems that had good, poor or no explainability. Our proof-of-principle paradigm allows one to quantitatively compare the rapidly growing set of XAI/IAI approaches in terms of their effect on the end-user and opens up the possibility of (machine) learning trust.

Working paper

Festor P, Shafti A, Harston A, Li M, Orlov P, Faisal AA, et al., 2022, MIDAS: Deep learning human action intention prediction from natural eye movement patterns, Publisher: arXiv

Eye movements have long been studied as a window into the attentional mechanisms of the human brain and made accessible as novelty-style human-machine interfaces. However, not everything that we gaze upon is something we want to interact with; this is known as the Midas Touch problem for gaze interfaces. To overcome the Midas Touch problem, present interfaces tend not to rely on natural gaze cues, but rather use dwell time or gaze gestures. Here we present an entirely data-driven approach to decode human intention for object manipulation tasks based solely on natural gaze cues. We run data collection experiments where 16 participants are given manipulation and inspection tasks to be performed on various objects on a table in front of them. The subjects' eye movements are recorded using wearable eye-trackers, allowing the participants to freely move their head and gaze upon the scene. We use our Semantic Fovea, a convolutional neural network model, to obtain the objects in the scene and their relation to gaze traces at every frame. We then evaluate the data and examine several ways to model the classification task for intention prediction. Our evaluation shows that intention prediction is not a naive result of the data, but rather relies on non-linear temporal processing of gaze cues. We model the task as a time series classification problem and design a bidirectional Long Short-Term Memory (LSTM) network architecture to decode intentions. Our results show that we can decode human intention of motion purely from natural gaze cues and object relative position, with 91.9% accuracy. Our work demonstrates the feasibility of natural gaze as a Zero-UI interface for human-machine interaction, i.e., users will only need to act naturally, and do not need to interact with the interface itself or deviate from their natural eye movement patterns.

Working paper

Harston JA, Faisal AA, 2022, Methods and Models of Eye-Tracking in Natural Environments, Neuromethods, Pages: 49-68

Mobile head-free eye-tracking is one of the most valuable methods we have in vision science for understanding the distribution and dynamics of attention in natural real-world tasks. However, mobile eye-tracking is still a somewhat nascent field, and experimental setups with such devices are not yet fully mature enough for consistently reliable investigation of real-world gaze behavior. Here, we review the development of eye-trackers from their inception to the current state of the art and discuss the experimental methodologies and technologies one can use to investigate natural goal-directed real-world gaze behavior in fully ecological experimental setups. We subsequently expand on the experimental approaches to discuss the modelling approaches used in the field with eye-tracking data, from conventional 2D saliency modelling to more fully embodied gaze approaches that incorporate gaze and motor behavior, which allow us to predict gaze dynamics in fully head-free experimental setups.

Book chapter

Subramanian M, Faisal AA, 2022, Natural Gaze Informatics: Toward Intelligence Assisted Wheelchair Mobility, Neuromethods, Pages: 97-112

Controllers such as the joystick, sip-and-puff, and head mount (gyro control) on the electric wheelchair are not suited to people with severe disabilities. Artificial intelligence (AI) mediated mobility could be a suitable solution, but the human gaze is rarely included in the loop. This chapter focuses on natural gaze informatics for intelligence-assisted wheelchair mobility. Based on the natural gaze behavior during joystick-controlled wheelchair navigation in different scenarios, we developed two decoders for natural gaze-based powered wheelchair control. The "Semantic Empirical Bayesian Decoder" was found to be most intuitive and user-friendly, especially while navigating in more complex environments. Control of backward motion, one of the major drawbacks of our previous "Look Where You Want To Go" interface, was solved using the "Semantic Empirical Bayesian Decoder." The "Continuous Control Field Decoder" was preferred when the users had to follow a straight trajectory. Furthermore, we have harvested natural gaze informatics to decode users' driving intention, which in fusion with an autonomous wheelchair platform allows users to accomplish their routing and steering requirements, thus enabling a natural gaze data-driven powered wheelchair. The human-in-the-loop approach combined with AI systems allows subjects to navigate their wheelchair without the need to interact with a controller in dynamic urban environments. Our natural gaze informatics-based AI control modules provide a new approach to enable wheelchair users to navigate indoors, and we aim to provide an independent urban continuum to the severely disabled and the elderly.

Book chapter

Blum KP, Grogan M, Wu Y, Harston JA, Miller LE, Faisal AA, et al., 2021, Predicting proprioceptive cortical anatomy and neural coding with topographic autoencoders

Proprioception is one of the least understood senses, yet fundamental for the control of movement. Even basic questions of how limb pose is represented in the somatosensory cortex are unclear. We developed a variational autoencoder with topographic lateral connectivity (topo-VAE) to compute a putative cortical map from a large set of natural movement data. Although not fitted to neural data, our model reproduces two sets of observations from monkey centre-out reaching: 1. The shape and velocity dependence of proprioceptive receptive fields in hand-centered coordinates, despite the model having no knowledge of arm kinematics or hand coordinate systems. 2. The distribution of neuronal preferred directions (PDs) recorded from multi-electrode arrays. The model makes several testable predictions: 1. Encoding across the cortex has a blob-and-pinwheel-type geometry of PDs. 2. Few neurons will encode just a single joint. Topo-VAE provides a principled basis for understanding sensorimotor representations, and the theoretical basis of neural manifolds, with applications to the restoration of sensory feedback in brain-computer interfaces and the control of humanoid robots.

Journal article

Shafti SA, Haar Millo S, Mio Zaldivar R, Guilleminot P, Faisal A, et al., 2021, Playing the piano with a robotic third thumb: Assessing constraints of human augmentation, Scientific Reports, Vol: 11, Pages: 1-14, ISSN: 2045-2322

Contemporary robotics gives us mechatronic capabilities for augmenting human bodies with extra limbs. However, how our motor control capabilities pose limits on such augmentation is an open question. We developed a Supernumerary Robotic 3rd Thumb (SR3T) with two degrees-of-freedom controlled by the user's body to endow them with an extra contralateral thumb on the hand. We demonstrate that a pianist can learn to play the piano with 11 fingers within an hour. We then evaluate 6 naïve and 6 experienced piano players in their prior motor coordination and their capability in piano playing with the robotic augmentation. We show that individuals' augmented performance with the SR3T could be explained by our new custom motor coordination assessment, the Human Augmentation Motor Coordination Assessment (HAMCA), performed pre-augmentation. Our work demonstrates how supernumerary robotics can augment humans in skilled tasks and that individual differences in their augmentation capability are explainable by their individual motor coordination abilities.

Journal article

Maimon-Mor RO, Schone HR, Henderson Slater D, Faisal AA, Makin TR, et al., 2021, Early life experience sets hard limits on motor learning as evidenced from artificial arm use, eLife, Vol: 10, ISSN: 2050-084X

Journal article

Parkin B, Daws R, Das Neves I, Violante I, Soreq E, Faisal A, Sandrone S, Lao-Kaim N, Martin-Bastida A, Roussakis A-A, Piccini P, Hampshire A, et al., 2021, Dissociable effects of age and Parkinson's disease on instruction based learning, Brain Communications, Vol: 3, ISSN: 2632-1297

The cognitive deficits associated with Parkinson’s disease vary across individuals and change across time, with implications for prognosis and treatment. Key outstanding challenges are to define the distinct behavioural characteristics of this disorder and develop diagnostic paradigms that can assess these sensitively in individuals. In a previous study, we measured different aspects of attentional control in Parkinson’s disease using an established fMRI switching paradigm. We observed no deficits for the aspects of attention the task was designed to examine; instead those with Parkinson’s disease learnt the operational requirements of the task more slowly. We hypothesized that a subset of people with early-to-mid stage Parkinson’s might be impaired when encoding rules for performing new tasks. Here, we directly test this hypothesis and investigate whether deficits in instruction-based learning represent a characteristic of Parkinson’s Disease. Seventeen participants with Parkinson’s disease (8 male; mean age: 61.2 years), 18 older adults (8 male; mean age: 61.3 years) and 20 younger adults (10 males; mean age: 26.7 years) undertook a simple instruction-based learning paradigm in the MRI scanner. They sorted sequences of coloured shapes according to binary discrimination rules that were updated at two-minute intervals. Unlike common reinforcement learning tasks, the rules were unambiguous, being explicitly presented; consequently, there was no requirement to monitor feedback or estimate contingencies. Despite its simplicity, a third of the Parkinson’s group, but only one older adult, showed marked increases in errors, 4 SD greater than the worst performing young adult. The pattern of errors was consistent, reflecting a tendency to misbind discrimination rules. The misbinding behaviour was coupled with reduced frontal, parietal and anterior caudate activity when rules were being encoded, but not when attention was initially o

Journal article

Ortega P, Faisal A, 2021, Deep Learning multimodal fNIRS & EEG signals for bimanual grip force decoding, Journal of Neural Engineering, Vol: 18, Pages: 1-21, ISSN: 1741-2560

Objective: Non-invasive BMIs offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would enable BMI users to perform a greater range of interactions. We here investigate the decoding of hand-specific forces. Approach: We maximise cortical information by using EEG and fNIRS and developing a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles on which we trained and tested our deep-learning and linear decoders. Main results: The use of EEG and fNIRS improved the decoding of bimanual force, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, the detection of forces was hand-specific and better for the right dominant hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study of cnnatt revealed that forces from each hand were differently encoded at the cortical level. cnnatt also revealed traces of the cortical activity being modulated by the level of force, which was not previously found using linear models. Significance: Our results can be applied to avoid hand cross-talk during hand force decoding to increase the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which are valuable during motor rehabilitation assessment.

Journal article

Festor P, Habil I, Jia Y, Gordon A, Faisal A, Komorowski M, et al., 2021, Levels of Autonomy & Safety Assurance for AI-based Clinical Decision Systems, WAISE 2021: 4th International Workshop on Artificial Intelligence Safety Engineering

Conference paper

Faisal A, Kadirvelu B, Gavriel C, Nageshwaran S, Chan PKJ, Athanasopoulos S, Giunti P, Ricotti V, Voit T, Festenstein R, et al., 2021, Data-derived wearable digital biomarkers predict Frataxin gene expression levels and longitudinal disease progression in Friedreich's Ataxia

Friedreich's ataxia (FA) is a neurodegenerative disease caused by the epigenetic repression of the Frataxin gene modulating mitochondrial activity in the brain, which has a diffuse phenotypic impact on patients' motor behavior. Therefore, with current gold-standard clinical scales, it requires 18–24 month-long clinical trials to determine if disease-modifying therapies are at all beneficial. Our high-performance monitoring approach captures the full-movement kinematics from human subjects using wearable body sensor networks from a cohort of FA patients during their regular clinical visits. We then use artificial intelligence to convert these movement data using universal behavior fingerprints into a digital biomarker of disease state. This enables us to predict two different 'gold-standard' clinical scores (SCAFI, SARA) that serve as primary clinical endpoints. Crucially, by performing gene expression analysis on each patient, their personal Frataxin gene expression levels were poorly, if at all, correlated with their clinical scores – fundamentally failing to establish a link between disease mechanism (dysregulated gene expression) and measures to quantify it in the behavioral phenotype. In contrast, our wearable digital biomarker can accurately predict for each patient their personal FXN gene expression levels, demonstrating the sensitivity of our approach and the importance of FXN levels in FA. Therefore, our data-derived biomarker approach can not only cross-sectionally predict disease and their gene expression levels but also their longitudinal disease trajectory: it is sensitive and accurate enough to detect disease progression with much fewer subjects or shorter time scales than existing primary endpoints. Our work demonstrates that data-derive

Journal article

Maimon-Mor RO, Schone HR, Henderson Slater D, Faisal AA, Makin TR, et al., 2021, Author response: Early life experience sets hard limits on motor learning as evidenced from artificial arm use

Journal article

Festor P, Luise G, Komorowski M, Faisal A, et al., 2021, Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition, ICML2021 workshop on Interpretable Machine Learning in Healthcare

Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems. However, in high-risk environments such as healthcare, manufacturing, automotive or aerospace, it is often challenging to bridge the gap between an apparently optimal policy learned by an agent and its real-world deployment, due to the uncertainties and risk associated with it. Broadly speaking, RL agents face two kinds of uncertainty: 1. aleatoric uncertainty, which reflects randomness or noise in the dynamics of the world, and 2. epistemic uncertainty, which reflects the bounded knowledge of the agent due to model limitations and the finite amount of information/data the agent has acquired about the world. These two types of uncertainty carry fundamentally different implications for the evaluation of performance and the level of risk or trust. Yet these aleatoric and epistemic uncertainties are generally confounded, as standard and even distributional RL is agnostic to this difference. Here we propose how a distributional approach (UA-DQN) can be recast to render uncertainties by decomposing the net effects of each uncertainty. We demonstrate the operation of this method in grid world examples to build intuition and then show a proof-of-concept application for an RL agent operating as a clinical decision support system in critical care.

Conference paper


Stout D, Chaminade T, Apel J, Shafti A, Faisal AA, et al., 2021, The measurement, evolution, and neural representation of action grammars of human behavior, Scientific Reports, Vol: 11, Pages: 1-13, ISSN: 2045-2322

Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.

Journal article

Wu Y, Haar S, Faisal A, 2021, Reproducing Human Motor Adaptation in Spiking Neural Simulation and known Synaptic Learning Rules

Sensorimotor adaptation enables us to adjust our goal-oriented movements in response to external perturbations. These phenomena have been studied experimentally and computationally at the level of human and animal reaching movements, and have clear links to the cerebellum as evidenced by cerebellar lesions and neurodegeneration. Yet, despite our macroscopic understanding of the high-level computational mechanisms, it is unclear how these are mapped and implemented in the neural substrates of the cerebellum at a cellular-computational level. We present here a novel spiking neural circuit model of the sensorimotor system, including a cerebellum, which controls physiological muscle models to reproduce behaviour experiments. Our cerebellar model is composed of spiking neuron populations reflecting cells in the cerebellar cortex and deep cerebellar nuclei, which generate motor corrections to change behaviour in response to perturbations. The model proposes two learning mechanisms for adaptation: predictive learning and memory formation, which are implemented with synaptic updating rules. Our model is tested in a force-field sensorimotor adaptation task and successfully reproduces several phenomena arising from human adaptation, including well-known learning curves, aftereffects, savings and other multi-rate learning effects. This reveals the capability of our model to learn from perturbations and generate motor corrections while providing a bottom-up view of the neural basis of adaptation. Thus, it also shows the potential to predict how patients with specific types of cerebellar damage will perform in behavioural experiments. We explore this by in silico experiments where we selectively incapacitate selected cerebellar circuits of the model which generate and reproduce defined motor learning deficits.

Journal article

Khwaja M, Pieritz S, Faisal AA, Matic A, et al., 2021, Personality and engagement with digital mental health interventions, Pages: 235-239

Personalisation is key to creating successful digital health applications. Recent evidence links personality and preference for digital experience - suggesting that psychometric traits can be a promising basis for personalisation of digital mental health services. However, there is still little quantitative evidence from actual app usage. In this study, we explore how different personality types engage with different intervention content in a commercial mental health application. Specifically, we collected the Big Five personality traits alongside the app usage data of 126 participants using a mobile mental health app for seven days. We found that personality traits significantly correlate with the engagement and user ratings of different intervention content. These findings open a promising research avenue that can inform the personalised delivery of digital mental health content and the creation of recommender systems, ultimately improving the effectiveness of mental health interventions.

Conference paper

Wei X, Ortega P, Faisal A, 2021, Inter-subject deep transfer learning for motor imagery EEG decoding, 10th International IEEE EMBS Conference on Neural Engineering (NER 21), Publisher: IEEE, Pages: 1-4

Convolutional neural networks (CNNs) have become a powerful technique to decode EEG and have become the benchmark for motor imagery EEG brain-computer interface (BCI) decoding. However, it is still challenging to train CNNs on multiple subjects' EEG without decreasing individual performance. This is known as the negative transfer problem, i.e. learning from dissimilar distributions causes CNNs to misrepresent each of them instead of learning a richer representation. As a result, CNNs cannot directly use multiple subjects' EEG to enhance model performance. To address this problem, we extend deep transfer learning techniques to the EEG multi-subject training case. We propose a multi-branch deep transfer network, the Separate-Common-Separate Network (SCSN), based on splitting the network's feature extractors for individual subjects. We also explore the possibility of applying Maximum Mean Discrepancy (MMD) to the SCSN (SCSN-MMD) to better align distributions of features from individual feature extractors. The proposed network is evaluated on the BCI Competition IV 2a dataset (BCICIV2a dataset) and our online recorded dataset. Results show that the proposed SCSN (81.8%, 53.2%) and SCSN-MMD (81.8%, 54.8%) outperformed the benchmark CNN (73.4%, 48.8%) on both datasets using multiple subjects. Our proposed networks show the potential to utilise larger multi-subject datasets to train an EEG decoder without being influenced by negative transfer.

Conference paper

Faisal AA, 2021, Putting touch into action, Science, Vol: 372, Pages: 791-792, ISSN: 0036-8075

Journal article

Patel BV, Haar S, Handslip R, Auepanwiriyakul C, Lee TM-L, Patel S, Harston JA, Hosking-Jervis F, Kelly D, Sanderson B, Borgatta B, Tatham K, Welters I, Camporota L, Gordon AC, Komorowski M, Antcliffe D, Prowle JR, Puthucheary Z, Faisal AA, et al., 2021, Natural history, trajectory, and management of mechanically ventilated COVID-19 patients in the United Kingdom, Intensive Care Medicine, Vol: 47, Pages: 549-565, ISSN: 0342-4642

Purpose: The trajectory of mechanically ventilated patients with coronavirus disease 2019 (COVID-19) is essential for clinical decisions, yet the focus so far has been on admission characteristics without consideration of the dynamic course of the disease in the context of applied therapeutic interventions. Methods: We included adult patients undergoing invasive mechanical ventilation (IMV) within 48 h of intensive care unit (ICU) admission with complete clinical data until ICU death or discharge. We examined the importance of factors associated with disease progression over the first week, implementation and responsiveness to interventions used in acute respiratory distress syndrome (ARDS), and ICU outcome. We used machine learning (ML) and Explainable Artificial Intelligence (XAI) methods to characterise the evolution of clinical parameters, and our ICU data visualisation tool is available as a web-based widget (https://www.CovidUK.ICU). Results: Data for 633 adults with COVID-19 who underwent IMV between 01 March 2020 and 31 August 2020 were analysed. Overall mortality was 43.3% and highest with non-resolution of hypoxaemia [60.4% vs 17.6%; P < 0.001; median PaO2/FiO2 on the day of death was 12.3 (8.9–18.4) kPa] and non-response to proning (69.5% vs 31.1%; P < 0.001). Two ML models using weeklong data demonstrated an increased predictive accuracy for mortality compared to admission data (74.5% and 76.3% vs 60%, respectively). XAI models highlighted the increasing importance, over the first week, of PaO2/FiO2 in predicting mortality. Prone positioning improved oxygenation only in 45% of patients. A higher peak pressure (OR 1.42 [1.06–1.91]; P < 0.05), raised respiratory component (OR 1.71 [1.17–2.5]; P < 0.01) and cardiovascular component (OR 1.36 [1.04–1.75]; P < 0.05) of the sequential organ failure assessment (SOFA) score and raised lactate (OR 1.33 [0.99–1.79

Journal article

Dudley WL, Faisal A, Shafti SA, 2021, Real-world to virtual - flexible and scalable investigations of human-agent collaboration, CHI Workshop 2021

Conference paper

Pieritz S, Khwaja M, Faisal A, Matic A et al., 2021, Personalised recommendations in mental health Apps: the impact of autonomy and data sharing, ACM Conference on Human Factors in Computing Systems (CHI), Publisher: ACM, Pages: 1-12

The recent growth of digital interventions for mental well-being prompts a call to arms to explore the delivery of personalised recommendations from a user's perspective. In a randomised placebo study with a two-way factorial design, we analysed the difference between an autonomous user experience and personalised guidance, with respect to both users' preference and their actual usage of a mental well-being app. Furthermore, we explored users' preference in sharing their data for receiving personalised recommendations, by juxtaposing questionnaires and mobile sensor data. Interestingly, self-reported results indicate a preference for personalised guidance, whereas behavioural data suggest that a blend of autonomous choice and recommended activities results in higher engagement. Additionally, although users reported a strong preference for filling out questionnaires instead of sharing their mobile data, the data source did not have any impact on actual app use. We discuss the implications of our findings and provide takeaways for designers of mental well-being applications.

Conference paper

Kadirvelu B, Burcea G, Quint JK, Costelloe CE, Faisal AA et al., 2021, Covid-19 does not look like what you are looking for: Clustering symptoms by nation and multi-morbidities reveal substantial differences to the classical symptom triad

COVID-19 is by convention characterised by a triad of symptoms: cough, fever and loss of taste/smell. The aim of this study was to examine clustering of COVID-19 symptoms based on underlying chronic disease and geographical location. Using a large global symptom survey of 78,299 responders in 190 different countries, we examined symptom profiles in relation to geolocation (grouped by country) and underlying chronic disease (single, co- or multi-morbidities) associated with a positive COVID-19 test result, using statistical and machine learning methods to group populations by underlying disease, countries, and symptoms. Taking the responses of 7980 responders with a COVID-19 positive test in the top 5 contributing countries, we find that the most frequently reported symptoms differ across the globe: for example, fatigue 4108 (51.5%), headache 3640 (45.6%) and loss of smell and taste 3563 (44.6%) are the most reported symptoms globally. However, symptom patterns differ by continent; India reported a significantly lower proportion of headache (22.8% vs 45.6%, p < 0.05) and itchy eyes (7.0% vs 15.3%, p < 0.05) than other countries, as did Pakistan (33.6% vs 45.6%, p < 0.05 and 8.6% vs 15.3%, p < 0.05). Mexico and Brazil reported significantly fewer of these symptoms. As with geographic location, we find that people differed in their reported symptoms if they suffered from specific underlying diseases. For example, COVID-19 positive responders with asthma or other lung disease were more likely to report shortness of breath as a symptom, compared with COVID-19 positive responders who had no underlying disease (25.3% vs 13.7%, p < 0.05, and 24.2% vs 13.7%, p < 0.05). Responders with no underlying chronic diseases were more likely to report loss of smell and taste as a symptom (46%), compared with responders with type 1 diabetes (21.3%), type 2 diabetes (33.5%), lung disease (29.3%), or hype
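Group comparisons of the form "headache 22.8% vs 45.6%, p < 0.05" can be checked with a standard two-proportion z-test. A minimal sketch, using a hypothetical per-country count (228/1000) against the reported global headache count (3640/7980); the study's actual per-country denominators are not reproduced here:

```python
import math

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-proportion z-test statistic: is the symptom rate
    in group 1 different from the rate in group 2?"""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                      # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical country counts shaped like the 22.8% vs 45.6% comparison.
z = two_prop_z(228, 1000, 3640, 7980)
print(f"z = {z:.2f}")  # |z| > 1.96 corresponds to p < 0.05 (two-sided)
```

With samples this large even modest differences in proportion produce |z| far above the 1.96 threshold.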

Journal article

Faisal A, Lannou EL, Post B, Haar S, Brett S, Kadirvelu B et al., 2021, Clustering of patient comorbidities within electronic medical records enables high-precision COVID-19 mortality prediction

We present an explainable AI framework to predict mortality after a positive COVID-19 diagnosis based solely on data routinely collected in electronic healthcare records (EHRs) obtained prior to diagnosis. We grounded our analysis on the ½ million people UK Biobank and linked NHS COVID-19 records. We developed a method to capture the complexities and large variety of clinical codes present in EHRs, and we show that these have a larger impact on risk than all other patient data but age. We use a form of clustering for natural language processing of the clinical codes, specifically topic modelling by Latent Dirichlet Allocation (LDA), to generate a succinct digital fingerprint of a patient’s full secondary care clinical history, i.e. their comorbidities and past interventions. These digital comorbidity fingerprints offer immediately interpretable clinical descriptions that are meaningful, e.g. grouping cardiovascular disorders with common risk factors, but also novel groupings that are not obvious. The comorbidity fingerprints differ in both their breadth and depth from existing observational disease associations in the COVID-19 literature. Taking this data-driven approach allows us to avoid human-induction bias and confirmation bias during selection of what are important potential predictors of COVID-19 mortality. Together with age, these digital fingerprints are the single most important factor in our predictor. This holds the potential for improving individual risk profiling for clinical decisions and the identification of groups for public health interventions such as vaccine programmes. Combining our digital precondition fingerprints with demographic characteristics allows us to match or exceed the performance of existing state-of-the-art COVID-19 mortality predictors (EHCF) which have been developed through expert consensus. Our precondition fingerprinting and entire mortality predictio
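The comorbidity-fingerprint pipeline starts by treating each patient's clinical-code history as a bag-of-codes "document", the document-term representation that LDA topic modelling consumes. A minimal sketch of that first step, with hypothetical ICD-10-style codes (the paper uses full secondary care clinical codes from linked EHRs, not this toy vocabulary):

```python
from collections import Counter

# Hypothetical code histories; two invented patients for illustration.
patients = {
    "p1": ["I10", "E11", "I25", "I10"],  # hypertension x2, T2 diabetes, IHD
    "p2": ["J45", "J44", "J45"],         # asthma x2, COPD
}

# Each patient's code history becomes a bag-of-codes "document".
vocab = sorted({code for codes in patients.values() for code in codes})

def bag_of_codes(codes):
    counts = Counter(codes)
    return [counts.get(code, 0) for code in vocab]

docs = {pid: bag_of_codes(codes) for pid, codes in patients.items()}
print(vocab)       # ['E11', 'I10', 'I25', 'J44', 'J45']
print(docs["p1"])  # [1, 2, 1, 0, 0]
```

Fitting LDA on the resulting document-term matrix (e.g. with scikit-learn's LatentDirichletAllocation) would then yield a per-patient topic mixture: a succinct vector like the paper's digital comorbidity fingerprint.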

Journal article

Lannou EL, Post B, Haar S, Brett SJ, Kadirvelu B, Faisal AA et al., 2021, Clustering of patient comorbidities within electronic medical records enables high-precision COVID-19 mortality prediction


Journal article

Shafti SA, Tjomsland J, Dudley W, Faisal A et al., 2021, Real-world human-robot collaborative reinforcement learning, IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Publisher: IEEE, Pages: 11161-11166, ISSN: 2153-0866

The intuitive collaboration of humans and intelligent robots (embodied AI) in the real world is an essential objective for many desirable applications of robotics. Whilst there is much research regarding explicit communication, we focus on how humans and robots interact implicitly, at the level of motor adaptation. We present a real-world setup of a human-robot collaborative maze game, designed to be non-trivial and only solvable through collaboration, by limiting the actions to rotations of two orthogonal axes and assigning each axis to one player. This results in neither the human nor the agent being able to solve the game on their own. We use deep reinforcement learning for the control of the robotic agent and achieve results within 30 minutes of real-world play, without any type of pre-training. We then use this setup to perform systematic experiments on human/agent behaviour and adaptation when co-learning a policy for the collaborative game. We present results on how co-policy learning occurs over time between the human and the robotic agent, resulting in each participant’s agent serving as a representation of how they would play the game. This allows us to relate a person’s success when playing with agents other than their own to the similarity between that agent’s policy and their own agent’s policy.
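At the core of the setup is a reinforcement-learning agent improving a value estimate from live play of its assigned axis. As a toy illustration of the underlying value-update rule only (the paper uses deep RL on a physical maze, not tabular Q-learning, and all parameters below are invented), here is a sketch in which an agent learns to tilt its single axis toward a goal position:

```python
import random

# Toy stand-in for one axis of the maze: 5 tilt positions, actions tilt
# left (-1) or right (+1), and the goal is position 4.
N_STATES, ACTIONS, GOAL = 5, (-1, +1), 4
ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

random.seed(0)
for _ in range(500):  # episodes of play on the toy task
    s = 0
    while s != GOAL:
        # epsilon-greedy action selection
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s2 == GOAL else -0.01  # goal reward, small step cost
        best_next = max(Q[(s2, a2)] for a2 in ACTIONS)
        Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])
        s = s2

# Greedy policy: the learned agent tilts right from every non-goal state.
policy = {s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(GOAL)}
print(policy)
```

In the paper's setting the second axis is driven by the human, so the agent's learned policy additionally reflects, and adapts to, its partner's behaviour; this single-player sketch omits that coupling.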

Conference paper

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.