Kadirvelu B, Bellido Bel T, Wu X, et al., 2023, Mindcraft, a mobile mental health monitoring platform for children and young people: development and acceptability pilot study, JMIR Formative Research, Vol: 7, Pages: 1-13, ISSN: 2561-326X
BACKGROUND: Children and young people's mental health is a growing public health concern, which is further exacerbated by the COVID-19 pandemic. Mobile health apps, particularly those using passive smartphone sensor data, present an opportunity to address this issue and support mental well-being. OBJECTIVE: This study aimed to develop and evaluate a mobile mental health platform for children and young people, Mindcraft, which integrates passive sensor data monitoring with active self-reported updates through an engaging user interface to monitor their well-being. METHODS: A user-centered design approach was used to develop Mindcraft, incorporating feedback from potential users. User acceptance testing was conducted with a group of 8 young people aged 15-17 years, followed by a pilot test with 39 secondary school students aged 14-18 years, which was conducted for a 2-week period. RESULTS: Mindcraft showed encouraging user engagement and retention. Users reported that they found the app to be a friendly tool helping them to increase their emotional awareness and gain a better understanding of themselves. Over 90% of users (36/39, 92.3%) answered all active data questions on the days they used the app. Passive data collection facilitated the gathering of a broader range of well-being metrics over time, with minimal user intervention. CONCLUSIONS: The Mindcraft app has shown promising results in monitoring mental health symptoms and promoting user engagement among children and young people during its development and initial testing. The app's user-centered design, the focus on privacy and transparency, and a combination of active and passive data collection strategies have all contributed to its efficacy and receptiveness among the target demographic. By continuing to refine and expand the app, the Mindcraft platform has the potential to contribute meaningfully to the field of mental health care for young people.
Wei X, Faisal AA, 2023, Federated deep transfer learning for EEG decoding using multiple BCI tasks, 11th International IEEE/EMBS Conference on Neural Engineering (NER 2023), Publisher: IEEE, ISSN: 1948-3554
Deep learning has been successful in BCI decoding. However, it is very data-hungry and requires pooling data from multiple sources. EEG data from various sources decrease the decoding performance due to negative transfer. Recently, transfer learning for EEG decoding has been suggested as a remedy and has become the subject of recent BCI competitions (e.g. BEETL), but there are two complications in combining data from many subjects. First, privacy is not protected, as highly personal brain data needs to be shared (and copied across increasingly tight information governance boundaries). Moreover, BCI data are collected from different sources and are often based on different BCI tasks, which has been thought to limit their reusability. Here, we demonstrate a federated deep transfer learning technique, the Multi-dataset Federated Separate-Common-Separate Network (MF-SCSN), based on our previous work on SCSN, which integrates privacy-preserving properties into deep transfer learning to utilise data sets with different tasks. This framework trains a BCI decoder using different source data sets obtained from different imagery tasks (e.g. some data sets with hands and feet vs others with single hands and tongue). Therefore, by introducing privacy-preserving transfer learning techniques, we unlock the reusability and scalability of existing BCI data sets. We evaluated our federated transfer learning method on the NeurIPS 2021 BEETL competition BCI task. The proposed architecture outperformed the baseline decoder by 3%. Moreover, compared with the baseline and other transfer learning algorithms, our method protects the privacy of the brain data from different data centres.
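The privacy-preserving aggregation idea behind such federated training can be illustrated with a minimal sketch: each site updates a model on its own private data and shares only parameters, never the raw EEG. Everything below (the toy one-step local update, the weight shapes, the data) is illustrative, not the MF-SCSN architecture itself, which uses deep networks with shared and per-dataset layers.

```python
# Toy federated averaging: sites exchange model weights, not raw data.

def local_update(weights, data, lr=0.1):
    """One local training step: nudge each weight toward the site's
    data mean (a stand-in for a real gradient step on private data)."""
    mean = sum(data) / len(data)
    return [w - lr * (w - mean) for w in weights]

def federated_average(client_weights):
    """Server-side aggregation: element-wise average of client weights."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_w = [0.0, 0.0]
site_a = local_update(global_w, [1.0, 3.0])   # data private to site A
site_b = local_update(global_w, [5.0, 7.0])   # data private to site B
global_w = federated_average([site_a, site_b])
```

The server only ever sees `site_a` and `site_b` (parameter lists), which is the sense in which raw recordings stay behind each centre's governance boundary.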
Wannawas N, Faisal AA, 2023, Towards AI-controlled FES-restoration of arm movements: controlling for progressive muscular fatigue with Gaussian State-Space Models, 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 1-4, ISSN: 1948-3554
Reaching disability limits an individual's ability to perform daily tasks. Surface Functional Electrical Stimulation (FES) offers a non-invasive solution to restore the lost abilities. However, inducing desired movements using FES is still an open engineering problem. This problem is accentuated by the complexities of human arms' neuromechanics and the variations across individuals. Reinforcement Learning (RL) emerges as a promising approach to govern customised control rules for different subjects and settings. Yet, one remaining challenge of using RL to control FES is unobservable muscle fatigue that progressively changes as an unknown function of the stimulation, breaking the Markovian assumption of RL. In this work, we present a method to address the unobservable muscle fatigue issue, allowing our RL controller to achieve higher control performance. Our method is based on a Gaussian State-Space Model (GSSM) that utilizes recurrent neural networks to learn Markovian state-spaces from partial observations. The GSSM is used as a filter that converts the observations into the state-space representation for RL to preserve the Markovian assumption. Here, we start by presenting the modification of the original GSSM to address an overconfidence issue. We then present the interaction between RL and the modified GSSM, followed by the setup for FES control learning. We test our RL-GSSM system on a planar reaching setting in simulation using a detailed neuromechanical model and show that the GSSM can help RL maintain its control performance against fatigue.
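The filtering idea, recovering an unobserved, stimulation-dependent fatigue state so the RL agent sees a Markovian input, can be sketched with a toy recursion. The linear update and the `alpha`/`recovery` constants below are illustrative assumptions; the paper's GSSM learns this state-space with recurrent neural networks rather than a hand-set rule.

```python
# Toy illustration: restore the Markov property by recursively estimating
# hidden fatigue (which accumulates with stimulation and decays at rest)
# and appending the estimate to each observation.

def fatigue_filter(stimulations, alpha=0.2, recovery=0.95):
    """Recursive fatigue estimate: decays toward rest, grows with
    stimulation intensity. Returns the estimated trajectory."""
    f, states = 0.0, []
    for u in stimulations:
        f = recovery * f + alpha * u   # hidden-state update
        states.append(f)
    return states

def augment_observations(observations, stimulations):
    """Markovian state for the RL agent: observation + fatigue estimate."""
    return list(zip(observations, fatigue_filter(stimulations)))

# Constant stimulation: the fatigue estimate rises monotonically.
traj = fatigue_filter([1.0] * 5)
```

Feeding `augment_observations(...)` instead of raw observations to the policy is the structural role the GSSM filter plays in the RL-GSSM system.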
Wannawas N, Faisal AA, 2023, Towards AI-controlled FES-restoration of arm movements: neuromechanics-based reinforcement learning for 3-D reaching, 2023 11th International IEEE/EMBS Conference on Neural Engineering (NER), Publisher: IEEE, Pages: 1-4, ISSN: 1948-3554
Reaching disabilities affect the quality of life. Functional Electrical Stimulation (FES) can restore lost motor functions. Yet, there remain challenges in controlling FES to induce desired movements. Neuromechanical models are valuable tools for developing FES control methods. However, focusing on the upper extremity areas, several existing models are either overly simplified or too computationally demanding for control purposes. Besides the model-related issues, finding a general method for governing the control rules for different tasks and subjects remains an engineering challenge. Here, we present our approach toward FES-based restoration of arm movements to address those fundamental issues in controlling FES. Firstly, we present our surface-FES-oriented neuromechanical models of human arms built using well-accepted, open-source software. The models are designed to capture significant dynamics in FES controls with minimal computational cost. Our models are customisable and can be used for testing different control methods. Secondly, we present the application of reinforcement learning (RL) as a general method for governing the control rules. In combination, our customisable models and RL-based control method open the possibility of delivering customised FES controls for different subjects and settings with minimal engineering intervention. We demonstrate our approach in planar and 3D settings.
Ricotti V, Balasundaram K, Victoria S, et al., 2023, Wearable full-body motion tracking of daily-life activities predicts disease trajectory in Duchenne Muscular Dystrophy, Nature Medicine, Vol: 29, Pages: 95-103, ISSN: 1078-8956
Artificial intelligence has the potential to revolutionize health care, yet clinical trials in neurological diseases continue to rely on subjective, semiquantitative and motivation-dependent endpoints for drug development. To overcome this limitation, we collected digital readouts of whole-body movement behaviour of Duchenne muscular dystrophy patients (n=21) and age-matched controls (n=17). Movement behaviour was assessed while the participant engaged in everyday activities using a 17-sensor body suit during 3 clinical visits over the course of 12 months. We first defined novel movement behavioural fingerprints capable of distinguishing DMD from controls. Then, we used machine learning algorithms that combined the behavioural fingerprints to make cross-sectional and longitudinal disease course predictions, which outperformed predictions derived from currently used clinical assessments. Finally, using Bayesian Optimization, we constructed a behavioural biomarker, termed the KineDMD ethomic biomarker, that is derived from daily-life behavioural data and whose value progresses with age in an S-shaped sigmoid curve form. By combining an approach that embraces daily-life motor behaviour with machine learning, our biomarker provides a potential pathway for determining when a new therapy effect occurs or wears off.
Kadirvelu B, Gavriel C, Nageshwaran S, et al., 2023, A wearable motion capture suit and machine learning predict disease progression in Friedreich's ataxia., Nature Medicine, Vol: 29, Pages: 86-94, ISSN: 1078-8956
Friedreich's ataxia (FA) is caused by a variant of the Frataxin (FXN) gene, leading to its downregulation and progressively impaired cardiac and neurological function. Current gold-standard clinical scales use simplistic behavioral assessments, which require 18- to 24-month-long trials to determine if therapies are beneficial. Here we captured full-body movement kinematics from patients with wearable sensors, enabling us to define digital behavioral features based on the data from nine FA patients (six females and three males) and nine age- and sex-matched controls, who performed the 8-m walk (8-MW) test and 9-hole peg test (9 HPT). We used machine learning to combine these features to longitudinally predict the clinical scores of the FA patients, and compared these with two standard clinical assessments, Spinocerebellar Ataxia Functional Index (SCAFI) and Scale for the Assessment and Rating of Ataxia (SARA). The digital behavioral features enabled longitudinal predictions of personal SARA and SCAFI scores 9 months into the future and were 1.7 and 4 times more precise than longitudinal predictions using only SARA and SCAFI scores, respectively. Unlike the two clinical scales, the digital behavioral features accurately predicted FXN gene expression levels for each FA patient in a cross-sectional manner. Our work demonstrates how data-derived wearable biomarkers can track personal disease trajectories and indicates the potential of such biomarkers for substantially reducing the duration or size of clinical trials testing disease-modifying therapies and for enabling behavioral transcriptomics.
Wei X, Faisal AA, Grosse-Wentrup M, et al., 2022, 2021 BEETL competition: advancing transfer learning for subject independence and heterogenous EEG data sets, NeurIPS 2021 Competitions and Demonstrations Track, Proceedings of Machine Learning Research, Publisher: PMLR, Pages: 1-16
Transfer learning and meta-learning offer some of the most promising avenues to unlock the scalability of healthcare and consumer technologies driven by biosignal data. This is because regular machine learning methods cannot generalise well across human subjects and handle learning from different, heterogeneously collected data sets, thus limiting the scale of training data available. On the other hand, the many developments in transfer- and meta-learning fields would benefit significantly from a real-world benchmark with immediate practical application. Therefore, we pick electroencephalography (EEG) as an exemplar for all the things that make biosignal data analysis a hard problem. We design two transfer learning challenges around a. clinical diagnostics and b. neurotechnology. These two challenges are designed to probe algorithmic performance with all the challenges of biosignal data, such as low signal-to-noise ratios, major variability among subjects, differences in the data recording sessions and techniques, and even between the specific BCI tasks recorded in the dataset. Task 1 is centred on the field of medical diagnostics, addressing automatic sleep stage annotation across subjects. Task 2 is centred on Brain-Computer Interfacing (BCI), addressing motor imagery decoding across both subjects and data sets. The successful 2021 BEETL competition with its over 30 competing teams and its 3 winning entries brought attention to the potential of deep transfer learning and combinations of set theory and conventional machine learning techniques to overcome the challenges. The results set a new state-of-the-art for the real-world BEETL benchmarks.
Post B, Badea C, Faisal A, et al., 2022, Breaking bad news in the era of artificial intelligence and algorithmic medicine: an exploration of disclosure and its ethical justification using the hedonic calculus, AI and Ethics, ISSN: 2730-5961
An appropriate ethical framework around the use of Artificial Intelligence (AI) in healthcare has become a key desirable with the increasingly widespread deployment of this technology. Advances in AI hold the promise of improving the precision of outcome prediction at the level of the individual. However, the addition of these technologies to patient–clinician interactions, as with any complex human interaction, has potential pitfalls. While physicians have always had to carefully consider the ethical background and implications of their actions, detailed deliberations around fast-moving technological progress may not have kept up. We use a common but key challenge in healthcare interactions, the disclosure of bad news (likely imminent death), to illustrate how the philosophical framework of the 'Felicific Calculus' developed in the eighteenth century by Jeremy Bentham, may have a timely quasi-quantitative application in the age of AI. We show how this ethical algorithm can be used to assess, across seven mutually exclusive and exhaustive domains, whether an AI-supported action can be morally justified.
Honeyford K, Expert P, Mendelsohn EE, et al., 2022, Challenges and recommendations for high quality research using electronic health records, Frontiers in Digital Health, Vol: 4, ISSN: 2673-253X
Harnessing Real World Data is vital to improve health care in the 21st Century. Data from Electronic Health Records (EHRs) are a rich source of patient centred data, including information on the patient's clinical condition, laboratory results, diagnoses and treatments. They thus reflect the true state of health systems. However, access and utilisation of EHR data for research presents specific challenges. We assert that using data from EHRs effectively is dependent on synergy between researchers, clinicians and health informaticians, and only this will allow state of the art methods to be used to answer urgent and vital questions for patient care. We propose that there needs to be a paradigm shift in the way this research is conducted - appreciating that the research process is iterative rather than linear. We also make specific recommendations for organisations, based on our experience of developing and using EHR data in trusted research environments.
Festor P, Jia Y, Gordon A, et al., 2022, Assuring the safety of AI-based clinical decision support systems: a case study of the AI Clinician for sepsis treatment, BMJ Health & Care Informatics, Vol: 29, ISSN: 2632-1009
Study objectives: Establishing confidence in the safety of AI-based clinical decision support systems is important prior to clinical deployment and regulatory approval for systems with increasing autonomy. Here, we undertook safety assurance of the AI Clinician, a previously published reinforcement learning-based treatment recommendation system for sepsis. Methods: As part of the safety assurance, we defined four clinical hazards in sepsis resuscitation based on clinical expert opinion and the existing literature. We then identified a set of unsafe scenarios and created safety constraints, intended to limit the action space of the AI agent with the goal of reducing the likelihood of hazardous decisions. Results: Using a subset of the MIMIC-III database, we demonstrated that our previously published “AI Clinician” recommended fewer hazardous decisions than human clinicians in three out of our four pre-defined clinical scenarios, while the difference was not statistically significant in the fourth scenario. Then, we modified the reward function to satisfy our safety constraints and trained a new AI Clinician agent. The retrained model shows enhanced safety, without negatively impacting model performance. Discussion: While some contextual patient information absent from the data may have pushed human clinicians to take hazardous actions, the data was curated to limit the impact of this confounder. Conclusion: These advances provide a use case for the systematic safety assurance of AI-based clinical systems, towards the generation of explicit safety evidence, which could be replicated for other AI applications or other clinical contexts, and inform medical device regulatory bodies.
Currie SP, Ammer JJ, Premchand B, et al., 2022, Movement-specific signaling is differentially distributed across motor cortex layer 5 projection neuron classes, Cell Reports, Vol: 39, ISSN: 2211-1247
Liu Y, Leib R, Dudley W, et al., 2022, The role of haptic communication in dyadic collaborative object manipulation tasks, Publisher: arXiv
Intuitive and efficient physical human-robot collaboration relies on the mutual observability of the human and the robot, i.e. the two entities being able to interpret each other's intentions and actions. This is addressed by a myriad of methods involving human sensing or intention decoding, as well as human-robot turn-taking and sequential task planning. However, the physical interaction establishes a rich channel of communication through forces, torques and haptics in general, which is often overlooked in industrial implementations of human-robot interaction. In this work, we investigate the role of haptics in human collaborative physical tasks, to identify how to integrate physical communication in human-robot teams. We present a task to balance a ball at a target position on a board either bimanually by one participant, or dyadically by two participants, with and without haptic information. The task requires that the two sides coordinate with each other, in real time, to balance the ball at the target. We found that with training the completion time and number of velocity peaks of the ball decreased, and that participants gradually became consistent in their braking strategy. Moreover, we found that the presence of haptic information improved the performance (decreased completion time) and led to an increase in overall cooperative movements. Overall, our results show that humans can better coordinate with one another when haptic feedback is available. These results also highlight the likely importance of haptic communication in human-robot physical interaction, both as a tool to infer human intentions and to make the robot behaviour interpretable to humans.
Kadirvelu B, Burcea G, Costello C, et al., 2022, Variation in global COVID-19 symptoms by geography and by chronic disease: a global survey using the COVID-19 Symptom Mapper, EClinicalMedicine, Vol: 45, Pages: 1-15, ISSN: 2589-5370
Background: COVID-19 is typically characterised by a triad of symptoms: cough, fever and loss of taste and smell; however, this varies globally. This study examines variations in COVID-19 symptom profiles based on underlying chronic disease and geographical location. Methods: Using a global online symptom survey of 78,299 responders in 190 countries between 09/04/2020 and 22/09/2020, we conducted an exploratory study to examine symptom profiles associated with a positive COVID-19 test result by country and underlying chronic disease (single, co- or multi-morbidities) using statistical and machine learning methods. Findings: From the results of 7980 responders who tested positive for COVID-19, we find that symptom patterns differ by country. For example, India reported a lower proportion of headache (22.8% vs 47.8%, p<1e-13) and itchy eyes (7.3% vs 16.5%, p=2e-8) than other countries. As with geographic location, we find people differed in their reported symptoms if they suffered from specific chronic diseases. For example, COVID-19-positive responders with asthma (25.3% vs 13.7%, p=7e-6) were more likely to report shortness of breath compared to those with no underlying chronic disease. Interpretation: We have identified variation in COVID-19 symptom profiles depending on geographic location and underlying chronic disease. Failure to reflect this symptom variation in public health messaging may contribute to asymptomatic COVID-19 spread and put patients with chronic diseases at a greater risk of infection. Future work should focus on symptom profile variation in the emerging variants of the SARS-CoV-2 virus. This is crucial to speed up clinical diagnosis, predict prognostic outcomes and target treatment.
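Proportion comparisons of the kind reported above (e.g. headache: 22.8% vs 47.8%) are typically produced by a two-proportion z-test. The sketch below uses hypothetical counts, since the per-group denominators are not given here, and is a standard textbook test rather than the study's exact analysis pipeline.

```python
# Two-proportion z-test with a pooled standard error; p-value from the
# normal CDF via math.erf (standard library only).
import math

def two_proportion_z(x1, n1, x2, n2):
    """Return (z, two-sided p-value) for H0: p1 == p2."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                        # pooled proportion
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # pooled std error
    z = (p1 - p2) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical counts: 228/1000 vs 478/1000 responders reporting headache.
z, p = two_proportion_z(228, 1000, 478, 1000)
```

With a difference this large and ~1000 responders per group, the p-value is far below conventional significance thresholds, consistent in spirit with the very small p-values quoted in the abstract.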
Shafti A, Derks V, Kay H, et al., 2022, The response shift paradigm to quantify human trust in AI recommendations, Publisher: arXiv
Explainability, interpretability and how much they affect human trust in AI systems are ultimately problems of human cognition as much as machine learning, yet the effectiveness of AI recommendations and the trust afforded by end-users are typically not evaluated quantitatively. We developed and validated a general-purpose human-AI interaction paradigm which quantifies the impact of AI recommendations on human decisions. In our paradigm we confronted human users with quantitative prediction tasks: asking them for a first response, before confronting them with an AI's recommendations (and explanation), and then asking the human user to provide an updated final response. The difference between final and first responses constitutes the shift or sway in the human decision, which we use as a metric of the AI recommendation's impact on the human, representing the trust they place in the AI. We evaluated this paradigm on hundreds of users through Amazon Mechanical Turk using a multi-branched experiment confronting users with good/poor AI systems that had good, poor or no explainability. Our proof-of-principle paradigm allows one to quantitatively compare the rapidly growing set of XAI/IAI approaches in terms of their effect on the end-user and opens up the possibility of (machine) learning trust.
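The core measure, the difference between a user's final and first answers after seeing an AI recommendation, is simple to state in code. The normalised variant below follows the common "weight of advice" convention and is added here as an assumption; the abstract itself only defines the raw shift.

```python
# Response-shift metrics for a first-answer / advice / final-answer trial.

def response_shift(first, final):
    """Raw sway in the human decision after the AI recommendation."""
    return final - first

def weight_of_advice(first, final, advice):
    """Normalised shift: 0 = AI ignored, 1 = AI recommendation adopted.
    (Common 'weight of advice' convention, assumed for illustration.)"""
    if advice == first:
        return 0.0
    return (final - first) / (advice - first)

# A user first guesses 40, sees the AI suggest 60, and settles on 55.
shift = response_shift(40, 55)
woa = weight_of_advice(40, 55, 60)
```

Averaging such per-trial values over many users is what lets good/poor AI systems, with or without explanations, be compared quantitatively.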
Festor P, Shafti A, Harston A, et al., 2022, MIDAS: Deep learning human action intention prediction from natural eye movement patterns, Publisher: arXiv
Eye movements have long been studied as a window into the attentional mechanisms of the human brain and made accessible as novelty-style human-machine interfaces. However, not everything that we gaze upon is something we want to interact with; this is known as the Midas Touch problem for gaze interfaces. To overcome the Midas Touch problem, present interfaces tend not to rely on natural gaze cues, but rather use dwell time or gaze gestures. Here we present an entirely data-driven approach to decode human intention for object manipulation tasks based solely on natural gaze cues. We run data collection experiments where 16 participants are given manipulation and inspection tasks to be performed on various objects on a table in front of them. The subjects' eye movements are recorded using wearable eye-trackers, allowing the participants to freely move their head and gaze upon the scene. We use our Semantic Fovea, a convolutional neural network model, to obtain the objects in the scene and their relation to gaze traces at every frame. We then evaluate the data and examine several ways to model the classification task for intention prediction. Our evaluation shows that intention prediction is not a naive result of the data, but rather relies on non-linear temporal processing of gaze cues. We model the task as a time-series classification problem and design a bidirectional Long Short-Term Memory (LSTM) network architecture to decode intentions. Our results show that we can decode human intention of motion purely from natural gaze cues and object relative position, with 91.9% accuracy. Our work demonstrates the feasibility of natural gaze as a Zero-UI interface for human-machine interaction, i.e., users will only need to act naturally, and do not need to interact with the interface itself or deviate from their natural eye movement patterns.
Subramanian M, Faisal AA, 2022, Natural Gaze Informatics: Toward Intelligence Assisted Wheelchair Mobility, Neuromethods, Pages: 97-112
Controllers such as the joystick, sip-and-puff, and head mount (gyro control) in the electric wheelchair cannot accommodate people with severe disabilities. Artificial intelligence (AI) mediated mobility could be a suitable solution, but the human gaze is rarely included in the loop. This chapter focuses on natural gaze informatics for intelligence-assisted wheelchair mobility. Based on the natural gaze behavior during joystick-controlled wheelchair navigation in different scenarios, we developed two decoders for natural gaze-based powered wheelchair control. The “Semantic Empirical Bayesian Decoder” was found to be the most intuitive and user-friendly, especially while navigating in more complex environments. Control of backward motion, one of the major drawbacks of our previous “Look Where You Want To Go” interface, was solved using the “Semantic Empirical Bayesian Decoder.” The “Continuous Control Field Decoder” was preferred when the users had to follow a straight trajectory. Furthermore, we have harvested natural gaze informatics to decode the user’s driving intention, which in fusion with an autonomous wheelchair platform allows users to accomplish their routing and steering requirements, thus enabling a natural gaze data-driven powered wheelchair. The human-in-the-loop approach combined with AI systems allows subjects to navigate their wheelchair without the need to interact with a controller in dynamic urban environments. Our natural gaze informatics-based AI control modules provide a new approach to enable wheelchair users to navigate indoors, and we aim to provide an independent urban continuum to the severely disabled and the elderly.
Harston JA, Faisal AA, 2022, Methods and Models of Eye-Tracking in Natural Environments, Neuromethods, Pages: 49-68
Mobile head-free eye-tracking is one of the most valuable methods we have in vision science for understanding the distribution and dynamics of attention in natural real-world tasks. However, mobile eye-tracking is still a somewhat nascent field, and experimental setups with such devices are not yet fully mature enough for consistently reliable investigation of real-world gaze behavior. Here, we review the development of eye-trackers from their inception to the current state of the art and discuss the experimental methodologies and technologies one can use to investigate natural goal-directed real-world gaze behavior in fully ecological experimental setups. We subsequently expand on the experimental approaches to discuss the modelling approaches used in the field with eye-tracking data, from conventional 2D saliency modelling to more fully embodied gaze approaches that incorporate gaze and motor behavior, which allow us to predict gaze dynamics in fully head-free experimental setups.
Blum KP, Grogan M, Wu Y, et al., 2021, Predicting proprioceptive cortical anatomy and neural coding with topographic autoencoders
Proprioception is one of the least understood senses, yet fundamental for the control of movement. Even basic questions of how limb pose is represented in the somatosensory cortex are unclear. We developed a topographic variational autoencoder with lateral connectivity (topo-VAE) to compute a putative cortical map from a large set of natural movement data. Although not fitted to neural data, our model reproduces two sets of observations from monkey centre-out reaching: 1. the shape and velocity dependence of proprioceptive receptive fields in hand-centered coordinates, despite the model having no knowledge of arm kinematics or hand coordinate systems; 2. the distribution of neuronal preferred directions (PDs) recorded from multi-electrode arrays. The model makes several testable predictions: 1. encoding across the cortex has a blob-and-pinwheel-type geometry of PDs; 2. few neurons will encode just a single joint. Topo-VAE provides a principled basis for understanding sensorimotor representations and the theoretical basis of neural manifolds, with applications to the restoration of sensory feedback in brain-computer interfaces and the control of humanoid robots.
Shafti SA, Haar Millo S, Mio Zaldivar R, et al., 2021, Playing the piano with a robotic third thumb: Assessing constraints of human augmentation, Scientific Reports, Vol: 11, Pages: 1-14, ISSN: 2045-2322
Contemporary robotics gives us mechatronic capabilities for augmenting human bodies with extra limbs. However, how our motor control capabilities pose limits on such augmentation is an open question. We developed a Supernumerary Robotic 3rd Thumb (SR3T) with two degrees of freedom controlled by the user’s body to endow them with an extra contralateral thumb on the hand. We demonstrate that a pianist can learn to play the piano with 11 fingers within an hour. We then evaluate 6 naïve and 6 experienced piano players in their prior motor coordination and their capability in piano playing with the robotic augmentation. We show that individuals’ augmented performance with the SR3T could be explained by our new custom motor coordination assessment, the Human Augmentation Motor Coordination Assessment (HAMCA), performed pre-augmentation. Our work demonstrates how supernumerary robotics can augment humans in skilled tasks and that individual differences in their augmentation capability are explainable by their individual motor coordination abilities.
Maimon-Mor RO, Schone HR, Henderson Slater D, et al., 2021, Early life experience sets hard limits on motor learning as evidenced from artificial arm use, eLife, Vol: 10, ISSN: 2050-084X
Parkin B, Daws R, Das Neves I, et al., 2021, Dissociable effects of age and Parkinson's disease on instruction based learning, Brain Communications, Vol: 3, ISSN: 2632-1297
The cognitive deficits associated with Parkinson’s disease vary across individuals and change across time, with implications for prognosis and treatment. Key outstanding challenges are to define the distinct behavioural characteristics of this disorder and to develop diagnostic paradigms that can assess these sensitively in individuals. In a previous study, we measured different aspects of attentional control in Parkinson’s disease using an established fMRI switching paradigm. We observed no deficits for the aspects of attention the task was designed to examine; instead, those with Parkinson’s disease learnt the operational requirements of the task more slowly. We hypothesized that a subset of people with early-to-mid-stage Parkinson’s might be impaired when encoding rules for performing new tasks. Here, we directly test this hypothesis and investigate whether deficits in instruction-based learning represent a characteristic of Parkinson’s disease. Seventeen participants with Parkinson’s disease (8 male; mean age: 61.2 years), 18 older adults (8 male; mean age: 61.3 years) and 20 younger adults (10 male; mean age: 26.7 years) undertook a simple instruction-based learning paradigm in the MRI scanner. They sorted sequences of coloured shapes according to binary discrimination rules that were updated at two-minute intervals. Unlike common reinforcement learning tasks, the rules were unambiguous, being explicitly presented; consequently, there was no requirement to monitor feedback or estimate contingencies. Despite its simplicity, a third of the Parkinson’s group, but only one older adult, showed marked increases in errors, 4 SD greater than the worst-performing young adult. The pattern of errors was consistent, reflecting a tendency to misbind discrimination rules. The misbinding behaviour was coupled with reduced frontal, parietal and anterior caudate activity when rules were being encoded, but not when attention was initially o
Ortega P, Faisal A, 2021, Deep Learning multimodal fNIRS & EEG signals for bimanual grip force decoding, Journal of Neural Engineering, Vol: 18, Pages: 1-21, ISSN: 1741-2560
Objective: Non-invasive BMIs offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would enable BMI users to perform a greater range of interactions. We here investigate the decoding of hand-specific forces. Approach: We maximise cortical information by using EEG and fNIRS and developing a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles, on which we trained and tested our deep-learning and linear decoders. Main results: The use of EEG and fNIRS improved the decoding of bimanual force, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, force detection was hand-specific and better for the dominant right hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study of cnnatt revealed that forces from each hand were differently encoded at the cortical level. cnnatt also revealed traces of cortical activity being modulated by the level of force, which was not previously found using linear models. Significance: Our results can be applied to avoid hand cross-talk during hand force decoding and so increase the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which is valuable during motor rehabilitation assessment.
Festor P, Habil I, Jia Y, et al., 2021, Levels of Autonomy & Safety Assurance for AI-based Clinical Decision Systems, WAISE 2021 : 4th International Workshop on Artificial Intelligence Safety Engineering
Maimon-Mor RO, Schone HR, Henderson Slater D, et al., 2021, Author response: Early life experience sets hard limits on motor learning as evidenced from artificial arm use
Festor P, Luise G, Komorowski M, et al., 2021, Enabling risk-aware Reinforcement Learning for medical interventions through uncertainty decomposition, ICML2021 workshop on Interpretable Machine Learning in Healthcare
Reinforcement Learning (RL) is emerging as a tool for tackling complex control and decision-making problems. However, in high-risk environments such as healthcare, manufacturing, automotive or aerospace, it is often challenging to bridge the gap between an apparently optimal policy learned by an agent and its real-world deployment, due to the uncertainties and risk associated with it. Broadly speaking, RL agents face two kinds of uncertainty: 1. aleatoric uncertainty, which reflects randomness or noise in the dynamics of the world, and 2. epistemic uncertainty, which reflects the bounded knowledge of the agent due to model limitations and the finite amount of information/data the agent has acquired about the world. These two types of uncertainty carry fundamentally different implications for the evaluation of performance and the level of risk or trust. Yet these aleatoric and epistemic uncertainties are generally confounded, as standard and even distributional RL is agnostic to this difference. Here we propose how a distributional approach (UA-DQN) can be recast to render uncertainties by decomposing the net effects of each uncertainty. We demonstrate the operation of this method in grid-world examples to build intuition and then show a proof-of-concept application for an RL agent operating as a clinical decision support system in critical care.
Stout D, Chaminade T, Apel J, et al., 2021, The measurement, evolution, and neural representation of action grammars of human behavior, Scientific Reports, Vol: 11, Pages: 1-13, ISSN: 2045-2322
Human behaviors from toolmaking to language are thought to rely on a uniquely evolved capacity for hierarchical action sequencing. Testing this idea will require objective, generalizable methods for measuring the structural complexity of real-world behavior. Here we present a data-driven approach for extracting action grammars from basic ethograms, exemplified with respect to the evolutionarily relevant behavior of stone toolmaking. We analyzed sequences from the experimental replication of ~ 2.5 Mya Oldowan vs. ~ 0.5 Mya Acheulean tools, finding that, while using the same “alphabet” of elementary actions, Acheulean sequences are quantifiably more complex and Oldowan grammars are a subset of Acheulean grammars. We illustrate the utility of our complexity measures by re-analyzing data from an fMRI study of stone toolmaking to identify brain responses to structural complexity. Beyond specific implications regarding the co-evolution of language and technology, this exercise illustrates the general applicability of our method to investigate naturalistic human behavior and cognition.
Khwaja M, Pieritz S, Faisal AA, et al., 2021, Personality and engagement with digital mental health interventions, Pages: 235-239
Personalisation is key to creating successful digital health applications. Recent evidence links personality to preference for digital experience, suggesting that psychometric traits can be a promising basis for personalisation of digital mental health services. However, there is still little quantitative evidence from actual app usage. In this study, we explore how different personality types engage with different intervention content in a commercial mental health application. Specifically, we collected the Big Five personality traits alongside the app usage data of 126 participants using a mobile mental health app for seven days. We found that personality traits significantly correlate with engagement and user ratings of different intervention content. These findings open a promising research avenue that can inform the personalised delivery of digital mental health content and the creation of recommender systems, ultimately improving the effectiveness of mental health interventions.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.