Haar S, van Assel CM, Faisal AA, 2019, Neurobehavioural signatures of learning that emerge in a real-world motor skill task
The behavioral and neural processes of real-world motor learning remain largely unknown. We demonstrate the feasibility of human neuroscience in-the-wild using wearables for naturalistic full-body motion tracking and mobile brain imaging to study motor learning in billiards. Motor learning in-the-wild displays well-known features of reductionistic toy-tasks, such as multiple learning rates, generalization, and the relationship between motor variability and motor learning. However, we find that real-world motor learning affects the whole body, changing motor control from head to toe. Moreover, we discovered two groups of learners, not resolved before in toy-tasks. EEG dynamics of post-movement Beta rebound (PMBR) increase in the first but decrease in the second. Behaviorally, only the second group controls task-relevant variability dynamically and learns faster. We speculate that these groups emerge because subjects must combine multi-modal mechanisms of de-novo motor controller learning and motor adaptation in new ways when faced with the complexity of the real-world.
Gottesman O, Johansson F, Komorowski M, et al., 2019, Guidelines for reinforcement learning in healthcare, Nature Medicine, Vol: 25, Pages: 16-18, ISSN: 1078-8956
In this Comment, we provide guidelines for reinforcement learning for decisions about patient treatment that we hope will accelerate the rate at which observational cohorts can inform healthcare practice in a safe, risk-conscious manner.
Peng X, Ding Y, Wihl D, et al., 2018, Improving Sepsis Treatment Strategies by Combining Deep and Kernel-Based Reinforcement Learning., AMIA 2018 Annual Symposium, Pages: 887-896
Sepsis is the leading cause of mortality in the ICU. It is challenging to manage because individual patients respond differently to treatment. Thus, tailoring treatment to the individual patient is essential for the best outcomes. In this paper, we take steps toward this goal by applying a mixture-of-experts framework to personalize sepsis treatment. The mixture model selectively alternates between neighbor-based (kernel) and deep reinforcement learning (DRL) experts depending on the patient's current history. On a large retrospective cohort, this mixture-based approach outperforms the physician, kernel-only, and DRL-only experts.
Liu Y, Gottesman O, Raghu A, et al., 2018, Representation Balancing MDPs for Off-Policy Policy Evaluation, Thirty-second Annual Conference on Neural Information Processing Systems (NIPS)
We study the problem of off-policy policy evaluation (OPPE) in RL. In contrast to prior work, we consider how to estimate both the individual policy value and average policy value accurately. We draw inspiration from recent work in causal reasoning, and propose a new finite sample generalization error bound for value estimates from MDP models. Using this upper bound as an objective, we develop a learning algorithm of an MDP model with a balanced representation, and show that our approach can yield substantially lower MSE in a common synthetic domain and on a challenging real-world sepsis management problem.
Parbhoo S, Gottesman O, Ross AS, et al., 2018, Improving counterfactual reasoning with kernelised dynamic mixing models, PLoS ONE, Vol: 13, ISSN: 1932-6203
Simulation-based approaches to disease progression allow us to make counterfactual predictions about the effects of an untried series of treatment choices. However, building accurate simulators of disease progression is challenging, limiting the utility of these approaches for real world treatment planning. In this work, we present a novel simulation-based reinforcement learning approach that mixes between models and kernel-based approaches to make its forward predictions. On two real world tasks, managing sepsis and treating HIV, we demonstrate that our approach both learns state-of-the-art treatment policies and can make accurate forward predictions about the effects of treatments on unseen patients.
Komorowski M, Celi LA, Badawi O, et al., 2018, The Artificial Intelligence Clinician learns optimal treatment strategies for sepsis in intensive care, Nature Medicine, ISSN: 1078-8956
Sepsis is the third leading cause of death worldwide and the main cause of mortality in hospitals [1–3], but the best treatment strategy remains uncertain. In particular, evidence suggests that current practices in the administration of intravenous fluids and vasopressors are suboptimal and likely induce harm in a proportion of patients [1,4–6]. To tackle this sequential decision-making problem, we developed a reinforcement learning agent, the artificial intelligence (AI) Clinician, which learns from data to predict patient dynamics given specific treatment decisions. Our agent extracted implicit knowledge from an amount of patient data that exceeds many-fold the life-time experience of human clinicians and learned optimal treatment by having analysed myriads of (mostly sub-optimal) treatment decisions. We demonstrate that the value of the AI Clinician's selected treatment is on average reliably higher than that of the human clinicians. In a large validation cohort independent from the training data, mortality was lowest in patients for whom the clinicians' actual doses matched the AI policy. Our model provides individualized and clinically interpretable treatment decisions for sepsis that could improve patient outcomes.
Ortega San Miguel P, Colas C, Faisal A, 2018, Compact convolutional neural networks for multi-class, personalised, Closed-loop EEG-BCI, 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics (BioRob 2018), Publisher: IEEE
For many people suffering from motor disabilities, assistive devices controlled with only brain activity are the only way to interact with their environment. Natural tasks often require different kinds of interactions, involving different controllers the user should be able to select in a self-paced way. We developed a Brain-Computer Interface (BCI) allowing users to switch between four control modes in a self-paced way in real-time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (CNNs), known for their ability to find the optimal features in classification tasks. We tested our system using the Cybathlon BCI computer game, which embodies all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), with only one convolutional layer, can classify 4 mental activities chosen by the user. The BCI system is run and validated online. It is kept up-to-date through the use of newly collected signals along playing, reaching an online accuracy of 47.6%, where most approaches only report results obtained offline. We found that models trained with data collected online better predicted the behaviour of the system in real-time. This suggests that similar (CNN-based) offline classifying methods found in the literature might experience a drop in performance when applied online. Compared to our previous decoder of physiological signals relying on blinks, we increased by a factor of 2 the number of states among which the user can transit, bringing the opportunity for finer control of specific subtasks composing natural grasping in a self-paced way. Our results are comparable to those shown at the Cybathlon's BCI Race but further imp
Woods B, Subramanian M, Shafti A, et al., 2018, Mechanomyography based closed-loop functional electrical stimulation cycling system, 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob), Publisher: IEEE, Pages: 179-184, ISSN: 2155-1774
Functional Electrical Stimulation (FES) systems are successful in restoring motor function and supporting paralyzed users. Commercially available FES products are open loop, meaning that the system is unable to adapt to changing conditions in the user and their muscles, which results in muscle fatigue and poor stimulation protocols. This is because it is difficult to close the loop between stimulation and monitoring of muscle contraction using adaptive stimulation. FES causes electrical artefacts which make it challenging to monitor muscle contractions with traditional methods such as electromyography (EMG). We look to overcome this limitation by combining FES with novel mechanomyographic (MMG) sensors to be able to monitor muscle activity during stimulation in real time. To provide a meaningful task we built an FES cycling rig with a software interface that enabled us to perform adaptive recording and stimulation, and then combined this with sensors to record forces applied to the pedals using force-sensitive resistors (FSRs); crank angle position using a magnetic incremental encoder; and inputs from the user using switches and a potentiometer. We illustrated this with a closed-loop stimulation algorithm that used the inputs from the sensors to control the output of a programmable RehaStim 1 FES stimulator (Hasomed) in real-time. This recumbent bicycle rig was used as a testing platform for FES cycling. The algorithm was designed to respond to a change in requested speed (RPM) from the user and change the stimulation power (% of maximum current in mA) until this speed was achieved, and then maintain it.
Cunningham J, Hapsari A, Guilleminot P, et al., 2018, The Supernumerary Robotic 3rd Thumb for Skilled Music Tasks
Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting them in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span; thus leading to functional augmentation of their skills and the technical feasibility to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof of concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11 finger play within a few hours.
Maymó MR, Shafti A, Faisal AA, 2018, Fast Orient: Lightweight Computer Vision for Wrist Control in Assistive Robotic Grasping
Wearable and Assistive robotics for human grasp support are broadly either tele-operated robotic arms or act through orthotic control of a paralyzed user's hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control the many degrees of freedom making effective or efficient control problematic. Here we are demonstrating the off-loading of low-level control of assistive robotics and active orthotics, through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting the images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation needed that optimises grasping is directly computed from the object's orientation. The algorithm has been evaluated in 6 different scene backgrounds and end-effector approaches to 26 different objects. 94.8% of the objects were detected in all backgrounds. Grasping of the object was achieved in 91.1% of the cases and has been evaluated with a robot simulator confirming the performance of the algorithm.
Lin C-H, Faisal AA, 2018, Decomposing sensorimotor variability changes in ageing and their connection to falls in older people, Scientific Reports, Vol: 8, ISSN: 2045-2322
The relationship between sensorimotor variability and falls in older people has not been well investigated. We developed a novel task sharing the biomechanics of obstacle negotiation to quantify sensorimotor variability related to locomotion across age. We found that sensorimotor variability in foot placement increases continuously with age. We then applied sensory psychophysics to pinpoint the visual and somatosensory systems associated with sensorimotor variability. We showed that increased sensory variability, specifically increased proprioceptive variability, is the vital cause of more variable foot placement in older people (greater than 65 years). Notably, older participants relied more on vision to judge their own foot's height compared to the young, suggesting a shift in multisensory integration strategy to compensate for degenerated proprioception. We further modelled the probability of tripping over based on the relationship between sensorimotor variability and age, and found a correspondence between model prediction and community-based data. We reveal increased sensorimotor variability, modulated by sensation precision, as a potentially vital mechanism of raised tripping-over and thus fall events in older people. Analysis of sensorimotor variability and its specific components may be useful for evaluating fall risk and rehabilitation targets.
Subramanian M, Shafti A, Faisal A, Mechanomyography based closed-loop Functional Electrical Stimulation cycling system, BioRob 2018 - IEEE International Conference on Biomedical Robotics and Biomechatronics
Auepanwiriyakul C, Harston A, Orlov P, et al., 2018, Semantic Fovea: Real-time annotation of ego-centric videos with gaze context
Visual context plays a crucial role in understanding human visual attention in natural, unconstrained tasks - the objects we look at during everyday tasks provide an indicator of our ongoing attention. Collection, interpretation, and study of visual behaviour in unconstrained environments is therefore necessary, but presents many challenges, requiring painstaking hand-coding. Here we demonstrate a proof-of-concept system that enables real-time annotation of objects in an egocentric video stream from head-mounted eye-tracking glasses. We concurrently obtain a live stream of user gaze vectors with respect to their own visual field. Even during dynamic, fast-paced interactions, our system was able to recognise all objects in the user's field-of-view with moderate accuracy. To validate our concept, our system was used to annotate an in-lab breakfast scenario in real time.
Orlov P, Shafti A, Auepanwiriyakul C, et al., 2018, A Gaze-contingent Intention Decoding Engine for human augmentation
Humans process high volumes of visual information to perform everyday tasks. In a reaching task, the brain estimates the distance and position of the object of interest, to reach for it. Having a grasp intention in mind, human eye-movements produce specific relevant patterns. Our Gaze-Contingent Intention Decoding Engine uses eye-movement data and gaze-point position to indicate the hidden intention. We detect the object of interest using deep convolutional neural networks and estimate its position in a physical space using 3D gaze vectors. Then we trigger the possible actions from an action grammar database to perform an assistive movement of the robotic arm, improving action performance in physically disabled people. This document is a short report to accompany the Gaze-Contingent Intention Decoding Engine demonstrator, providing details of the setup used and results obtained.
Ruiz Maymo M, Shafti A, Faisal AA, FastOrient: lightweight computer vision for wrist control in assistive robotic grasping, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE
Wearable and Assistive robotics for human grasp support are broadly either tele-operated robotic arms or act through orthotic control of a paralyzed user's hand. Such devices require correct orientation for successful and efficient grasping. In many human-robot assistive settings, the end-user is required to explicitly control the many degrees of freedom making effective or efficient control problematic. Here we are demonstrating the off-loading of low-level control of assistive robotics and active orthotics, through automatic end-effector orientation control for grasping. This paper describes a compact algorithm implementing fast computer vision techniques to obtain the orientation of the target object to be grasped, by segmenting the images acquired with a camera positioned on top of the end-effector of the robotic device. The rotation needed that optimises grasping is directly computed from the object's orientation. The algorithm has been evaluated in 6 different scene backgrounds and end-effector approaches to 26 different objects. 94.8% of the objects were detected in all backgrounds. Grasping of the object was achieved in 91.1% of the cases and has been evaluated with a robot simulator confirming the performance of the algorithm.
Cunningham J, Hapsari A, Guilleminot P, et al., The supernumerary robotic 3rd thumb for skilled music tasks, The 7th IEEE RAS/EMBS International Conference on Biomedical Robotics and Biomechatronics, Publisher: IEEE
Wearable robotics bring the opportunity to augment human capability and performance, be it through prosthetics, exoskeletons, or supernumerary robotic limbs. The latter concept allows enhancing human performance and assisting them in daily tasks. An important research question is, however, whether the use of such devices can lead to their eventual cognitive embodiment, allowing the user to adapt to them and use them seamlessly as any other limb of their own. This paper describes the creation of a platform to investigate this. Our supernumerary robotic 3rd thumb was created to augment piano playing, allowing a pianist to press piano keys beyond their natural hand-span; thus leading to functional augmentation of their skills and the technical feasibility to play with 11 fingers. The robotic finger employs sensors, motors, and a human interfacing algorithm to control its movement in real-time. A proof of concept validation experiment has been conducted to show the effectiveness of the robotic finger in playing musical pieces on a grand piano, showing that naive users were able to use it for 11 finger play within a few hours.
Lluch Hernandez A, Hernando Melia C, 2018, Foreword, Publisher: FUTURE MEDICINE LTD, ISSN: 1479-6694
Ortega P, Colas C, Faisal A, 2018, Convolutional neural network, personalised, closed-loop Brain-Computer Interfaces for multi-way control mode switching in real-time, Publisher: Cold Spring Harbor Laboratory
Exoskeletons and robotic devices are for many motor disabled people the only way to interact with their environment. Our lab previously developed a gaze-guided assistive robotic system for grasping. It is well known that the same natural task can require different interactions described by different dynamical systems, which would require different robotic controllers and their selection by the user in a self-paced way. Therefore, we investigated different ways to achieve transitions between multiple states, finding that eye blinks were the most reliable to transition from 'off' to 'control' modes (binary classification) compared to voice and electromyography. In this paper we expanded on this work by investigating brain signals as sources for control mode switching. We developed a Brain-Computer Interface (BCI) that allows users to switch between four control modes in a self-paced way in real time. Since the system is devised to be used in domestic environments in a user-friendly way, we selected non-invasive electroencephalographic (EEG) signals and convolutional neural networks (ConvNets), known for their capability to find the optimal features for a classification task, which we hypothesised would add flexibility to the system in terms of which mental activities the user could perform to control it. We tested our system using the Cybathlon BrainRunners computer game, which represents all the challenges inherent to real-time control. Our preliminary results show that an efficient architecture (SmallNet), composed of a convolutional layer, a fully connected layer and a sigmoid classification layer, is able to classify 4 mental activities that the user chose to perform. For the user's preferred mental activities, we ran and validated the system online and retrained the system using online-collected EEG data. We achieved 47.6% accuracy in online operation in the 4-way classification task. In particular we found that models trained with online collec
Ponferrada EG, Sylaidi A, Aldo Faisal A, 2018, Data-efficient motor imagery decoding in real-time for the cybathlon brain-computer interface race, Pages: 21-32
Neuromotor diseases such as Amyotrophic Lateral Sclerosis or Multiple Sclerosis affect millions of people throughout the globe by obstructing body movement and thereby any instrumental interaction with the world. Brain-Computer Interfaces (BCIs) hold the promise of re-routing signals around the damaged parts of the nervous system to restore control. However, the field still faces open challenges in training and practical implementation for real-time usage, which hampers its impact on patients. The Cybathlon Brain-Computer Interface Race promotes the development of practical BCIs to facilitate clinical adoption. In this work we present a competitive and data-efficient BCI system to control the Cybathlon video game using motor imageries. The platform achieves substantial performance while requiring a relatively small amount of training data, thereby accelerating the training phase. We employ a static band-pass filter and Common Spatial Patterns learnt using supervised machine learning techniques to enable the discrimination between different motor imageries. Log-variance features are extracted from the spatio-temporally filtered EEG signals to fit a Logistic Regression classifier, obtaining satisfying levels of decoding accuracy. The system's performance is evaluated online, on the first version of the Cybathlon Brain Runners game, controlling 3 commands with up to 60.03% accuracy using a two-step hierarchical classifier.
Xiloyannis M, Gavriel C, Thomik AA, et al., 2017, Gaussian Process Autoregression for Simultaneous Proportional Multi-Modal Prosthetic Control with Natural Hand Kinematics, IEEE Transactions on Neural Systems and Rehabilitation Engineering, Vol: 25, Pages: 1785-1801, ISSN: 1558-0210
Matching the dexterity, versatility and robustness of the human hand is still an unachieved goal in bionics, robotics and neural engineering. A major limitation for hand prosthetics lies in the challenges of reliably decoding user intention from muscle signals when controlling complex robotic hands. Most of the commercially available prosthetic hands use muscle-related signals to decode a finite number of predefined motions and some offer proportional control of open/close movements of the whole hand. Here, in contrast, we aim to offer users flexible control of individual joints of their artificial hand. We propose a novel framework for decoding neural information that enables a user to independently control 11 joints of the hand in a continuous manner - much like we control our natural hands. Towards this end, we instructed 6 able-bodied subjects to perform everyday object manipulation tasks combining both dynamic, free movements (e.g. grasping) and isometric force tasks (e.g. squeezing). We recorded the electromyographic (EMG) and mechanomyographic (MMG) activities of 5 extrinsic muscles of the hand in the forearm, while simultaneously monitoring 11 joints of hand and fingers using a sensorised data glove that tracked the joints of the hand. Instead of learning just a direct mapping from current muscle activity to intended hand movement, we formulated a novel autoregressive approach that combines the context of previous hand movements with instantaneous muscle activity to predict future hand movements. Specifically, we evaluated a linear Vector AutoRegressive Moving Average model with Exogenous inputs (VARMAX) and a novel Gaussian Process (GP) autoregressive framework to learn the continuous mapping from hand joint dynamics and muscle activity to decode intended hand movement. Our GP approach achieves high levels of performance (RMSE of 8°/s and ρ = 0.79). Crucially, we use a small set of sensors that allows us to control a larger set of independently actuated degrees of freedom of a hand. Thi
Iyer R, Ungless M, Faisal AA, 2017, Calcium-activated SK channels control firing regularity by modulating sodium channel availability in midbrain dopamine neurons, Scientific Reports, Vol: 2017, ISSN: 2045-2322
Dopamine neurons in the substantia nigra pars compacta and ventral tegmental area regulate behaviours such as reward-related learning, and motor control. Dysfunction of these neurons is implicated in Schizophrenia, addiction to drugs, and Parkinson’s disease. While some dopamine neurons fire single spikes at regular intervals, others fire irregular single spikes interspersed with bursts. Pharmacological inhibition of calcium-activated potassium (SK) channels increases the variability in their firing pattern, sometimes also increasing the number of spikes fired in bursts, indicating that SK channels play an important role in maintaining dopamine neuron firing regularity and burst firing. However, the exact mechanisms underlying these effects are still unclear. Here, we develop a biophysical model of a dopamine neuron incorporating ion channel stochasticity that enabled the analysis of availability of ion channels in multiple states during spiking. We find that decreased firing regularity is primarily due to a significant decrease in the AHP that in turn resulted in a reduction in the fraction of available voltage-gated sodium channels due to insufficient recovery from inactivation. Our model further predicts that inhibition of SK channels results in a depolarisation of action potential threshold along with an increase in its variability.
Maimon-Mor RO, Fernandez-Quesada J, Zito GA, et al., Towards free 3D end-point control for robotic-assisted human reaching using binocular eye tracking, 15th IEEE Conference on Rehabilitation Robotics (ICORR 2017), Publisher: IEEE
Eye-movements are the only directly observable behavioural signals that are highly correlated with actions at the task level, and proactive of body movements, and thus reflect action intentions. Moreover, eye movements are preserved in many movement disorders leading to paralysis (or amputees) from stroke, spinal cord injury, Parkinson's disease, multiple sclerosis, and muscular dystrophy among others. Despite this benefit, eye tracking is not widely used as a control interface for robotic interfaces in movement impaired patients due to poor human-robot interfaces. We demonstrate here how combining 3D gaze tracking using our GT3D binocular eye tracker with a custom designed 3D head tracking system and calibration method enables continuous 3D end-point control of a robotic arm support system. The users can move their own hand to any location of the workspace by simply looking at the target and winking once. This purely eye tracking based system enables the end-user to retain free head movement and yet achieves high spatial end point accuracy in the order of 6 cm RMSE error in each dimension and standard deviation of 4 cm. 3D calibration is achieved by moving the robot along a 3-dimensional space-filling Peano curve while the user is tracking it with their eyes. This results in a fully automated calibration procedure that yields several thousand calibration points versus standard approaches using a dozen points, resulting in beyond state-of-the-art 3D accuracy and precision.
Noronha B, Dziemian S, Zito GA, et al., "Wink to grasp" - comparing Eye, Voice & EMG gesture control of grasp with soft-robotic gloves, IEEE Conference on Rehabilitation Robotics (ICORR 2017), Publisher: IEEE
The ability of robotic rehabilitation devices to support paralysed end-users is ultimately limited by the degree to which human-machine-interaction is designed to be effective and efficient in translating user intention into robotic action. Specifically, we evaluate the novel possibility of binocular eye-tracking technology to distinguish voluntary winks from involuntary blink commands, to establish winks as a novel low-latency control signal to trigger robotic action. By wearing binocular eye-tracking glasses we enable users to directly observe their environment or the actuator and trigger movement actions, without having to interact with a visual display unit or user interface. We compare our novel approach to two conventional approaches for controlling robotic devices based on electromyography (EMG) and speech-based human-computer interaction technology. We present an integrated software framework based on ROS that allows transparent integration of these multiple modalities with a robotic system. We use a soft-robotic SEM glove (Bioservo Technologies AB, Sweden) to evaluate how the 3 modalities support the performance and subjective experience of the end-user when movement assisted. All 3 modalities are evaluated in streaming, closed-loop control operation for grasping physical objects. We find that wink control shows the lowest error rate mean with the lowest standard deviation (0.23±0.07, mean±SEM), followed by speech control (0.35±0.13) and EMG gesture control (using the Myo armband by Thalmic Labs), with the highest mean and standard deviation (0.46±0.16). We conclude that our novel eye-tracking-based approach to controlling assistive technologies is a well-suited alternative to conventional approaches, especially when combined with 3D eye-tracking based robotic end-point control.
Kotti M, Duffell LD, Faisal AA, et al., 2017, Detecting knee osteoarthritis and its discriminating parameters using random forests, Medical Engineering and Physics, Vol: 43, Pages: 19-29, ISSN: 1350-4533
This paper tackles the problem of automatic detection of knee osteoarthritis. A computer system is built that takes as input the body kinetics and produces as output not only an estimation of the presence of knee osteoarthritis, as previously done in the literature, but also the most discriminating parameters along with a set of rules on how this decision was reached. This fills the gap of interpretability between the medical and the engineering approaches. We collected locomotion data from 47 subjects with knee osteoarthritis and 47 healthy subjects. Osteoarthritis subjects were recruited from hospital clinics and GP surgeries, and age- and sex-matched healthy subjects from the local community. Subjects walked on a walkway equipped with two force plates with piezoelectric 3-component force sensors. Parameters of the vertical, anterior-posterior, and medio-lateral ground reaction forces, such as mean value, push-off time, and slope, were extracted. Then random forest regressors map those parameters via rule induction to the degree of knee osteoarthritis. To boost generalisation ability, a subject-independent protocol is employed. The 5-fold cross-validated accuracy is 72.61% ± 4.24%. We show that with 3 steps or less a reliable clinical measure can be extracted in a rule-based approach when the dataset is analysed appropriately.
Makin TR, De Vignemont F, Faisal AA, 2017, Neurocognitive barriers to the embodiment of technology, Nature Biomedical Engineering, Vol: 1, ISSN: 2157-846X
Pedotti A, Azevedo L, Faisal A, 2017, Foreword, Pages: VII-VIII
Faisal AA, Neishabouri A, 2016, Fundamental Constraints on the Evolution of Neurons, The Wiley-Blackwell Handbook of Evolutionary Neuroscience, Pages: 153-172, ISBN: 9781119994695
This chapter focuses on two fundamental constraints that apply to any form of information processing system, be it a cell, a brain or a computer: Noise (random variability) and Energy (metabolic demand). It shows how these two constraints are fundamentally limited by the basic biophysical properties of the brain's building blocks (protein, fats, and salty water) and link nervous system structure to function. The understanding of the interdependence of information and energy has profoundly influenced the development of efficient telecommunication systems and computers. Noise diminishes the capacity to receive, process, and direct information, the key tasks of the brain. Investing in the brain's design can reduce the effects of noise, but this investment often increases energetic requirements, which is likely to be evolutionarily unfavourable. The stochasticity of the system becomes critical when its inherent randomness makes it operationally infeasible, that is, when random action potentials (APs) become as common as evoked APs.
Makin T, de Vignemont F, Faisal AA, Neurocognitive considerations to the embodiment of technology, Nature Biomedical Engineering, ISSN: 2157-846X
By exploiting robotics and information technology, teams of biomedical engineers are enhancing human sensory and motor abilities. Such augmentation technology ― to be worn, implanted or ingested ― aims to both restore and improve existing human capabilities (such as faster running, via exoskeletons), and to add new ones (for example, a ‘radar sense’). The development of augmentation technology is driven by rapid advances in human–machine interfaces, energy storage and mobile computing. Although engineers are embracing body augmentation from a technical perspective, little attention has been devoted to how the human brain might support such technological innovation. In this Comment, we highlight expected neurocognitive bottlenecks imposed by brain plasticity, adaptation and learning that could impact the design and performance of sensory and motor augmentation technology. We call for further consideration of how human–machine integration can be best achieved.
Corrales-Carvajal VM, Faisal AA, Ribeiro C, 2016, Internal states drive nutrient homeostasis by modulating exploration-exploitation trade-off, eLife, Vol: 5, ISSN: 2050-084X