Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience

Contact

 

+44 (0)20 7594 6373
a.faisal
Website


Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300


Location

 

4.08, Royal School of Mines, South Kensington Campus


Summary

 

Publications

Citation

BibTeX format

@article{Ortega:2021:1741-2552/ac1ab3,
author = {Ortega, P and Faisal, A},
doi = {10.1088/1741-2552/ac1ab3},
journal = {Journal of Neural Engineering},
pages = {1--21},
title = {Deep Learning multimodal fNIRS \& EEG signals for bimanual grip force decoding},
url = {http://dx.doi.org/10.1088/1741-2552/ac1ab3},
volume = {18},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - JOUR
AB - Objective: Non-invasive BMIs offer an alternative, safe and accessible way to interact with the environment. To enable meaningful and stable physical interactions, BMIs need to decode forces. Although previously addressed in the unimanual case, controlling forces from both hands would enable BMI users to perform a greater range of interactions. Here we investigate the decoding of hand-specific forces. Approach: We maximise cortical information by using EEG and fNIRS and by developing a deep-learning architecture with attention and residual layers (cnnatt) to improve their fusion. Our task required participants to generate hand-specific force profiles, on which we trained and tested our deep-learning and linear decoders. Main results: The use of EEG and fNIRS improved the decoding of bimanual force, and the deep-learning models outperformed the linear model. In both cases, the greatest gain in performance was due to the detection of force generation. In particular, the detection of forces was hand-specific and better for the dominant right hand, and cnnatt was better at fusing EEG and fNIRS. Consequently, the study of cnnatt revealed that forces from each hand were encoded differently at the cortical level. cnnatt also revealed traces of the cortical activity being modulated by the level of force, which was not previously found using linear models. Significance: Our results can be applied to avoid hand cross-talk during hand force decoding and to increase the robustness of BMI robotic devices. In particular, we improve the fusion of EEG and fNIRS signals and offer hand-specific interpretability of the encoded forces, which is valuable during motor rehabilitation assessment.
AU - Ortega,P
AU - Faisal,A
DO - 10.1088/1741-2552/ac1ab3
EP - 21
PY - 2021///
SN - 1741-2560
SP - 1
TI - Deep Learning multimodal fNIRS & EEG signals for bimanual grip force decoding
T2 - Journal of Neural Engineering
UR - http://dx.doi.org/10.1088/1741-2552/ac1ab3
UR - https://iopscience.iop.org/article/10.1088/1741-2552/ac1ab3/meta
UR - http://hdl.handle.net/10044/1/90937
VL - 18
ER -
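
The abstract above describes a deep-learning decoder (cnnatt) with attention and residual layers that fuses EEG and fNIRS to decode bimanual grip force. The paper's implementation is not reproduced here; the snippet below is only a minimal sketch of that general idea, assuming PyTorch and arbitrary channel counts, layer sizes and fusion scheme, and is not the architecture used in the study.

# Minimal sketch of a multimodal EEG + fNIRS fusion decoder with attention
# and a residual block, loosely following the cnnatt idea from the abstract.
# All layer sizes, channel counts and the fusion scheme are assumptions for
# illustration only, not the architecture used in the paper.
import torch
import torch.nn as nn


class ModalityEncoder(nn.Module):
    """1-D CNN that maps one modality (channels x time) to a feature vector."""
    def __init__(self, in_channels: int, feat_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, feat_dim, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.conv(x).squeeze(-1)  # (batch, feat_dim)


class FusionForceDecoder(nn.Module):
    """Encode EEG and fNIRS separately, fuse them with attention,
    refine with a residual block, and regress left/right grip force."""
    def __init__(self, eeg_channels: int = 32, fnirs_channels: int = 16,
                 feat_dim: int = 64):
        super().__init__()
        self.eeg_enc = ModalityEncoder(eeg_channels, feat_dim)
        self.fnirs_enc = ModalityEncoder(fnirs_channels, feat_dim)
        # Attention over the two modality tokens (EEG, fNIRS).
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.res_block = nn.Sequential(
            nn.Linear(feat_dim, feat_dim), nn.ReLU(), nn.Linear(feat_dim, feat_dim)
        )
        self.head = nn.Linear(feat_dim, 2)  # left- and right-hand force

    def forward(self, eeg: torch.Tensor, fnirs: torch.Tensor) -> torch.Tensor:
        tokens = torch.stack([self.eeg_enc(eeg), self.fnirs_enc(fnirs)], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)  # cross-modal attention
        pooled = fused.mean(dim=1)                    # merge modality tokens
        pooled = pooled + self.res_block(pooled)      # residual connection
        return self.head(pooled)                      # (batch, 2) force values


if __name__ == "__main__":
    # Fake batch: 8 trials, 32 EEG channels and 16 fNIRS channels, 200 samples.
    model = FusionForceDecoder()
    forces = model(torch.randn(8, 32, 200), torch.randn(8, 16, 200))
    print(forces.shape)  # torch.Size([8, 2])

In this sketch each modality is encoded separately, the two feature vectors are treated as tokens for an attention step, and a residual block refines the fused representation before the regression head predicts a force value per hand.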