Imperial College London

Professor Aldo Faisal

Faculty of Engineering, Department of Bioengineering

Professor of AI & Neuroscience
 
 
 

Contact

 

+44 (0)20 7594 6373 · a.faisal · Website

 
 

Assistant

 

Miss Teresa Ng +44 (0)20 7594 8300

 

Location

 

4.08, Royal School of Mines, South Kensington Campus



Publications

Citation

BibTeX format

@inproceedings{Subramanian:2021:10.1109/NER49283.2021.9441218,
author = {Subramanian, M and Park, S and Orlov, P and Shafti, A and Faisal, A},
doi = {10.1109/NER49283.2021.9441218},
publisher = {IEEE},
title = {Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform},
url = {http://dx.doi.org/10.1109/NER49283.2021.9441218},
year = {2021}
}

RIS format (EndNote, RefMan)

TY  - CPAPER
AB - We have pioneered the Where-You-Look-Is-Where-You-Go approach to controlling mobility platforms by decoding how the user looks at the environment to understand where they want to navigate their mobility device. However, only some natural eye-movements are relevant for action intention decoding, which poses a challenge for decoding, the so-called Midas Touch Problem. Here, we present a new solution, consisting of (1) deep computer vision to understand which object a user is looking at in their field of view, (2) an analysis of where on the object's bounding box the user is looking, and (3) a simple machine learning classifier that determines whether the overt visual attention on the object is predictive of a navigation intention towards that object. Our decoding system ultimately determines whether the user wants to drive to, e.g., a door or just looks at it. Crucially, we find that when users look at an object and imagine they were moving towards it, the resulting eye-movements from this motor imagery (akin to neural interfaces) remain decodable. Once a driving intention, and thus also the target location, is detected, our system instructs our autonomous wheelchair platform, the A. Eye-Drive, to navigate to the desired object while avoiding static and moving obstacles. Thus, for navigation purposes, we have realised a cognitive-level human interface: the user only needs to cognitively interact with the desired goal, rather than continuously steer their wheelchair to the target (low-level human interfacing).
AU - Subramanian,M
AU - Park,S
AU - Orlov,P
AU - Shafti,A
AU - Faisal,A
DO - 10.1109/NER49283.2021.9441218
PB - IEEE
PY - 2021///
TI - Gaze-contingent decoding of human navigation intention on an autonomous wheelchair platform
UR - http://dx.doi.org/10.1109/NER49283.2021.9441218
UR - https://ieeexplore.ieee.org/document/9441218
UR - http://hdl.handle.net/10044/1/87612
ER -
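
The abstract above outlines a three-step decoding pipeline: an object detector identifies what the user is looking at, gaze samples are analysed relative to that object's bounding box, and a simple classifier decides whether the fixation pattern reflects a navigation intention. The Python sketch below illustrates only that final gaze-on-bounding-box classification step; it is not the authors' code, and the feature set, the synthetic gaze traces, and the choice of a logistic-regression classifier are illustrative assumptions, not details from the paper.

# Minimal sketch (not the paper's implementation) of step (3): classifying
# gaze-on-bounding-box features as "navigation intention" vs "just looking".
import numpy as np
from sklearn.linear_model import LogisticRegression

def gaze_features(gaze_xy, box):
    """Summarise a gaze trace relative to a detected object's bounding box.

    gaze_xy : (N, 2) array of gaze points in image coordinates.
    box     : (x_min, y_min, x_max, y_max) of the detected object.
    """
    x0, y0, x1, y1 = box
    # Express gaze points in box-normalised coordinates (0..1 inside the box).
    norm = (gaze_xy - np.array([x0, y0])) / np.array([x1 - x0, y1 - y0])
    inside = ((norm >= 0) & (norm <= 1)).all(axis=1)
    return np.array([
        inside.mean(),      # fraction of samples landing on the object
        norm[:, 0].mean(),  # mean horizontal position relative to the box
        norm[:, 1].mean(),  # mean vertical position relative to the box
        norm[:, 0].std(),   # horizontal dispersion of the gaze trace
        norm[:, 1].std(),   # vertical dispersion of the gaze trace
    ])

# Synthetic training data (an assumption for illustration): tight dwells are
# labelled "intend to drive there" (1), broad scatter is "just looking" (0).
rng = np.random.default_rng(0)
box = (100, 50, 300, 250)
X, y = [], []
for label, spread in [(1, 10.0), (0, 80.0)]:
    for _ in range(50):
        centre = rng.uniform([110, 60], [290, 240])
        trace = centre + rng.normal(scale=spread, size=(60, 2))
        X.append(gaze_features(trace, box))
        y.append(label)

clf = LogisticRegression().fit(np.array(X), np.array(y))

# At run time: decide whether a new gaze trace on a detected object signals a
# navigation intention, before handing the goal to the wheelchair's planner.
new_trace = np.array([210.0, 140.0]) + rng.normal(scale=8.0, size=(60, 2))
print("navigation intention?", bool(clf.predict([gaze_features(new_trace, box)])[0]))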