Brain Computer Interface
A brain-computer interface (BCI) establishes a direct, online communication channel between the brain and a machine, independent of the user's physical abilities, and represents a new way to augment human capabilities. BCIs translate the user's intentions into outputs or actions by means of machine learning techniques. A BCI operates either by presenting a stimulus to the operator and waiting for his or her response (synchronous), or by continuously monitoring the operator's cognitive activity and responding accordingly (asynchronous). BCIs can also be classified as active, reactive or passive.
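The "translation" stage can be made concrete with a minimal sketch. The code below uses synthetic data and a deliberately simple nearest-class-mean classifier (both are illustrative assumptions, not a method from this group's work): short "EEG" epochs from two mental states are reduced to a single power feature, a decision rule is learned from labelled examples, and new epochs are decoded into one of two commands.

```python
import numpy as np

rng = np.random.default_rng(0)

FS = 250      # assumed sampling rate in Hz
EPOCH = FS    # 1-second epochs

def make_epoch(amplitude):
    """Synthetic 'EEG' epoch: a 10 Hz oscillation plus Gaussian noise."""
    t = np.arange(EPOCH) / FS
    return amplitude * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.5, EPOCH)

def feature(epoch):
    """Single feature per epoch: log signal power."""
    return np.log(np.mean(epoch ** 2))

# Two hypothetical mental states differing in oscillation amplitude
# (e.g. rest vs task), 50 labelled training epochs each.
train = [(feature(make_epoch(2.0)), "rest") for _ in range(50)] + \
        [(feature(make_epoch(0.5)), "task") for _ in range(50)]

# Learn one mean feature value per class (nearest-class-mean classifier).
means = {label: np.mean([f for f, l in train if l == label])
         for label in ("rest", "task")}

def decode(epoch):
    """Translate a new epoch into a command by nearest class mean."""
    f = feature(epoch)
    return min(means, key=lambda label: abs(f - means[label]))

print(decode(make_epoch(2.0)))  # high-amplitude epoch → "rest"
print(decode(make_epoch(0.5)))  # low-amplitude epoch → "task"
```

A practical BCI replaces each piece with something richer (multi-channel spatial filters, band-power feature vectors, regularised classifiers), but the pipeline shape, epochs to features to a learned decision rule, is the same.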
An active BCI derives its outputs from brain activity that the user directly and consciously controls, independently of external events, in order to operate an application. A reactive BCI sends commands when the user focuses on specific stimuli provided by the system, which evoke known brain responses when perceived. A passive BCI is a relatively newer concept: it derives its outputs from arbitrary brain activity arising without the purpose of voluntary control, enriching human-machine interaction with implicit information on the user's actual state.
Recent developments in sensing and wireless technologies, as well as in signal processing and machine learning methods, allow real-time monitoring of cognitive states such as mental workload, mental fatigue and attention in real-world settings, which is particularly useful for passive BCIs. Our work in BCI is mainly focussed on perceptual, motor and rehabilitative activities in unrestricted environments, typically involving relatively complex tasks, based on electroencephalography (EEG), functional near-infrared spectroscopy (fNIRS), and video-oculography. Through the development of BCI technologies, we aim to improve safety in human-robot interaction, improve training for complex skills (such as surgery), enhance neuro-rehabilitation, and develop novel assistive technologies for the ageing population.
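As an illustration of the kind of spectral feature used in passive-BCI workload monitoring, the sketch below computes the ratio of frontal theta-band (4-7 Hz) to parietal alpha-band (8-12 Hz) EEG power, a ratio widely reported to rise with mental workload. The synthetic channels, sampling rate and band edges are illustrative assumptions, not parameters from this group's systems.

```python
import numpy as np

FS = 256  # assumed sampling rate in Hz

def band_power(signal, fs, lo, hi):
    """Mean power in the [lo, hi] Hz band from a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[band].mean()

def workload_index(frontal, parietal, fs=FS):
    """Theta/alpha ratio: higher values suggest higher mental workload."""
    theta = band_power(frontal, fs, 4.0, 7.0)
    alpha = band_power(parietal, fs, 8.0, 12.0)
    return theta / alpha

# Synthetic 4-second channels mimicking a high-workload pattern:
# strong 6 Hz theta frontally, weak 10 Hz alpha parietally, plus noise.
t = np.arange(4 * FS) / FS
rng = np.random.default_rng(1)
frontal = 3.0 * np.sin(2 * np.pi * 6 * t) + rng.normal(0, 0.3, t.size)
parietal = 0.5 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 0.3, t.size)

print(workload_index(frontal, parietal) > 1.0)  # index well above 1 here
```

A deployed passive BCI would compute such features on sliding windows of artefact-cleaned, multi-channel data (typically with Welch's method rather than a raw periodogram) and feed them to a calibrated classifier rather than thresholding a single ratio.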
Gaze-contingent robotic control
To develop self-calibrating and adaptive eye tracking techniques for seamless control of robots for collaborative tasks such as gaze-contingent motor channelling, learning and cooperative task execution.
Human robot interaction
To pursue the perceptual docking concept pioneered by Prof Guang-Zhong Yang for in situ, operator-specific motor and cognitive learning, and to develop effective cooperative, shared-control methodologies for medical robots.
Perception and neuro-ergonomics
To assess cortical activity related to complex tasks (mainly in surgery), with applications to cognitive-load assessment, skill-related cortical signatures, fatigue and hypovigilance detection, workflow assessment and neurofeedback.
To develop assistive and robotic technologies for optimising rehabilitation protocols, quantitatively assessing baseline function and motor re-organisation, establishing neurofeedback to enhance cognitive and motor performance, and promoting independence, quality of life and social interaction among the ageing population and those with neurocognitive decline.