EEG assessment of spoken language processing in aphasia
Tobias Reichenbach (Bioengineering)
Rob Leech (Medicine)
Etienne Burdet (Bioengineering)
Richard Wise (Medicine)
Our auditory environment is highly complex: different speakers often talk at the same time, music plays in the background, cars drive by. Yet the central nervous system analyzes such an auditory scene with remarkable efficiency; for example, we can easily understand a speaker despite background noise.
A range of neurological disorders can, however, impair the cognitive processes needed to parse an acoustic scene and can thereby significantly diminish a person's quality of life. Both the brain's auditory processing and the associated disorders remain poorly understood.
The graduate project will develop methods to assess the brain's processing of complex auditory signals such as speech and music through noninvasive electroencephalographic (EEG) recordings. The methods will be used to diagnose patients with brain injury whose auditory processing is impaired.
As an example, the project will study patients with aphasia following a stroke that affects the brain regions responsible for communication and language. The research will help to better understand the neurological basis of such disorders and to develop novel rehabilitation strategies.
The project will involve EEG data acquisition, advanced data analysis, machine learning, computational modeling and clinical research.
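One widely used approach to assessing the brain's processing of continuous speech from EEG is to measure how strongly the recorded signal tracks the slow amplitude envelope of the speech. The sketch below illustrates the idea with a simple lagged cross-correlation on simulated data; the function name, the simulated signals, and the parameter choices are illustrative assumptions, not part of the project description, and real analyses would typically use more elaborate methods such as temporal response functions.

```python
import numpy as np

def envelope_tracking_lag(envelope, eeg, fs, max_lag_s=0.5):
    """Estimate the lag (in s) at which the EEG best tracks the speech
    envelope, using normalized cross-correlation over positive lags
    (EEG is assumed to follow the stimulus). A simple stand-in for
    temporal-response-function methods."""
    env = (envelope - envelope.mean()) / envelope.std()
    sig = (eeg - eeg.mean()) / eeg.std()
    max_lag = int(max_lag_s * fs)
    lags = np.arange(0, max_lag + 1)
    # correlation between the envelope and the EEG shifted by each lag
    corrs = np.array([
        np.mean(env[: env.size - lag] * sig[lag:]) for lag in lags
    ])
    best = lags[np.argmax(corrs)]
    return best / fs, corrs.max()

# Toy demonstration: EEG simulated as a delayed, noisy copy of the
# speech envelope (hypothetical 150 ms delay at fs = 100 Hz).
rng = np.random.default_rng(0)
fs = 100
envelope = np.abs(rng.standard_normal(fs * 60))  # 60 s toy "envelope"
delay = int(0.15 * fs)
eeg = np.roll(envelope, delay) + 0.5 * rng.standard_normal(envelope.size)
lag, r = envelope_tracking_lag(envelope, eeg, fs)
print(f"estimated lag: {lag:.2f} s, peak correlation: {r:.2f}")
```

On the simulated data the estimated lag recovers the built-in 150 ms delay; applied to patient recordings, a weakened or delayed tracking response could serve as a candidate marker of impaired auditory processing.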