Richard Turner, Department of Engineering, University of Cambridge
Gaussian Processes for Auditory Neuroscience
In this talk I will present work that uses a powerful modern machine learning tool, the Gaussian process, for two distinct applications in auditory neuroscience: audio texture generation and intelligent hearing tests.
I will begin by giving an intuitive tutorial on Gaussian processes, which are a generalisation of the Gaussian distribution that can handle an infinite number of variables. I will then describe a simple probabilistic model for audio that uses Gaussian processes to capture the low-level statistics of sounds. Surprisingly, the method provides an excellent description of naturally occurring audio textures such as howling wind, falling rain, and running water. I will speculate on how inference in this model can be connected to auditory scene analysis. In the final part of the talk, I will describe recent work that has used Gaussian processes to develop improved automatic listening tests that actively learn about a patient's hearing loss, selecting the stimulus to present at each trial that provides the largest expected information gain. The methods process patient responses using online Bayesian inference in order to make efficient use of data and to quantify uncertainty in the diagnosis.
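As a concrete companion to the tutorial material, the following is a minimal sketch of Gaussian process regression with a squared-exponential kernel, written in NumPy. It is my own illustration, not code from the talk: the kernel, the toy data, and the noise level are all assumptions chosen only to show the conditioning step.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    sqdist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sqdist / lengthscale**2)

# Toy training data: noisy observations of an unknown function.
rng = np.random.default_rng(0)
x_train = rng.uniform(-3, 3, size=8)
y_train = np.sin(x_train) + 0.1 * rng.standard_normal(8)
noise_var = 0.1**2

# GP posterior at test inputs: condition the joint Gaussian on the data
# (standard Cholesky formulation of GP regression).
x_test = np.linspace(-4, 4, 200)
K = rbf_kernel(x_train, x_train) + noise_var * np.eye(len(x_train))
K_s = rbf_kernel(x_train, x_test)
K_ss = rbf_kernel(x_test, x_test)

L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
post_mean = K_s.T @ alpha                 # predictive mean
v = np.linalg.solve(L, K_s)
post_cov = K_ss - v.T @ v                 # predictive covariance
post_std = np.sqrt(np.diag(post_cov))

for xq, m, s in zip(x_test[::50], post_mean[::50], post_std[::50]):
    print(f"f({xq:+.2f}) ~ {m:+.2f} +/- {2 * s:.2f}")
```

The same posterior machinery is what makes the hearing-test application possible: after each patient response the posterior is updated online, and, loosely speaking, the next stimulus is chosen where the current posterior expects to learn the most (the expected-information-gain criterion mentioned in the abstract).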
Daniel Bendor, Division of Psychology and Language Sciences, UCL
The role of inhibition in auditory cortex for encoding temporal information
In auditory cortex, temporal information within a sound is represented by two complementary neural codes: a temporal representation based on stimulus-locked firing, and a rate representation, where discharge rate co-varies with the timing between acoustic events but lacks a stimulus-synchronized response. Using a computational neuronal model, we find that stimulus-locked responses are generated when sound-evoked excitation is combined with strong, delayed inhibition. In contrast, a non-synchronized rate representation is generated when the net excitation evoked by the sound is weak, which occurs when excitation is coincident and balanced with inhibition. Using single-unit recordings from awake marmosets (Callithrix jacchus), we validate several model predictions, including differences in the temporal fidelity, discharge rates, and temporal dynamics of stimulus-evoked responses between neurons with rate and temporal representations. Together, these data suggest that feedforward inhibition provides a parsimonious explanation of the neural coding dichotomy observed in auditory cortex.
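To make the proposed mechanism concrete, here is a toy rate-based caricature of the two regimes. This is my own sketch, not the authors' model: the kernel shape, gains, delays, and threshold are arbitrary assumptions chosen only to illustrate how delayed versus coincident inhibition can yield locked versus non-synchronized responses to a click train.

```python
import numpy as np

dt = 1e-4                          # 0.1 ms time step
t = np.arange(0.0, 0.5, dt)        # 500 ms simulation

def alpha_kernel(tau):
    """Postsynaptic waveform (t/tau) * exp(1 - t/tau), peak 1 at t = tau."""
    return (t / tau) * np.exp(1.0 - t / tau)

def click_train(ici):
    """Impulse train with inter-click interval `ici` (seconds)."""
    s = np.zeros_like(t)
    s[np.round(np.arange(0.0, t[-1], ici) / dt).astype(int)] = 1.0
    return s

def net_drive(ici, inh_gain, inh_delay):
    """Excitation minus a scaled, delayed copy of itself as inhibition."""
    exc = np.convolve(click_train(ici), alpha_kernel(tau=5e-3))[: len(t)]
    shift = int(round(inh_delay / dt))
    inh = inh_gain * np.roll(exc, shift)
    inh[:shift] = 0.0
    return exc - inh

def rate(drive, theta):
    """Threshold-linear output nonlinearity."""
    return np.maximum(drive - theta, 0.0)

# "Synchronized" regime: strong inhibition arriving ~10 ms after the
# excitation carves out a brief suprathreshold window locked to each click.
sync = rate(net_drive(ici=50e-3, inh_gain=2.0, inh_delay=10e-3), theta=0.2)

# "Non-synchronized" regime: coincident, near-balanced inhibition leaves only
# a weak residual per click, subthreshold for an isolated click; at short
# intervals the residuals sum, so the drive crosses threshold and produces a
# sustained response that grows with click rate rather than discrete locked
# events.
nonsync = rate(net_drive(ici=5e-3, inh_gain=0.9, inh_delay=0.0), theta=0.2)

print(f"synchronized mean output:     {sync.mean():.3f}")
print(f"non-synchronized mean output: {nonsync.mean():.3f}")
```

In this caricature, the key contrast is simply when the inhibition arrives and how strong it is; everything else (the event-locked bursts in one regime, the sustained rate-coded output in the other) falls out of those two parameters, which is the parsimony the abstract refers to.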