Imperial College London

Dr Dan Goodman

Faculty of Engineering, Department of Electrical and Electronic Engineering

Lecturer

Contact

 

+44 (0)20 7594 6264
d.goodman
Website

Location

 

1001, Electrical Engineering, South Kensington Campus


Publications

36 results found

Goodman DFM, Stimberg M, Brette R, 2008, Brian simulator

Brian is a simulator for spiking neural networks available on almost all platforms. The motivation for this project is that a simulator should not only save the time of processors, but also the time of scientists. Brian is easy to learn and use, highly flexible, and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation, and a large community of users providing support and extension packages.

Software
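
As an illustration of the workflow this entry describes, a minimal Brian script might look like the following sketch. It uses the current Brian 2 syntax rather than the original 2008 API, and the model equation, group size, and parameter values are arbitrary choices made for the example.

    # Minimal Brian 2 sketch: a small group of leaky integrate-and-fire neurons.
    # Model equations are ordinary strings; physical units are checked automatically.
    from brian2 import NeuronGroup, SpikeMonitor, run, ms

    eqs = '''
    dv/dt = (1.1 - v) / tau : 1
    tau : second
    '''

    group = NeuronGroup(10, eqs, threshold='v > 1', reset='v = 0', method='exact')
    group.tau = '10*ms + 2*ms*i'   # a different time constant per neuron (illustrative)

    spikes = SpikeMonitor(group)
    run(100*ms)
    print(spikes.count)            # spikes emitted by each neuron

The same equation-oriented style scales to larger networks without changing the structure of the script, which is the sense in which the simulator aims to save the time of scientists as well as processors.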

Goodman D, 2006, Spirals in the boundary of slices of quasi-fuchsian space, Conformal Geometry and Dynamics, Vol: 10, Pages: 136-158

We prove that the Bers and Maskit slices of the quasi-Fuchsian space of a once-punctured torus have a dense, uncountable set of points in their boundaries about which the boundary spirals infinitely. © 2006 American Mathematical Society.

Journal article

Goodman DFM, Winter IM, Léger AC, de Cheveigné A, Lorenzi C et al., Modelling firing regularity in the ventral cochlear nucleus: mechanisms, and effects of stimulus level and synaptopathy

Abstract: The auditory system processes temporal information at multiple scales, and disruptions to this temporal processing may lead to deficits in auditory tasks such as detecting and discriminating sounds in a noisy environment. Here, a modelling approach is used to study the temporal regularity of firing by chopper cells in the ventral cochlear nucleus, in both the normal and impaired auditory system. Chopper cells, which have a strikingly regular firing response, divide into two classes, sustained and transient, based on the time course of this regularity. Several hypotheses have been proposed to explain the behaviour of chopper cells, and the difference between sustained and transient cells in particular. However, there is no conclusive evidence so far. Here, a reduced mathematical model is developed and used to compare and test a wide range of hypotheses with a limited number of parameters. Simulation results show a continuum of cell types and behaviours: chopper-like behaviour arises for a wide range of parameters, suggesting that multiple mechanisms may underlie this behaviour. The model accounts for systematic trends in regularity as a function of stimulus level that have previously only been reported anecdotally. Finally, the model is used to predict the effects of a reduction in the number of auditory nerve fibres (deafferentation due to, for example, cochlear synaptopathy). An interactive version of this paper in which all the model parameters can be changed is available online.

Highlights:
- A low parameter model reproduces chopper cell firing regularity
- Multiple factors can account for sustained vs transient chopper cell response

Journal article
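
The firing regularity studied in this paper is conventionally quantified by the coefficient of variation (CV) of the interspike intervals, with chopper cells showing low CV. The sketch below is not the paper's model; it only illustrates the measure on synthetic spike trains.

    # Illustrative sketch (not the paper's model): firing regularity measured as the
    # coefficient of variation (CV) of interspike intervals (ISIs).
    # Low CV ~ regular, chopper-like firing; CV near 1 ~ irregular, Poisson-like firing.
    import numpy as np

    rng = np.random.default_rng(0)

    # Synthetic spike trains with the same mean rate (about 200 spikes/s).
    regular_train = np.cumsum(0.005 + 0.0005 * rng.standard_normal(200))  # jittered 5 ms ISIs
    poisson_train = np.cumsum(rng.exponential(0.005, size=200))           # exponential ISIs

    def isi_cv(spike_times):
        """Coefficient of variation of the interspike intervals."""
        isis = np.diff(spike_times)
        return isis.std() / isis.mean()

    print(f"regular train CV ~ {isi_cv(regular_train):.2f}")   # roughly 0.1
    print(f"poisson train CV ~ {isi_cv(poisson_train):.2f}")   # roughly 1.0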

Steadman MA, Kim C, Lestang J-H, Goodman DFM, Picinali L et al., Short-term effects of sound localization training in virtual reality

Abstract: Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain's ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated: one provided simple visual positional confirmation of sound source location, a second introduced game design elements ("gamification"), and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion ("active listening"). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.

Journal article

Stimberg M, Brette R, Goodman DFM, Brian 2: an intuitive and efficient neural simulator

Abstract: To be maximally useful for neuroscience research, neural simulators must make it possible to define original models. This is especially important because a computational experiment might not only need descriptions of neurons and synapses, but also models of interactions with the environment (e.g. muscles), or the environment itself. To preserve high performance when defining new models, current simulators offer two options: low-level programming, or mark-up languages (and other domain specific languages). The first option requires time and expertise, is prone to errors, and contributes to problems with reproducibility and replicability. The second option has limited scope, since it can only describe the range of neural models covered by the ontology. Other aspects of a computational experiment, such as the stimulation protocol, cannot be expressed within this framework. Brian 2 is a complete rewrite of Brian that addresses this issue by using runtime code generation with a procedural equation-oriented approach. Brian 2 enables scientists to write code that is particularly simple and concise, closely matching the way they conceptualise their models, while the technique of runtime code generation automatically transforms high level descriptions of models into efficient low level code tailored to different hardware (e.g. CPU or GPU). We illustrate it with several challenging examples: a plastic model of the pyloric network of crustaceans, a closed-loop sensorimotor model, programmatic exploration of a neuron model, and an auditory model with real-time input from a microphone.

Journal article
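
The runtime code generation described above is exposed in Brian 2 through a device/target setting; the sketch below shows the equation-oriented style combined with the standalone C++ target. The model (an adaptive integrate-and-fire group) is arbitrary and only meant to indicate the shape of the API.

    # Sketch: the same equation-oriented model definition, but with generated,
    # compiled C++ code as the execution target (requires a C++ compiler).
    from brian2 import set_device, NeuronGroup, PopulationRateMonitor, run, ms

    set_device('cpp_standalone')   # generate and build standalone C++ instead of running in Python

    eqs = '''
    dv/dt = (2 - v - w) / (10*ms) : 1
    dw/dt = -w / (100*ms)         : 1
    '''
    group = NeuronGroup(1000, eqs, threshold='v > 1',
                        reset='v = 0; w += 0.1', method='euler')
    rates = PopulationRateMonitor(group)
    run(500*ms)

Because the model is described by equations rather than hand-written low-level code, the same script can be redirected to a different backend without changing the model definition itself.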

Lestang J-H, Goodman DFM, Canonical brain computations account for perceived sound source location

Sound localization in reverberant environments is a difficult task that human listeners perform effortlessly. Many neural mechanisms have been proposed to account for this behavior. Generally they rely on emphasizing localization information at the onset of the incoming sound while discarding localization cues that arrive later. We modelled several of these mechanisms using neural circuits commonly found in the brain and tested their performance in the context of experiments showing that, in the dominant frequency region for sound localization, we have a preference for auditory cues arriving during the rising slope of the sound energy (Dietz et al., 2013). We found that both single cell mechanisms (onset and adaptation) and population mechanisms (lateral inhibition) were easily able to reproduce the results across a very wide range of parameter settings. This suggests that sound localization in reverberant environments may not require specialised mechanisms specific to that task, but may instead rely on common neural circuits in the brain. This is in line with the theory that the brain consists of functionally overlapping general purpose mechanisms rather than a collection of mechanisms each highly specialised to specific tasks. This research is fully reproducible, and we made our code available to edit and run online via interactive live notebooks.

Journal article
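
As a rough illustration of why adaptation can emphasize cues on the rising slope of the sound energy, the sketch below applies simple divisive adaptation to an amplitude-modulated envelope. It is a toy demonstration, not one of the circuit models evaluated in the paper.

    # Toy sketch (not the paper's model): divisive adaptation makes the response
    # peak on the rising slope of the modulation cycle, before the envelope peak.
    import numpy as np

    dt = 1e-4                                      # 0.1 ms time step
    t = np.arange(0.0, 0.1, dt)                    # 100 ms of signal
    envelope = 1.0 - np.cos(2 * np.pi * 8 * t)     # 8 Hz amplitude modulation

    tau_adapt = 0.02                               # 20 ms adaptation time constant
    adapt = 0.0
    response = np.zeros_like(envelope)
    for i, e in enumerate(envelope):
        response[i] = e / (1.0 + adapt)            # divisive adaptation
        adapt += dt * (e - adapt) / tau_adapt      # adaptation state tracks the input

    print(f"envelope peak at {1000 * t[np.argmax(envelope)]:.1f} ms, "
          f"adapted response peak at {1000 * t[np.argmax(response)]:.1f} ms")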

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
