Stimberg M, Brette R, Goodman DFM, 2019, Brian 2, an intuitive and efficient neural simulator, eLife, Vol: 8, ISSN: 2050-084X
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
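As a purely illustrative sketch (not taken from the paper), this is the kind of low-level forward-Euler update loop that runtime code generation conceptually produces from a high-level leaky integrate-and-fire equation such as dv/dt = (v_rest - v + I) / tau. All names and parameter values here are invented for illustration.

```python
# Illustrative sketch only: a hand-written version of the low-level
# update loop that Brian-style code generation produces from the
# high-level equation  dv/dt = (v_rest - v + I) / tau.
# All parameter values are arbitrary, chosen for illustration.

def simulate_lif(n_steps=1000, dt=1e-4, tau=1e-2,
                 v_rest=0.0, v_reset=0.0, v_threshold=0.8, i_drive=1.0):
    """Forward-Euler integration of one leaky integrate-and-fire neuron;
    returns the list of spike times in seconds."""
    v = v_reset
    spike_times = []
    for step in range(n_steps):
        # Euler step derived directly from the differential equation
        v += dt * (v_rest - v + i_drive) / tau
        if v > v_threshold:              # threshold condition
            spike_times.append(step * dt)
            v = v_reset                  # reset condition
    return spike_times

spikes = simulate_lif()
```

With these constants the neuron charges towards the drive level, crosses threshold and resets, producing a regular spike train over the 0.1 s simulated.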
Zheng JX, Pawar S, Goodman DFM, 2019, Further towards unambiguous edge bundling: Investigating power-confluent drawings for network visualization.
Bach et al. recently presented an algorithm for constructing confluent drawings, by leveraging power graph decomposition to generate an auxiliary routing graph. We identify two problems with their method and offer a single solution to solve both. We also classify the exact type of confluent drawings that the algorithm can produce as 'power-confluent', and prove that it is a subclass of the previously studied 'strict confluent' drawing. A description and source code of our implementation is also provided, which additionally includes an improved method for power graph construction.
Goodman D, Stimberg M, Brette R, 2019, Brian 2
Brian 2: a clock-driven simulator for spiking neural networks. Brian is a free, open source simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible. Documentation for Brian 2 can be found at http://brian2.readthedocs.org and the code is developed at https://github.com/brian-team/brian2/. Brian 2 is released under the terms of the CeCILL 2.1 license. If you use Brian for your published research, we suggest that you cite one of our introductory articles: Goodman DFM and Brette R (2009). The Brian simulator. Front Neurosci, doi: 10.3389/neuro.01.026.2009; Stimberg M, Goodman DFM, Benichoux V, Brette R (2014). Equation-oriented specification of neural models for simulations. Frontiers Neuroinf, doi: 10.3389/fninf.2014.00006.
Engel Alonso-Martinez I, Goodman D, Picinali L, The Effect of Auditory Anchors on Sound Localization: A Preliminary Study, 2019 AES International Conference on Immersive and Interactive Audio
Blundell I, Brette R, Cleland TA, et al., 2018, Code generation in computational neuroscience: A review of tools and techniques, Frontiers in Neuroinformatics, Vol: 12, ISSN: 1662-5196
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum of detail, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms, operating system variants or even written in different languages and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
Stimberg M, Goodman DFM, Nowotny T, 2018, Brian2GeNN: a system for accelerating a large variety of spiking neural networks with graphics hardware, Publisher: Cold Spring Harbor Laboratory
"Brian" is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high-performance-grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user's perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, typical models can run tens to hundreds of times faster than on CPU.
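Per the Brian2GeNN documentation, the "two simple lines" are an import and a device switch; the configuration fragment below shows them in context (actually running it requires a working GeNN and CUDA installation, which this listing does not assume).

```python
from brian2 import *
import brian2genn          # registers the 'genn' device with Brian 2
set_device('genn')         # route Brian 2 code generation through GeNN
# ...the rest of the Brian script is unchanged...
```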
Hathway P, Goodman DFM, 2018, [Re] Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains, ReScience, Vol: 4, ISSN: 2430-3658
Zheng JX, Pawar S, Goodman DFM, 2018, Graph Drawing by Stochastic Gradient Descent
A popular method of force-directed graph drawing is multidimensional scaling using graph-theoretic distances as input. We present an algorithm to minimize its energy function, known as stress, by using stochastic gradient descent (SGD) to move a single pair of vertices at a time. Our results show that SGD can reach lower stress levels faster and more consistently than majorization, without needing help from a good initialization. We then show how the unique properties of SGD make it easier to produce constrained layouts than previous approaches. We also show how SGD can be directly applied within the sparse stress approximation of Ortmann et al., making the algorithm scalable up to large graphs.
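A minimal re-implementation of the pairwise-SGD idea described above, simplified to a fixed step size rather than the paper's annealing schedule; the function names and constants are invented for this sketch.

```python
import math, random

def sgd_layout(dists, n_iter=60, eta=0.1, seed=0):
    """Minimize stress  sum_{i<j} w_ij (|x_i - x_j| - d_ij)^2  by moving
    one pair of vertices at a time (simplified: fixed step size eta)."""
    rng = random.Random(seed)
    n = len(dists)
    pos = [[rng.random(), rng.random()] for _ in range(n)]
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for _ in range(n_iter):
        rng.shuffle(pairs)                  # visit each pair once per sweep
        for i, j in pairs:
            d = dists[i][j]
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            mag = math.hypot(dx, dy) or 1e-9
            w = d ** -2                     # standard stress weighting
            mu = min(w * eta, 1.0)          # capped step size
            r = (mag - d) / (2 * mag)       # each endpoint moves half the residual
            pos[i][0] -= mu * r * dx; pos[i][1] -= mu * r * dy
            pos[j][0] += mu * r * dx; pos[j][1] += mu * r * dy
    return pos

def stress(pos, dists):
    """Weighted stress of a layout against target distances."""
    s, n = 0.0, len(dists)
    for i in range(n):
        for j in range(i + 1, n):
            mag = math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1])
            s += (dists[i][j] ** -2) * (mag - dists[i][j]) ** 2
    return s

# Path graph on 4 vertices: graph-theoretic distances as input
dists = [[abs(i - j) for j in range(4)] for i in range(4)]
layout = sgd_layout(dists)
```

After a few sweeps the layout's stress is well below that of the random initialization, without any careful starting configuration.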
Kim C, Steadman M, Lestang JH, et al., 2018, A VR-based mobile platform for training to non-individualized binaural 3D audio, 144th Audio Engineering Society Convention 2018
Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). Individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, and instead lead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses HRTF sets for audio. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen to and navigate in an acoustically neutral scene and locate invisible target sound sources presented at random directions using non-individualized HRTFs in repetitive sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user's localization accuracy, including visual indication of the target location, and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions using non-individualized HRTF sets. The localization performance evaluations showed a reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.
Dietz M, Lestang J-H, Majdak P, et al., 2017, A framework for testing and comparing binaural models, Hearing Research, Vol: 360, Pages: 92-106, ISSN: 0378-5955
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. These can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but several unresolved questions remain for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with experimental data. We introduce an auditory model framework which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models over the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.
Stimberg M, Goodman DFM, Brette R, et al., 2017, Modeling neuron–glia interactions with the Brian 2 simulator, Publisher: Cold Spring Harbor Laboratory
Despite compelling evidence that glial cells could crucially regulate neural network activity, the vast majority of available neural simulators ignores the possible contribution of glia to neuronal physiology. Here, we show how to model glial physiology and neuron-glia interactions in the Brian 2 simulator. Brian 2 offers facilities to explicitly describe any model in mathematical terms with limited and simple simulator-specific syntax, automatically generating high-performance code from the user-provided descriptions. The flexibility of this approach allows us to model not only networks of neurons, but also individual glial cells, electrical coupling of glial cells, and the interaction between glial cells and synapses. We therefore conclude that Brian 2 provides an ideal platform to efficiently simulate glial physiology, and specifically, the influence of astrocytes on neural activity.
Goodman DFM, Winter IM, Léger AC, et al., 2017, Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy, Hearing Research, Vol: 358, Pages: 98-110, ISSN: 0378-5955
The auditory system processes temporal information at multiple scales, and disruptions to this temporal processing may lead to deficits in auditory tasks such as detecting and discriminating sounds in a noisy environment. Here, a modelling approach is used to study the temporal regularity of firing by chopper cells in the ventral cochlear nucleus, in both the normal and impaired auditory system. Chopper cells, which have a strikingly regular firing response, divide into two classes, sustained and transient, based on the time course of this regularity. Several hypotheses have been proposed to explain the behaviour of chopper cells, and the difference between sustained and transient cells in particular. However, there is no conclusive evidence so far. Here, a reduced mathematical model is developed and used to compare and test a wide range of hypotheses with a limited number of parameters. Simulation results show a continuum of cell types and behaviours: chopper-like behaviour arises for a wide range of parameters, suggesting that multiple mechanisms may underlie this behaviour. The model accounts for systematic trends in regularity as a function of stimulus level that have previously only been reported anecdotally. Finally, the model is used to predict the effects of a reduction in the number of auditory nerve fibres (deafferentation due to, for example, cochlear synaptopathy). An interactive version of this paper in which all the model parameters can be changed is available online.
Goodman DFM, Stimberg M, Brette R, 2016, Brian 2.0 simulator
Brian is a simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible.
Developments in microfabrication technology have enabled the production of neural electrode arrays with hundreds of closely spaced recording sites, and electrodes with thousands of sites are under development. These probes in principle allow the simultaneous recording of very large numbers of neurons. However, use of this technology requires the development of techniques for decoding the spike times of the recorded neurons from the raw data captured from the probes. Here we present a set of tools to solve this problem, implemented in a suite of practical, user-friendly, open-source software. We validate these methods on data from the cortex, hippocampus and thalamus of rat, mouse, macaque and marmoset, demonstrating error rates as low as 5%.
Kadir SN, Goodman DFM, Harris KD, 2014, High-Dimensional Cluster Analysis with the Masked EM Algorithm, Neural Computation, Vol: 26, Pages: 2379-2394, ISSN: 0899-7667
Cluster analysis faces two problems in high dimensions: the "curse of dimensionality" that can lead to overfitting and poor generalization performance and the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of spike sorting for next-generation, high-channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a "masked EM" algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data and to real-world high-channel-count spike sorting data.
Stimberg M, Goodman DF, Benichoux V, et al., 2014, Equation-oriented specification of neural models for simulations, Frontiers in Neuroinformatics, Vol: 8, ISSN: 1662-5196
Simulating biological neuronal networks is a core method of research in computational neuroscience. A full specification of such a network model includes a description of the dynamics and state changes of neurons and synapses, as well as the synaptic connectivity patterns and the initial values of all parameters. A standard approach in neuronal modeling software is to build network models based on a library of pre-defined components and mechanisms; if a model component does not yet exist, it has to be defined in a special-purpose or general low-level language and potentially be compiled and linked with the simulator. Here we propose an alternative approach that allows flexible definition of models by writing textual descriptions based on mathematical notation. We demonstrate that this approach allows the definition of a wide range of models with minimal syntax. Furthermore, such explicit model descriptions allow the generation of executable code for various target languages and devices, since the description is not tied to an implementation. Finally, this approach also has advantages for readability and reproducibility, because the model description is fully explicit, and because it can be automatically parsed and transformed into formatted descriptions. The presented approach has been implemented in the Brian2 simulator.
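A toy sketch of the equation-oriented idea described above: turning a textual differential equation into executable update code. This is vastly simpler than Brian 2's actual parser and code generator; the function name and the namespace values are invented for illustration.

```python
# Toy equation-oriented code generation (illustrative only).

def generate_euler_update(equation):
    """Turn a string like 'dv/dt = (v_rest - v) / tau' into a Python
    statement performing one forward-Euler integration step."""
    lhs, rhs = (s.strip() for s in equation.split("="))
    assert lhs.startswith("d") and lhs.endswith("/dt"), "expected dX/dt = ..."
    var = lhs[1:-3]                       # 'dv/dt' -> 'v'
    return f"{var} = {var} + dt * ({rhs})"

code = generate_euler_update("dv/dt = (v_rest - v) / tau")
# Execute the generated statement in a namespace of model parameters
ns = {"v": 0.0, "v_rest": 1.0, "tau": 10.0, "dt": 1.0}
exec(code, ns)
```

Because the model lives as a textual description, the same string could equally be translated into C, CUDA, or any other target language, which is the core of the approach's portability argument.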
Goodman DF, Benichoux V, Brette R, 2013, Decoding neural responses to temporal cues for sound localization, eLife, Vol: 2, ISSN: 2050-084X
The activity of sensory neural populations carries information about the environment. This may be extracted from neural activity using different strategies. In the auditory brainstem, a recent theory proposes that sound location in the horizontal plane is decoded from the relative summed activity of two populations in each hemisphere, whereas earlier theories hypothesized that the location was decoded from the identity of the most active cells. We tested the performance of various decoders of neural responses in increasingly complex acoustical situations, including spectrum variations, noise, and sound diffraction. We demonstrate that there is insufficient information in the pooled activity of each hemisphere to estimate sound direction in a reliable way consistent with behavior, whereas robust estimates can be obtained from neural activity by taking into account the heterogeneous tuning of cells. These estimates can still be obtained when only contralateral neural responses are used, consistent with unilateral lesion studies. DOI: http://dx.doi.org/10.7554/eLife.01312.001.
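As a toy illustration of the temporal cue at stake, the sketch below estimates an interaural time difference (ITD) by plain cross-correlation of two ear signals. This is not one of the decoders compared in the paper, just a minimal demonstration that a delay between the ears is recoverable from signal timing; all values are invented.

```python
import math

def itd_by_crosscorrelation(left, right, max_lag):
    """Estimate the interaural time difference (in samples) as the lag
    maximizing the cross-correlation of the two ear signals."""
    best_lag, best_score = 0, float("-inf")
    n = len(left)
    for lag in range(-max_lag, max_lag + 1):
        score = sum(left[t] * right[t + lag]
                    for t in range(n) if 0 <= t + lag < n)
        if score > best_score:
            best_lag, best_score = lag, score
    return best_lag

# A pure tone reaching the right ear 3 samples after the left ear
fs = 1000
src = [math.sin(2 * math.pi * 50 * t / fs) for t in range(200)]
left = src[3:]                  # left ear leads
right = src[:-3]                # right ear is the left signal delayed by 3
```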
Rossant C, Fontaine B, Goodman DFM, 2013, Playdoh: A lightweight Python library for distributed computing and optimisation, JOURNAL OF COMPUTATIONAL SCIENCE, Vol: 4, Pages: 352-359, ISSN: 1877-7503
Brette R, Goodman DFM, 2012, Simulating spiking neural networks on GPU, NETWORK-COMPUTATION IN NEURAL SYSTEMS, Vol: 23, Pages: 167-182, ISSN: 0954-898X
Fontaine B, Goodman DF, Benichoux V, et al., 2011, Brian hears: online auditory processing using vectorization over channels, Frontiers in Neuroinformatics, Vol: 5, ISSN: 1662-5196
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
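A minimal sketch of the vectorization-over-channels idea, assuming NumPy: a bank of first-order low-pass filters with per-channel cutoffs, where the time loop remains in Python but every sample step updates all channels in one vector operation. This is far simpler than the gammatone-style filterbanks in Brian Hears; all names and values are invented.

```python
import numpy as np

def lowpass_bank(signal, cutoffs, fs):
    """First-order low-pass filter bank: each frequency channel has its
    own cutoff, but every sample step updates the whole channel vector
    in a single vectorized operation."""
    alpha = 1.0 - np.exp(-2 * np.pi * np.asarray(cutoffs) / fs)
    out = np.zeros((len(signal), len(cutoffs)))
    state = np.zeros(len(cutoffs))
    for t, x in enumerate(signal):       # loop over time only
        state += alpha * (x - state)     # vectorized over all channels
        out[t] = state
    return out

# 30 channels filtering the same step input, one vector op per sample
fs = 8000
signal = np.ones(100)
cutoffs = np.linspace(100, 4000, 30)
response = lowpass_bank(signal, cutoffs, fs)
```

Because the per-sample cost is a handful of array operations regardless of the number of channels, the interpreter overhead is amortized across channels, which is the essence of the argument in the abstract.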
Kremer Y, Leger J-F, Goodman D, et al., 2011, Late Emergence of the Vibrissa Direction Selectivity Map in the Rat Barrel Cortex, JOURNAL OF NEUROSCIENCE, Vol: 31, Pages: 10689-10700, ISSN: 0270-6474
Brette R, Goodman DFM, 2011, Vectorized Algorithms for Spiking Neural Network Simulation, NEURAL COMPUTATION, Vol: 23, Pages: 1503-1535, ISSN: 0899-7667
Computational modeling is increasingly used to understand the function of neural circuits in systems neuroscience. These studies require models of individual neurons with realistic input-output properties. Recently, it was found that spiking models can accurately predict the precisely timed spike trains produced by cortical neurons in response to somatically injected currents, if properly fitted. This requires fitting techniques that are efficient and flexible enough to easily test different candidate models. We present a generic solution, based on the Brian simulator (a neural network simulator in Python), which allows the user to define and fit arbitrary neuron models to electrophysiological recordings. It relies on vectorization and parallel computing techniques to achieve efficiency. We demonstrate its use on neural recordings in the barrel cortex and in the auditory brainstem, and confirm that simple adaptive spiking models can accurately predict the response of cortical neurons. Finally, we show how a complex multicompartmental model can be reduced to a simple effective spiking model.
Goodman DFM, Brette R, 2010, Learning to localise sounds with spiking neural networks
To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism's lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back.
Fletcher A, Goodman D, 2010, Quasiregular mappings of polynomial type in R², Conformal Geometry and Dynamics, Vol: 14, Pages: 322-336
Complex dynamics deals with the iteration of holomorphic functions. As is well known, the first functions to be studied which gave non-trivial dynamics were quadratic polynomials, which produced beautiful computer generated pictures of Julia sets and the Mandelbrot set. In the same spirit, this article aims to study the dynamics of the simplest non-trivial quasiregular mappings. These are mappings in R² which are a composition of a quadratic polynomial and an affine stretch.
Goodman DF, Brette R, 2010, Spike-timing-based computation in sound localization, PLOS Computational Biology, Vol: 6, ISSN: 1553-734X
Spike timing is precise in the auditory system and it has been argued that it conveys information about auditory stimuli, in particular about the location of a sound source. However, beyond simple time differences, the way in which neurons might extract this information is unclear and the potential computational advantages are unknown. The computational difficulty of this task for an animal is to locate the source of an unexpected sound from two monaural signals that are highly dependent on the unknown source signal. In neuron models consisting of spectro-temporal filtering and spiking nonlinearity, we found that the binaural structure induced by spatialized sounds is mapped to synchrony patterns that depend on source location rather than on source signal. Location-specific synchrony patterns would then result in the activation of location-specific assemblies of postsynaptic neurons. We designed a spiking neuron model which exploited this principle to locate a variety of sound sources in a virtual acoustic environment using measured human head-related transfer functions. The model was able to accurately estimate the location of previously unknown sounds in both azimuth and elevation (including front/back discrimination) in a known acoustic environment. We found that multiple representations of different acoustic environments could coexist as sets of overlapping neural assemblies which could be associated with spatial locations by Hebbian learning. The model demonstrates the computational relevance of relative spike timing to extract spatial information about sources independently of the source signal.
Goodman DFM, 2010, Code Generation: A Strategy for Neural Network Simulators, NEUROINFORMATICS, Vol: 8, Pages: 183-196, ISSN: 1539-2791
Rossant C, Goodman DF, Platkiewicz J, et al., 2010, Automatic fitting of spiking neuron models to electrophysiological recordings, Frontiers in Neuroinformatics, Vol: 4, ISSN: 1662-5196
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
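A toy serial stand-in for the model-fitting idea described above: pick the candidate parameter minimizing squared error against a recorded trace. The real library evaluates many candidates in parallel on a GPU with full spiking models; here a single time constant is fitted to a simple exponential trace, with all names and values invented.

```python
def simulate(tau, n_steps=100, dt=1.0):
    """Exponential approach to 1.0 (toy stand-in for a neuron model)."""
    v, trace = 0.0, []
    for _ in range(n_steps):
        v += dt * (1.0 - v) / tau
        trace.append(v)
    return trace

def fit_tau(target, candidates):
    """Grid search: pick the candidate time constant minimizing the
    squared error between simulated and target traces."""
    def err(tau):
        return sum((a - b) ** 2 for a, b in zip(simulate(tau), target))
    return min(candidates, key=err)

target = simulate(20.0)                     # 'recording' generated with tau = 20
best = fit_tau(target, [5.0, 10.0, 20.0, 40.0])
```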
Goodman DF, Brette R, 2009, The brian simulator, Frontiers in Neuroscience, Vol: 3, Pages: 192-197, ISSN: 1662-4548
"Brian" is a simulator for spiking neural networks (http://www.briansimulator.org). The focus is on making the writing of simulation code as quick and easy as possible for the user, and on flexibility: new and non-standard models are no more difficult to define than standard ones. This allows scientists to spend more time on the details of their models, and less on their implementation. Neuron models are defined by writing differential equations in standard mathematical notation, facilitating scientific communication. Brian is written in the Python programming language, and uses vector-based computation to allow for efficient simulations. It is particularly useful for neuroscientific modelling at the systems level, and for teaching computational neuroscience.
Goodman D, Brette R, 2008, Brian: a simulator for spiking neural networks in python, Frontiers in Neuroinformatics, Vol: 2, ISSN: 1662-5196
"Brian" is a new simulator for spiking neural networks, written in Python (http://brian. di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.