29 results found
Blundell I, Brette R, Cleland TA, et al., 2018, Code generation in computational neuroscience: A review of tools and techniques, Frontiers in Neuroinformatics, Vol: 12, ISSN: 1662-5196
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the scale of detail, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of model complexity, all modeling methods crucially depend on the accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the underlying mathematical equations. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if the process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms or operating system variants, or even written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages. In the past few years, a number
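As a concrete illustration of the hand-translation step this abstract discusses (not an example from the review itself), the integrate-and-fire point neuron it mentions might be coded by hand as a forward-Euler loop; all names and parameter values below are illustrative:

```python
# Minimal hand-written leaky integrate-and-fire neuron, forward Euler.
# Integrates dv/dt = (-v + i_input) / tau with threshold-and-reset spiking.
# Parameter values are arbitrary illustration, not from any published model.

def simulate_lif(i_input, dt=0.1, duration=100.0,
                 tau=10.0, v_thresh=1.0, v_reset=0.0):
    """Return the list of spike times (in ms) for a constant input current."""
    v = 0.0
    t = 0.0
    spikes = []
    for _ in range(int(duration / dt)):
        v += dt * (-v + i_input) / tau   # Euler step of the membrane equation
        if v >= v_thresh:                # threshold crossing: record and reset
            spikes.append(t)
            v = v_reset
        t += dt
    return spikes

# Suprathreshold drive produces regular firing; subthreshold drive none.
regular = simulate_lif(1.5)
silent = simulate_lif(0.5)
```

Even for a model this simple, hand translation fixes the integration scheme, time step, and target language, which is exactly the kind of decision the code-generation tools reviewed here automate.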
Stimberg M, Goodman D, Nowotny T, 2018, Brian2GeNN: a system for accelerating a large variety of spiking neural networks with graphics hardware
"Brian" is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user's perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, typical models can run tens to hundreds of times faster than on CPU.
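The "two simple lines" mentioned in the abstract amount to a device-selection configuration fragment; a sketch based on the package's documented usage (the surrounding script is illustrative, and running it requires the brian2genn package and a suitable NVIDIA GPU):

```python
from brian2 import *   # an existing Brian 2 model script...

import brian2genn      # line 1: make the GeNN backend available
set_device('genn')     # line 2: route Brian's code generation through GeNN

# ...the rest of the model definition and run() calls are unchanged;
# Brian2GeNN translates them to C++ input for GeNN behind the scenes.
```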
Hathway P, Goodman DFM, 2018, [Re] Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains, ReScience, Vol: 4, ISSN: 2430-3658
Zheng JX, Pawar S, Goodman DFM, 2018, Graph Drawing by Stochastic Gradient Descent
A popular method of force-directed graph drawing is multidimensional scaling using graph-theoretic distances as input. We present an algorithm to minimize its energy function, known as stress, by using stochastic gradient descent (SGD) to move a single pair of vertices at a time. Our results show that SGD can reach lower stress levels faster and more consistently than majorization, without needing help from a good initialization. We then show how the unique properties of SGD make it easier to produce constrained layouts than previous approaches. We also show how SGD can be directly applied within the sparse stress approximation of Ortmann et al., making the algorithm scalable up to large graphs.
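The core idea — moving one pair of vertices at a time by a capped, distance-weighted SGD step on the stress function — can be sketched in plain Python; the annealing schedule and constants below are illustrative choices, not the authors':

```python
import math
import random

def stress(d, pos):
    """Stress = sum over pairs of w_ij * (|x_i - x_j| - d_ij)^2, w_ij = d_ij^-2."""
    return sum(
        (math.hypot(pos[i][0] - pos[j][0], pos[i][1] - pos[j][1]) - dij) ** 2
        / dij ** 2
        for (i, j), dij in d.items()
    )

def sgd_layout(d, pos, num_iters=30, eta=1.0):
    """Minimize stress by moving a single pair of vertices at a time.

    d:   dict mapping vertex pairs (i, j) to graph-theoretic distances d_ij
    pos: dict mapping vertices to mutable [x, y] positions (updated in place)
    """
    pairs = list(d.keys())
    for it in range(num_iters):
        random.shuffle(pairs)          # visit pairs in random order each pass
        step = eta * 0.9 ** it         # simple geometric annealing (illustrative)
        for i, j in pairs:
            dij = d[(i, j)]
            mu = min(1.0, step / dij ** 2)     # weighted step size, capped at 1
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            dist = math.hypot(dx, dy) or 1e-9  # guard against coincident points
            r = mu * (dist - dij) / 2          # each endpoint moves half the error
            rx, ry = r * dx / dist, r * dy / dist
            pos[i][0] -= rx; pos[i][1] -= ry
            pos[j][0] += rx; pos[j][1] += ry
    return pos
```

Capping the per-pair step at 1 prevents a single move from overshooting past the target distance, which is one reason the method tolerates a poor initialization.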
Dietz M, Lestang J-H, Majdak P, et al., 2018, A framework for testing and comparing binaural models, HEARING RESEARCH, Vol: 360, Pages: 92-106, ISSN: 0378-5955
Goodman DFM, Winter IM, Leger AC, et al., 2018, Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy, HEARING RESEARCH, Vol: 358, Pages: 98-110, ISSN: 0378-5955
Kim C, Steadman M, Lestang JH, et al., 2018, A VR-based mobile platform for training to non-individualized binaural 3D audio
© 2018 Audio Engineering Society. All Rights Reserved. Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). The individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, and instead lead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses HRTF sets for audio. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen-to/navigate-in an acoustically neutral scene and locate invisible target sound sources presented at random directions using non-individualized HRTFs in repetitive sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user’s localization accuracy, including visual indication of the target location, and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions, using non-individualized HRTF sets. The localization performance evaluations showed reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.
Stimberg M, Goodman D, Brette R, et al., 2017, Modeling neuron-glia interactions with the Brian 2 simulator
Despite compelling evidence that glial cells could crucially regulate neural network activity, the vast majority of available neural simulators ignores the possible contribution of glia to neuronal physiology. Here, we show how to model glial physiology and neuron-glia interactions in the Brian 2 simulator. Brian 2 offers facilities to explicitly describe any model in mathematical terms with limited and simple simulator-specific syntax, automatically generating high-performance code from the user-provided descriptions. The flexibility of this approach allows us to model not only networks of neurons, but also individual glial cells, electrical coupling of glial cells, and the interaction between glial cells and synapses. We therefore conclude that Brian 2 provides an ideal platform to efficiently simulate glial physiology, and specifically, the influence of astrocytes on neural activity.
Goodman DFM, Stimberg M, Brette R, 2016, Brian 2.0 simulator
Brian is a simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible.
Kadir SN, Goodman DFM, Harris KD, 2014, High-Dimensional Cluster Analysis with the Masked EM Algorithm, NEURAL COMPUTATION, Vol: 26, Pages: 2379-2394, ISSN: 0899-7667
Stimberg M, Goodman DFM, Benichoux V, et al., 2014, Equation-oriented specification of neural models for simulations, FRONTIERS IN NEUROINFORMATICS, Vol: 8, ISSN: 1662-5196
Goodman DFM, Benichoux V, Brette R, 2013, Decoding neural responses to temporal cues for sound localization, ELIFE, Vol: 2, ISSN: 2050-084X
Rossant C, Fontaine B, Goodman DFM, 2013, Playdoh: A lightweight Python library for distributed computing and optimisation, JOURNAL OF COMPUTATIONAL SCIENCE, Vol: 4, Pages: 352-359, ISSN: 1877-7503
Brette R, Goodman DFM, 2012, Simulating spiking neural networks on GPU, NETWORK-COMPUTATION IN NEURAL SYSTEMS, Vol: 23, Pages: 167-182, ISSN: 0954-898X
Kremer Y, Leger J-F, Goodman D, et al., 2011, Late Emergence of the Vibrissa Direction Selectivity Map in the Rat Barrel Cortex, JOURNAL OF NEUROSCIENCE, Vol: 31, Pages: 10689-10700, ISSN: 0270-6474
Brette R, Goodman DFM, 2011, Vectorized Algorithms for Spiking Neural Network Simulation, NEURAL COMPUTATION, Vol: 23, Pages: 1503-1535, ISSN: 0899-7667
Fontaine B, Goodman DFM, Benichoux V, et al., 2011, Brian hears: online auditory processing using vectorization over channels., Front Neuroinform, Vol: 5
The human cochlea includes about 3000 inner hair cells which filter sounds at frequencies between 20 Hz and 20 kHz. This massively parallel frequency analysis is reflected in models of auditory processing, which are often based on banks of filters. However, existing implementations do not exploit this parallelism. Here we propose algorithms to simulate these models by vectorizing computation over frequency channels, which are implemented in "Brian Hears," a library for the spiking neural network simulator package "Brian." This approach allows us to use high-level programming languages such as Python, because with vectorized operations, the computational cost of interpretation represents a small fraction of the total cost. This makes it possible to define and simulate complex models in a simple way, while all previous implementations were model-specific. In addition, we show that these algorithms can be naturally parallelized using graphics processing units, yielding substantial speed improvements. We demonstrate these algorithms with several state-of-the-art cochlear models, and show that they compare favorably with existing, less flexible, implementations.
Goodman DFM, Brette R, 2010, Learning to localise sounds with spiking neural networks
To localise the source of a sound, we use location-specific properties of the signals received at the two ears caused by the asymmetric filtering of the original sound by our head and pinnae, the head-related transfer functions (HRTFs). These HRTFs change throughout an organism's lifetime, during development for example, and so the required neural circuitry cannot be entirely hardwired. Since HRTFs are not directly accessible from perceptual experience, they can only be inferred from filtered sounds. We present a spiking neural network model of sound localisation based on extracting location-specific synchrony patterns, and a simple supervised algorithm to learn the mapping between synchrony patterns and locations from a set of example sounds, with no previous knowledge of HRTFs. After learning, our model was able to accurately localise new sounds in both azimuth and elevation, including the difficult task of distinguishing sounds coming from the front and back.
Fletcher A, Goodman D, 2010, Quasiregular mappings of polynomial type in R<sup>2</sup>, Conformal Geometry and Dynamics, Vol: 14, Pages: 322-336
Complex dynamics deals with the iteration of holomorphic functions. As is well known, the first functions to be studied which gave non-trivial dynamics were quadratic polynomials, which produced beautiful computer generated pictures of Julia sets and the Mandelbrot set. In the same spirit, this article aims to study the dynamics of the simplest non-trivial quasiregular mappings. These are mappings in R<sup>2</sup> which are a composition of a quadratic polynomial and an affine stretch. © 2010 American Mathematical Society.
Goodman DFM, Brette R, 2010, Spike-Timing-Based Computation in Sound Localization, PLOS COMPUTATIONAL BIOLOGY, Vol: 6, ISSN: 1553-734X
Goodman DFM, 2010, Code Generation: A Strategy for Neural Network Simulators, NEUROINFORMATICS, Vol: 8, Pages: 183-196, ISSN: 1539-2791
Rossant C, Goodman DFM, Platkiewicz J, et al., 2010, Automatic fitting of spiking neuron models to electrophysiological recordings., Front Neuroinform, Vol: 4
Spiking models can accurately predict the spike trains produced by cortical neurons in response to somatically injected currents. Since the specific characteristics of the model depend on the neuron, a computational method is required to fit models to electrophysiological recordings. The fitting procedure can be very time consuming both in terms of computer simulations and in terms of code writing. We present algorithms to fit spiking models to electrophysiological data (time-varying input and spike trains) that can run in parallel on graphics processing units (GPUs). The model fitting library is interfaced with Brian, a neural network simulator in Python. If a GPU is present it uses just-in-time compilation to translate model equations into optimized code. Arbitrary models can then be defined at script level and run on the graphics card. This tool can be used to obtain empirically validated spiking models of neurons in various systems. We demonstrate its use on public data from the INCF Quantitative Single-Neuron Modeling 2009 competition by comparing the performance of a number of neuron spiking models.
Goodman DFM, Brette R, 2009, The Brian simulator, FRONTIERS IN NEUROSCIENCE, Vol: 3, Pages: 192-197, ISSN: 1662-453X
Goodman DFM, Stimberg M, Brette R, 2008, Brian simulator
Brian is a simulator for spiking neural networks available on almost all platforms. The motivation for this project is that a simulator should not only save the time of processors, but also the time of scientists. Brian is easy to learn and use, highly flexible and easily extensible. The Brian package itself and simulations using it are all written in the Python programming language, which is an easy, concise and highly developed language with many advanced features and development tools, excellent documentation and a large community of users providing support and extension packages.
Goodman D, Brette R, 2008, Brian: a simulator for spiking neural networks in python., Front Neuroinform, Vol: 2
"Brian" is a new simulator for spiking neural networks, written in Python (http://brian.di.ens.fr). It is an intuitive and highly flexible tool for rapidly developing new models, especially networks of single-compartment neurons. In addition to using standard types of neuron models, users can define models by writing arbitrary differential equations in ordinary mathematical notation. Python scientific libraries can also be used for defining models and analysing data. Vectorisation techniques allow efficient simulations despite the overheads of an interpreted language. Brian will be especially valuable for working on non-standard neuron models not easily covered by existing software, and as an alternative to using Matlab or C for simulations. With its easy and intuitive syntax, Brian is also very well suited for teaching computational neuroscience.
Goodman D, 2006, Spirals in the boundary of slices of quasi-fuchsian space, Conformal Geometry and Dynamics, Vol: 10, Pages: 136-158
We prove that the Bers and Maskit slices of the quasi-Fuchsian space of a once-punctured torus have a dense, uncountable set of points in their boundaries about which the boundary spirals infinitely. © 2006 American Mathematical Society.
Zheng JX, Pawar S, Goodman DFM, Comments on "Towards Unambiguous Edge Bundling: Investigating Confluent Drawings for Network Visualization"
Bach et al. recently presented an algorithm for constructing general confluent drawings, by leveraging power graph decomposition to generate an auxiliary routing graph. We show that the resulting drawings are not strictly guaranteed to be confluent due to potential corner cases that do not satisfy the original definition. We then reframe their work within the context of previous literature on using auxiliary graphs for bundling, which will help to guide future research in this area.
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.