Engel Alonso Martinez J, Goodman D, Picinali L, 2022, Assessing HRTF preprocessing methods for Ambisonics rendering through perceptual models, Acta Acustica, Vol: 6, ISSN: 2681-4617
Binaural rendering of Ambisonics signals is a common way to reproduce spatial audio content. Processing Ambisonics signals at low spatial orders is desirable in order to reduce complexity, although it may degrade the perceived quality, in part due to the mismatch that occurs when a low-order Ambisonics signal is paired with a spatially dense head-related transfer function (HRTF). In order to alleviate this issue, the HRTF may be preprocessed so its spatial order is reduced. Several preprocessing methods have been proposed, but they have not been thoroughly compared yet. In this study, nine HRTF preprocessing methods were used to render anechoic binaural signals from Ambisonics representations of orders 1 to 44, and these were compared through perceptual hearing models in terms of localisation performance, externalisation and speech reception. This assessment was supported by numerical analyses of HRTF interpolation errors, interaural differences, perceptually-relevant spectral differences, and loudness stability. Models predicted that the binaural renderings’ accuracy increased with spatial order, as expected. A notable effect of the preprocessing method was observed: whereas all methods performed similarly at the highest spatial orders, some were considerably better at lower orders. A newly proposed method, BiMagLS, displayed the best performance overall and is recommended for the rendering of bilateral Ambisonics signals. The results, which were in line with previous literature, indirectly validate the perceptual models’ ability to predict listeners’ responses in a consistent and explicable manner.
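A quick illustration of why low spatial orders reduce complexity: an order-N Ambisonics representation carries (N+1)^2 spherical harmonic channels, so channel count (and processing cost) grows quadratically with order. A minimal sketch (the function name is ours, for illustration only):

```python
def ambisonics_channels(order: int) -> int:
    # An order-N Ambisonics signal uses (N+1)**2 spherical harmonic
    # channels, so cost grows quadratically with spatial order.
    return (order + 1) ** 2

for n in (1, 3, 44):
    print(n, ambisonics_channels(n))  # order 1 -> 4, order 3 -> 16, order 44 -> 2025
```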
Engel Alonso Martinez J, Goodman DFM, Picinali L, 2021, Improving Binaural Rendering with Bilateral Ambisonics and MagLS, DAGA 2021
The brain has a hugely diverse, heterogeneous structure. Whether or not heterogeneity at the neural level plays a functional role remains unclear, and has been relatively little explored in models which are often highly homogeneous. We compared the performance of spiking neural networks trained to carry out tasks of real-world difficulty, with varying degrees of heterogeneity, and found that it substantially improved task performance. Learning was more stable and robust, particularly for tasks with a rich temporal structure. In addition, the distribution of neuronal parameters in the trained networks closely matches those observed experimentally. We suggest that the heterogeneity observed in the brain may be more than just the byproduct of noisy processes, but rather may serve an active and important role in allowing animals to learn in changing environments. Summary: Neural heterogeneity is metabolically efficient for learning, and optimal parameter distribution matches experimental data.
Su Y, Chung Y, Goodman DFM, et al., 2021, Rate and Temporal Coding of Regular and Irregular Pulse Trains in Auditory Midbrain of Normal-Hearing and Cochlear-Implanted Rabbits, JARO-JOURNAL OF THE ASSOCIATION FOR RESEARCH IN OTOLARYNGOLOGY, Vol: 22, Pages: 319-347, ISSN: 1525-3961
Deep neural networks have had considerable success in neuroscience as models of the visual system, and recent work has suggested this may also extend to the auditory system. We tested the behaviour of a range of state of the art deep learning-based automatic speech recognition systems on a wide collection of manipulated sounds used in standard human psychometric experiments. While some systems showed qualitative agreement with humans in certain tests, in others all tested systems diverged markedly from humans. In particular, all systems used spectral invariance, temporal fine structure and speech periodicity differently from humans. We conclude that despite some promising results, none of the tested automatic speech recognition systems can yet act as a strong proxy for human speech recognition. However, we note that the more recent systems with better performance also tend to better match human results, suggesting that continued cross-fertilisation of ideas between human and automatic speech recognition may be fruitful. Our open source toolbox allows researchers to assess future automatic speech recognition systems or add additional psychoacoustic measures.
Achakulvisut T, Ruangrong T, Mineault P, et al., 2021, Towards democratizing and automating online conferences: lessons from the neuromatch conferences, Trends in Cognitive Sciences, Vol: 25, Pages: 265-268, ISSN: 1364-6613
Legacy conferences are costly and time consuming, and exclude scientists lacking various resources or abilities. During the 2020 pandemic, we created an online conference platform, Neuromatch Conferences (NMC), aimed at developing technological and cultural changes to make conferences more democratic, scalable, and accessible. We discuss the lessons we learned.
Zheng JX, Pawar S, Goodman DFM, 2021, Further towards unambiguous edge bundling: Investigating power-confluent drawings for network visualization, IEEE Transactions on Visualization and Computer Graphics, Vol: 27, Pages: 2244-2249, ISSN: 1077-2626
Bach et al. recently presented an algorithm for constructing confluent drawings, by leveraging power graph decomposition to generate an auxiliary routing graph. We identify two problems with their method and offer a single solution to solve both. We also classify the exact type of confluent drawings that the algorithm can produce as 'power-confluent', and prove that it is a subclass of the previously studied 'strict confluent' drawing. A description and source code of our implementation is also provided, which additionally includes an improved method for power graph construction.
Zenke F, Bohté SM, Clopath C, et al., 2021, Visualizing a joint future of neuroscience and neuromorphic engineering, Neuron, Vol: 109, Pages: 571-575, ISSN: 0896-6273
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Perez-Nieves N, Goodman DFM, 2021, Sparse Spiking Gradient Descent, Advances in Neural Information Processing Systems, Vol: 15, Pages: 11795-11808, ISSN: 1049-5258
There is an increasing interest in emulating Spiking Neural Networks (SNNs) on neuromorphic computing devices due to their low energy consumption. Recent advances have allowed training SNNs to a point where they start to compete with traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at the same time being energy efficient when run on neuromorphic hardware. However, the process of training SNNs is still based on dense tensor operations originally developed for ANNs which do not leverage the spatiotemporally sparse nature of SNNs. We present here the first sparse SNN backpropagation algorithm which achieves the same or better accuracy as current state of the art methods while being significantly faster and more memory efficient. We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits), achieving a speedup in the backward pass of up to 150x while being up to 85% more memory efficient, without losing accuracy.
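The core idea — that the backward pass only needs to touch neurons whose surrogate gradient is non-zero — can be sketched in a few lines (the triangular surrogate and its parameters below are common choices, not necessarily the paper's exact ones):

```python
import numpy as np

def surrogate_grad(v, threshold=1.0, width=0.5):
    # Triangular surrogate derivative of the spike nonlinearity:
    # non-zero only for membrane potentials within `width` of threshold.
    return np.maximum(0.0, 1.0 - np.abs(v - threshold) / width) / width

v = np.array([0.1, 0.95, 1.4, 3.0, 1.02])   # membrane potentials at one time step
active = np.nonzero(surrogate_grad(v) > 0)[0]  # sparse support of the gradient

# A dense backward pass touches every neuron; a sparse one touches only
# `active`, yet produces an identical gradient.
grad_dense = surrogate_grad(v)
grad_sparse = np.zeros_like(v)
grad_sparse[active] = surrogate_grad(v[active])
```

Because most neurons are far from threshold at most time steps, the active set is typically a small fraction of the network, which is where the speed and memory savings come from.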
Scientific conferences and meetings have an important role in research, but they also suffer from a number of disadvantages: in particular, they can have a massive carbon footprint, they are time-consuming, and the high costs involved in attending can exclude many potential participants. The COVID-19 pandemic has led to the cancellation of many conferences, forcing the scientific community to explore online alternatives. Here, we report on our experiences of organizing an online neuroscience conference, neuromatch, that attracted some 3000 participants and featured two days of talks, debates, panel discussions, and one-on-one meetings facilitated by a matching algorithm. By offering most of the benefits of traditional conferences, several clear advantages, and with fewer of the downsides, we feel that online conferences have the potential to replace many legacy conferences.
Stimberg M, Goodman D, Nowotny T, 2020, Brian2GeNN: accelerating spiking neural network simulations with graphics hardware, Scientific Reports, Vol: 10, Pages: 1-12, ISSN: 2045-2322
“Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.
Chu Y, Luk W, Goodman D, 2020, Learning Absolute Sound Source Localisation With Limited Supervisions
An accurate auditory space map can be learned from auditory experience, for example during development or in response to altered auditory cues such as a modified pinna. We studied neural network models that learn to localise a single sound source in the horizontal plane using binaural cues, based on limited supervision. Such supervision can be unreliable or sparse in real life. First, a simple model that has unreliable estimation of the sound source location is built, in order to simulate the unreliable auditory orienting response of newborns. It is used as a Teacher that acts as a source of unreliable supervision. Then we show that it is possible to learn a continuous auditory space map based only on noisy left or right feedback from the Teacher. Furthermore, reinforcement rewards from the environment are used as a source of sparse supervision. By combining the unreliable innate response and the sparse reinforcement rewards, an accurate auditory space map, which is hard to achieve with either kind of supervision alone, can eventually be learned. Our results show that the auditory space mapping can be calibrated even without explicit supervision. Moreover, this study implies a possibly more general neural mechanism where multiple sub-modules can be coordinated to facilitate each other's learning process under limited supervision.
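A toy version of learning from noisy left/right feedback might look like this (the linear cue-to-azimuth map, learning rate, and 30% error rate are all our assumptions, chosen only to make the mechanism visible):

```python
import numpy as np

rng = np.random.default_rng(1)

def true_azimuth(cue):
    # Ground-truth map from a binaural cue in [-1, 1] to azimuth in degrees.
    return 90.0 * cue

w = 0.0  # the student's single map parameter (starts uncalibrated)
for step in range(20000):
    cue = rng.uniform(-1, 1)
    guess = w * cue
    # The Teacher only says "more left" or "more right", and is wrong 30% of the time.
    feedback = np.sign(true_azimuth(cue) - guess)
    if rng.random() < 0.3:
        feedback = -feedback
    w += 5.0 * feedback * cue / (step + 1) ** 0.5  # decaying step size
```

Even though each individual feedback is unreliable, the student's map drifts toward the correct gain of 90, because the feedback points in the right direction more often than not.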
Steadman M, Kim C, Lestang J-H, et al., 2019, Short-term effects of sound localization training in virtual reality, Scientific Reports, Vol: 9, ISSN: 2045-2322
Head-related transfer functions (HRTFs) capture the direction-dependent way that sound interacts with the head and torso. In virtual audio systems, which aim to emulate these effects, non-individualized, generic HRTFs are typically used, leading to an inaccurate perception of virtual sound location. Training has the potential to exploit the brain’s ability to adapt to these unfamiliar cues. In this study, three virtual sound localization training paradigms were evaluated; one provided simple visual positional confirmation of sound source location, a second introduced game design elements (“gamification”) and a final version additionally utilized head-tracking to provide listeners with experience of relative sound source motion (“active listening”). The results demonstrate a significant effect of training after a small number of short (12-minute) training sessions, which is retained across multiple days. Gamification alone had no significant effect on the efficacy of the training, but active listening resulted in significantly greater improvements in localization accuracy. In general, improvements in virtual sound localization following training generalized to a second set of non-individualized HRTFs, although some HRTF-specific changes were observed in polar angle judgement for the active listening group. The implications of this for the putative mechanisms of the adaptation process are discussed.
Perez-Nieves N, Leung VCH, Dragotti PL, et al., 2019, Advantages of heterogeneity of parameters in spiking neural network training, 2019 Conference on Cognitive Computational Neuroscience, Publisher: Cognitive Computational Neuroscience
It is very common in studies of the learning capabilities of spiking neural networks (SNNs) to use homogeneous neural and synaptic parameters (time constants, thresholds, etc.). Even in studies in which these parameters are distributed heterogeneously, the advantages or disadvantages of the heterogeneity have rarely been studied in depth. By contrast, in the brain, neurons and synapses are highly diverse, leading naturally to the hypothesis that this heterogeneity may be advantageous for learning. Starting from two state-of-the-art methods for training spiking neural networks (Nicola & Clopath, 2017; Shrestha & Orchard, 2018), we found that adding parameter heterogeneity reduced errors when the network had to learn more complex patterns, increased robustness to hyperparameter mistuning, and reduced the number of training iterations required. We propose that neural heterogeneity may be an important principle for brains to learn robustly in real world environments with highly complex structure, and where task-specific hyperparameter tuning may be impossible. Consequently, heterogeneity may also be a good candidate design principle for artificial neural networks, to reduce the need for expensive hyperparameter tuning as well as for reducing training time.
Stimberg M, Brette R, Goodman DFM, 2019, Brian 2, an intuitive and efficient neural simulator, eLife, Vol: 8, ISSN: 2050-084X
Brian 2 allows scientists to simply and efficiently simulate spiking neural network models. These models can feature novel dynamical equations, their interactions with the environment, and experimental protocols. To preserve high performance when defining new models, most simulators offer two options: low-level programming or description languages. The first option requires expertise, is prone to errors, and is problematic for reproducibility. The second option cannot describe all aspects of a computational experiment, such as the potentially complex logic of a stimulation protocol. Brian addresses these issues using runtime code generation. Scientists write code with simple and concise high-level descriptions, and Brian transforms them into efficient low-level code that can run interleaved with their code. We illustrate this with several challenging examples: a plastic model of the pyloric network, a closed-loop sensorimotor model, a programmatic exploration of a neuron model, and an auditory model with real-time input.
Lestang J-H, Goodman DFM, 2019, General neural mechanisms can account for rising slope preference in localization of ambiguous sounds, Publisher: Cold Spring Harbor Laboratory
Sound localization in reverberant environments is a difficult task that human listeners perform effortlessly. Many neural mechanisms have been proposed to account for this behavior. Generally they rely on emphasizing localization information at the onset of the incoming sound while discarding localization cues that arrive later. We modelled several of these mechanisms using neural circuits commonly found in the brain and tested their performance in the context of experiments showing that, in the dominant frequency region for sound localisation, we have a preference for auditory cues arriving during the rising slope of the sound energy (Dietz et al., 2013). We found that both single cell mechanisms (onset and adaptation) and population mechanisms (lateral inhibition) were easily able to reproduce the results across a very wide range of parameter settings. This suggests that sound localization in reverberant environments may not require specialised mechanisms specific to that task, but could instead rely on common neural circuits in the brain. This would allow for the possibility of individual differences in learnt strategies or neuronal parameters. This research is fully reproducible, and we made our code available to edit and run online via interactive live notebooks.
Goodman D, Stimberg M, Brette R, 2019, Brian 2
Brian 2: a clock-driven simulator for spiking neural networks. Brian is a free, open source simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible. Documentation for Brian 2 can be found at http://brian2.readthedocs.org. The code is developed at https://github.com/brian-team/brian2/. Brian 2 is released under the terms of the CeCILL 2.1 license. If you use Brian for your published research, we suggest that you cite one of our introductory articles: Goodman DFM and Brette R (2009), The Brian simulator, Front Neurosci, doi: 10.3389/neuro.01.026.2009; Stimberg M, Goodman DFM, Benichoux V, Brette R (2014), Equation-oriented specification of neural models for simulations, Frontiers Neuroinf, doi: 10.3389/fninf.2014.00006.
Engel Alonso-Martinez I, Goodman D, Picinali L, 2019, The Effect of Auditory Anchors on Sound Localization: A Preliminary Study, 2019 AES International Conference on Immersive and Interactive Audio
Blundell I, Brette R, Cleland TA, et al., 2018, Code generation in computational neuroscience: A review of tools and techniques, Frontiers in Neuroinformatics, Vol: 12, ISSN: 1662-5196
Advances in experimental techniques and computational power allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum of detail, the ever-growing variety of point neuron models increases the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of the model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the mathematical equations underlying them. However, actually simulating them requires that they be translated into code. This can cause problems because errors may be introduced if this process is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code might be generated for different hardware platforms, operating system variants or even written in different languages and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high level descriptions into efficient low level code to combine the best of previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
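The code generation idea — translating a high-level mathematical description into executable update code — can be sketched in miniature (real simulators emit optimized C++ or CUDA rather than Python, and the function below is purely illustrative):

```python
import math

def generate_euler_step(eq_rhs: str, dt: float):
    """Generate a forward-Euler update function from a right-hand-side
    expression in the state variable `v`. This is a toy sketch of code
    generation; the expression is compiled once and reused each step."""
    code = compile(eq_rhs, "<model>", "eval")
    def step(v, **params):
        return v + dt * eval(code, {"math": math, "v": v, **params})
    return step

# High-level description: dv/dt = -v / tau (a leaky integrator).
step = generate_euler_step("-v / tau", dt=0.001)
v = 1.0
for _ in range(1000):
    v = step(v, tau=0.01)  # v decays by a factor 0.9 per step
```

The point of the real pipelines is the same separation of concerns: the scientist writes the equation string; the toolchain turns it into efficient executable code.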
Hathway P, Goodman DFM, 2018, [Re] Spike Timing Dependent Plasticity Finds the Start of Repeating Patterns in Continuous Spike Trains, ReScience, Vol: 4, ISSN: 2430-3658
Zheng JX, Pawar S, Goodman DFM, 2018, Graph Drawing by Stochastic Gradient Descent
A popular method of force-directed graph drawing is multidimensional scaling using graph-theoretic distances as input. We present an algorithm to minimize its energy function, known as stress, by using stochastic gradient descent (SGD) to move a single pair of vertices at a time. Our results show that SGD can reach lower stress levels faster and more consistently than majorization, without needing help from a good initialization. We then show how the unique properties of SGD make it easier to produce constrained layouts than previous approaches. We also show how SGD can be directly applied within the sparse stress approximation of Ortmann et al., making the algorithm scalable up to large graphs.
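A minimal version of the pairwise SGD update can be written directly from this description (the step-size schedule below is a simplification, and the paper's per-term weighting by the inverse squared distance is omitted):

```python
import random
import numpy as np

def stress(X, D):
    # Stress energy: sum over vertex pairs of (||Xi - Xj|| - d_ij)^2.
    n = len(X)
    return sum((np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2
               for i in range(n) for j in range(i + 1, n))

def sgd_layout(D, iters=100, seed=0):
    random.seed(seed)
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    X = rng.random((n, 2))
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    for t in range(iters):
        eta = min(1.0, 0.9 ** t)  # annealed, capped step size (a simplification)
        random.shuffle(pairs)
        for i, j in pairs:
            delta = X[i] - X[j]
            dist = np.linalg.norm(delta) + 1e-9
            # Move one pair at a time towards its target graph-theoretic
            # distance, each endpoint taking half the correction.
            r = eta * (dist - D[i, j]) / 2.0 * delta / dist
            X[i] -= r
            X[j] += r
    return X

# Graph-theoretic distances for a 3-vertex path a-b-c.
D = np.array([[0, 1, 2], [1, 0, 1], [2, 1, 0]], float)
X = sgd_layout(D)
```

For this tiny consistent example the layout converges to near-zero stress; on larger graphs the same per-pair update is what makes the method simple to constrain and to scale.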
Kim C, Steadman M, Lestang JH, et al., 2018, A VR-based mobile platform for training to non-individualized binaural 3D audio, 144th Audio Engineering Society Convention 2018
Delivery of immersive 3D audio with arbitrarily-positioned sound sources over headphones often requires processing of individual source signals through a set of Head-Related Transfer Functions (HRTFs). The individual morphological differences and the impracticality of HRTF measurement make it difficult to deliver completely individualized 3D audio, and instead lead to the use of previously-measured non-individual sets of HRTFs. In this study, a VR-based mobile sound localization training prototype system is introduced which uses HRTF sets for audio. It consists of a mobile phone as a head-mounted device, a hand-held Bluetooth controller, and a network-enabled laptop with a USB audio interface and a pair of headphones. The virtual environment was developed on the mobile phone such that the user can listen to and navigate in an acoustically neutral scene and locate invisible target sound sources presented at random directions using non-individualized HRTFs in repetitive sessions. Various training paradigms can be designed with this system, with performance-related feedback provided according to the user’s localization accuracy, including visual indication of the target location, and some aspects of a typical first-person shooting game, such as enemies, scoring, and level advancement. An experiment was conducted using this system, in which 11 subjects went through multiple training sessions, using non-individualized HRTF sets. The localization performance evaluations showed reduction of overall localization angle error over repeated training sessions, reflecting lower front-back confusion rates.
Zheng JX, Pawar S, Goodman DFM, 2018, Confluent* Drawings by Hierarchical Clustering, 26th International Symposium on Graph Drawing and Network Visualization (GD), Publisher: SPRINGER INTERNATIONAL PUBLISHING AG, Pages: 640-642, ISSN: 0302-9743
Dietz M, Lestang J-H, Majdak P, et al., 2017, A framework for testing and comparing binaural models, Hearing Research, Vol: 360, Pages: 92-106, ISSN: 0378-5955
Auditory research has a rich history of combining experimental evidence with computational simulations of auditory processing in order to deepen our theoretical understanding of how sound is processed in the ears and in the brain. Despite significant progress in the amount of detail and breadth covered by auditory models, for many components of the auditory pathway there are still different model approaches that are often not equivalent but rather in conflict with each other. Similarly, some experimental studies yield conflicting results, which has led to controversies. This can best be resolved by a systematic comparison of multiple experimental data sets and model approaches. Binaural processing is a prominent example of how the development of quantitative theories can advance our understanding of the phenomena, but there remain several unresolved questions for which competing model approaches exist. This article discusses a number of current unresolved or disputed issues in binaural modelling, as well as some of the significant challenges in comparing binaural models with each other and with the experimental data. We introduce an auditory model framework, which we believe can become a useful infrastructure for resolving some of the current controversies. It runs models through the same paradigms that are used experimentally. The core of the proposed framework is an interface that connects three components irrespective of their underlying programming language: the experiment software, an auditory pathway model, and task-dependent decision stages called artificial observers that provide the same output format as the test subject.
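The three-component interface can be sketched as follows; all class and function names here are our own illustrative assumptions, not the framework's actual API:

```python
class AuditoryModel:
    """Toy auditory pathway model: reduces a binaural stimulus to a
    broadband interaural level difference (ILD)."""
    def process(self, left, right):
        rms = lambda x: (sum(s * s for s in x) / len(x)) ** 0.5
        return rms(left) - rms(right)

class ArtificialObserver:
    """Task-dependent decision stage that answers in the same format
    as a human test subject would in a left/right task."""
    def respond(self, internal_representation):
        return "left" if internal_representation > 0 else "right"

def run_trial(stimulus_left, stimulus_right, model, observer):
    # The experiment software presents stimuli and records responses,
    # without caring how model and observer are implemented internally.
    return observer.respond(model.process(stimulus_left, stimulus_right))

answer = run_trial([0.5, 0.5], [0.1, 0.1], AuditoryModel(), ArtificialObserver())
```

Because the observer returns the same response format as a listener, the identical experiment code can be run on models and on human subjects, which is what makes systematic model comparison possible.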
Stimberg M, Goodman DFM, Brette R, et al., 2017, Modeling neuron–glia interactions with the Brian 2 simulator, Publisher: Cold Spring Harbor Laboratory
Despite compelling evidence that glial cells could crucially regulate neural network activity, the vast majority of available neural simulators ignores the possible contribution of glia to neuronal physiology. Here, we show how to model glial physiology and neuron-glia interactions in the Brian 2 simulator. Brian 2 offers facilities to explicitly describe any model in mathematical terms with limited and simple simulator-specific syntax, automatically generating high-performance code from the user-provided descriptions. The flexibility of this approach allows us to model not only networks of neurons, but also individual glial cells, electrical coupling of glial cells, and the interaction between glial cells and synapses. We therefore conclude that Brian 2 provides an ideal platform to efficiently simulate glial physiology, and specifically, the influence of astrocytes on neural activity.
Goodman DFM, Winter IM, Léger AC, et al., 2017, Modelling firing regularity in the ventral cochlear nucleus: Mechanisms, and effects of stimulus level and synaptopathy, Hearing Research, Vol: 358, Pages: 98-110, ISSN: 0378-5955
The auditory system processes temporal information at multiple scales, and disruptions to this temporal processing may lead to deficits in auditory tasks such as detecting and discriminating sounds in a noisy environment. Here, a modelling approach is used to study the temporal regularity of firing by chopper cells in the ventral cochlear nucleus, in both the normal and impaired auditory system. Chopper cells, which have a strikingly regular firing response, divide into two classes, sustained and transient, based on the time course of this regularity. Several hypotheses have been proposed to explain the behaviour of chopper cells, and the difference between sustained and transient cells in particular. However, there is no conclusive evidence so far. Here, a reduced mathematical model is developed and used to compare and test a wide range of hypotheses with a limited number of parameters. Simulation results show a continuum of cell types and behaviours: chopper-like behaviour arises for a wide range of parameters, suggesting that multiple mechanisms may underlie this behaviour. The model accounts for systematic trends in regularity as a function of stimulus level that have previously only been reported anecdotally. Finally, the model is used to predict the effects of a reduction in the number of auditory nerve fibres (deafferentation due to, for example, cochlear synaptopathy). An interactive version of this paper in which all the model parameters can be changed is available online.
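The firing regularity discussed here is conventionally quantified by the coefficient of variation (CV) of the inter-spike intervals, which is near 0 for the strikingly regular chopper response and near 1 for Poisson-like irregular firing. A quick sketch with synthetic spike trains:

```python
import numpy as np

def cv(spike_times):
    # Coefficient of variation of inter-spike intervals:
    # CV = std(ISI) / mean(ISI).
    isi = np.diff(np.sort(spike_times))
    return isi.std() / isi.mean()

regular = np.arange(0.0, 0.1, 0.005)  # perfectly regular train, 5 ms ISIs
rng = np.random.default_rng(0)
jittered = regular + rng.normal(0.0, 0.002, regular.size)  # irregular train
```

Sustained choppers keep CV low throughout the response, whereas transient choppers start regular and become irregular, which is the distinction the model explores.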
Goodman DFM, Stimberg M, Brette R, 2016, Brian 2.0 simulator
Brian is a simulator for spiking neural networks. It is written in the Python programming language and is available on almost all platforms. We believe that a simulator should not only save the time of processors, but also the time of scientists. Brian is therefore designed to be easy to learn and use, highly flexible and easily extensible.
Developments in microfabrication technology have enabled the production of neural electrode arrays with hundreds of closely spaced recording sites, and electrodes with thousands of sites are under development. These probes in principle allow the simultaneous recording of very large numbers of neurons. However, use of this technology requires the development of techniques for decoding the spike times of the recorded neurons from the raw data captured from the probes. Here we present a set of tools to solve this problem, implemented in a suite of practical, user-friendly, open-source software. We validate these methods on data from the cortex, hippocampus and thalamus of rat, mouse, macaque and marmoset, demonstrating error rates as low as 5%.
Kadir SN, Goodman DFM, Harris KD, 2014, High-dimensional cluster analysis with the masked EM algorithm, Neural Computation, Vol: 26, Pages: 2379-2394, ISSN: 0899-7667
Cluster analysis faces two problems in high dimensions: the "curse of dimensionality" that can lead to overfitting and poor generalization performance and the sheer time taken for conventional algorithms to process large amounts of high-dimensional data. We describe a solution to these problems, designed for the application of spike sorting for next-generation, high-channel-count neural probes. In this problem, only a small subset of features provides information about the cluster membership of any one data vector, but this informative feature subset is not the same for all data points, rendering classical feature selection ineffective. We introduce a "masked EM" algorithm that allows accurate and time-efficient clustering of up to millions of points in thousands of dimensions. We demonstrate its applicability to synthetic data and to real-world high-channel-count spike sorting data.
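The masking idea can be sketched briefly (this is only the data-preparation step, not the paper's full EM algorithm; the mask construction below is an illustrative assumption): each data point keeps a small set of "unmasked" informative features, and masked features are replaced by a global noise value, so distance and likelihood computations effectively involve only the informative subset.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 50
X = rng.normal(0.0, 1.0, (n, d))

# Each point gets ~5 informative (unmasked) dimensions, differing per point.
mask = np.zeros((n, d), bool)
mask[np.arange(n)[:, None], rng.integers(0, d, (n, 5))] = True

# Replace masked features by the global noise mean; the clustering then
# only needs to look at the unmasked entries of each point.
noise_mean = X[~mask].mean()
X_virtual = np.where(mask, X, noise_mean)
```

Because the informative subset differs between points, this per-point masking succeeds where a single global feature selection would fail.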
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.