Sadeh S, Clopath C, 2020, Inhibitory stabilization and cortical computation, Nature Reviews Neuroscience, ISSN: 1471-003X
Sadeh S, Clopath C, 2020, Theory of neuronal perturbome in cortical networks, Proceedings of the National Academy of Sciences of the United States of America, Vol: 117, Pages: 26966-26976, ISSN: 0027-8424
To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone was sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding and paves the way to mapping the perturbome of neuronal networks in future studies.
Udakis M, Pedrosa V, Chamberlain SEL, et al., 2020, Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, Nature Communications, Vol: 11, Pages: 4395-4395, ISSN: 2041-1723
The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity is not well understood. Here, we show that inhibitory synapses from parvalbumin and somatostatin expressing interneurons undergo long-term depression and potentiation respectively (PV-iLTD and SST-iLTP) during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target perisomatic and distal dendritic regions respectively of CA1 pyramidal cells, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.
Pedrosa V, Clopath C, 2020, The interplay between somatic and dendritic inhibition promotes the emergence and stabilization of place fields, PLoS Computational Biology, Vol: 16, ISSN: 1553-734X
During the exploration of novel environments, place fields are rapidly formed in hippocampal CA1 neurons. Place cell firing rate increases in early stages of exploration of novel environments but returns to baseline levels in familiar environments. Although similar in amplitude and width, place fields in familiar environments are more stable than in novel environments. We propose a computational model of the hippocampal CA1 network, which describes the formation, dynamics and stabilization of place fields. We show that although somatic disinhibition is sufficient to form place fields, dendritic inhibition along with synaptic plasticity is necessary for place field stabilization. Our model suggests that place cell stability can be attributed to strong excitatory synaptic weights and strong dendritic inhibition. We show that the interplay between somatic and dendritic inhibition balances the increased excitatory weights, such that place cells return to their baseline firing rate after exploration. Our model suggests that different types of interneurons are essential to unravel the mechanisms underlying place field plasticity. Finally, we predict that artificially induced dendritic events can shift place fields even after place field stabilization.
Tomasev N, Cornebise J, Hutter F, et al., 2020, AI for social good: unlocking the opportunity for positive impact, Nature Communications, Vol: 11, ISSN: 2041-1723
Clopath C, Sweeney YA, 2020, Population coupling predicts the plasticity of stimulus responses in cortical circuits, eLife, Vol: 9, ISSN: 2050-084X
Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron’s population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable ‘backbone’ of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.
Sadeh S, Clopath C, 2020, Patterned perturbation of inhibition can reveal the dynamical structure of neural processing, eLife, Vol: 9, ISSN: 2050-084X
Perturbation of neuronal activity is key to understanding the brain's functional properties; however, intervention studies typically perturb neurons in a nonspecific manner. Recent optogenetic techniques have enabled patterned perturbations, in which specific patterns of activity can be invoked in identified target neurons to reveal more specific cortical function. Here, we argue that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, emerging in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex. We propose a specific perturbative signature of these networks and investigate how this can be measured under different experimental conditions. Functionally, rapid spontaneous transitions between selective ensembles of neurons emerge in such networks, consistent with experimental results. Our study outlines the dynamical and functional properties of feature-specific inhibitory-stabilized networks, and suggests experimental protocols that can be used to detect them in the intact cortex.
Maes A, Barahona M, Clopath C, 2020, Learning spatiotemporal signals using a recurrent spiking network that discretizes time, PLoS Computational Biology, Vol: 16, Pages: 1-26, ISSN: 1553-734X
Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.
Ebner C, Clopath C, Jedlicka P, et al., 2019, Unifying long-term plasticity rules for excitatory synapses by modeling dendrites of cortical pyramidal neurons, Cell Reports, Vol: 29, Pages: 4295-+, ISSN: 2211-1247
Knopfel T, Sweeney Y, Radulescu CI, et al., 2019, Audio-visual experience strengthens multisensory assemblies in adult mouse visual cortex, Nature Communications, Vol: 10, ISSN: 2041-1723
We experience the world through multiple senses simultaneously. To better understand mechanisms of multisensory processing we ask whether inputs from two senses (auditory and visual) can interact and drive plasticity in neural circuits of the primary visual cortex (V1). Using genetically encoded voltage and calcium indicators, we find that coincident audio-visual experience modifies both the supra- and subthreshold response properties of neurons in L2/3 of mouse V1. Specifically, we find that after audio-visual pairing, a subset of multimodal neurons develops enhanced auditory responses to the paired auditory stimulus. This cross-modal plasticity persists over days and is reflected in the strengthening of small functional networks of L2/3 neurons. We find that V1 processes coincident auditory and visual events by strengthening functional associations between feature-specific assemblies of multimodal neurons during bouts of sensory-driven co-activity, leaving a trace of multisensory experience in the cortical network.
Wilmes KA, Clopath C, 2019, Inhibitory microcircuits for top-down plasticity of sensory representations, Nature Communications, Vol: 10, ISSN: 2041-1723
Rewards influence plasticity of early sensory representations. The underlying changes in circuitry are however unclear. Recent experimental findings suggest that inhibitory circuits regulate learning. In addition, inhibitory neurons are highly modulated by diverse long-range inputs, including reward signals. We, therefore, hypothesise that inhibitory plasticity plays a major role in adjusting stimulus representations. We investigate how top-down modulation by rewards interacts with local plasticity to induce long-lasting changes in circuitry. Using a computational model of layer 2/3 primary visual cortex, we demonstrate how interneuron circuits can store information about rewarded stimuli to instruct long-term changes in excitatory connectivity in the absence of further reward. In our model, stimulus-tuned somatostatin-positive interneurons develop strong connections to parvalbumin-positive interneurons during reward such that they selectively disinhibit the pyramidal layer henceforth. This triggers excitatory plasticity, leading to increased stimulus representation. We make specific testable predictions and show that this two-stage model allows for translation invariance of the learned representation.
Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In the case of artificial neural networks, the three components specified by design are the objective functions, the learning rules, and architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.
Udakis M, Pedrosa V, Chamberlain SEL, et al., 2019, Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, Publisher: Cold Spring Harbor Laboratory
Nicola W, Clopath C, 2019, A diversity of interneurons and Hebbian plasticity facilitate rapid compressible learning in the hippocampus, Nature Neuroscience, Vol: 22, Pages: 1168-1181, ISSN: 1097-6256
The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Further, this information can be replayed in a compressed format in either forward or reverse modes during Sharp Wave Ripples (SPW-Rs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can: 1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from Medial Septum, 2) compress learned spike sequences in the form of a SPW-R when septal inhibition is removed, 3) generate and refine high-frequency assemblies during SPW-R mediated compression, and 4) regulate the inter-SPW-interval timing between SPW-Rs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviours, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.
Bono J, Clopath C, 2019, Synaptic plasticity onto inhibitory neurons as a mechanism for ocular dominance plasticity, PLoS Computational Biology, Vol: 15
The cerebellum aids the learning of fast, coordinated movements. According to current consensus, erroneously active parallel fibre synapses are depressed by complex spikes signalling movement errors. However, this theory cannot solve the credit assignment problem of processing a global movement evaluation into multiple cell-specific error signals. We identify a possible implementation of an algorithm solving this problem, whereby spontaneous complex spikes perturb ongoing movements, create eligibility traces and signal error changes guiding plasticity. Error changes are extracted by adaptively cancelling the average error. This framework, stochastic gradient descent with estimated global errors (SGDEGE), predicts synaptic plasticity rules that apparently contradict the current consensus but were supported by plasticity experiments in slices from mice under conditions designed to be physiological, highlighting the sensitivity of plasticity studies to experimental conditions. We analyse the algorithm's convergence and capacity. Finally, we suggest SGDEGE may also operate in the basal ganglia.
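The SGDEGE idea described above (perturb the output, compare the resulting error to an adaptively cancelled average, and credit synapses through an eligibility trace) can be illustrated on a linear toy problem. This is a minimal sketch of the learning principle only, with hypothetical parameters, not the paper's cerebellar model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "motor plant": output y = w @ x, desired output w_true @ x.
n_in = 5
w_true = rng.normal(size=n_in)
w = np.zeros(n_in)
e_bar = 0.0                              # adaptively cancelled average error
eta, lr_avg, sigma = 0.02, 0.1, 0.3

for _ in range(20_000):
    x = rng.normal(size=n_in)
    delta = sigma * rng.normal()         # spontaneous perturbation of the output
    y = w @ x + delta                    # perturbed movement
    err = (y - w_true @ x) ** 2          # global scalar evaluation of the movement
    # the error *change* relative to the running average guides plasticity,
    # credited to inputs through the eligibility trace delta * x
    w -= eta * (err - e_bar) * delta * x
    e_bar += lr_avg * (err - e_bar)      # adaptively cancel the average error

print(np.abs(w - w_true).max())          # small: w has converged towards w_true
```

In expectation the update reduces to gradient descent on the squared output error, even though each synapse only ever sees the perturbation, its own input, and a single global error signal.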
Low-dimensional yet rich dynamics often emerge in the brain. Examples include oscillations and chaotic dynamics during sleep, epilepsy, and voluntary movement. However, a general mechanism for the emergence of low-dimensional dynamics remains elusive. Here, we consider Wilson-Cowan networks and demonstrate through numerical and analytical work that homeostatic regulation of the network firing rates can paradoxically lead to a rich dynamical repertoire. The dynamics include mixed-mode oscillations, mixed-mode chaos, and chaotic synchronization when the homeostatic plasticity operates on a moderately slower time scale than the firing rates. This is true for a single recurrently coupled node, pairs of reciprocally coupled nodes without self-coupling, and networks coupled through experimentally determined weights derived from functional magnetic resonance imaging data. In all cases, the stability of the homeostatic set point is analytically determined or approximated. The dynamics at the network level are directly determined by the behavior of a single node system through synchronization in both oscillatory and non-oscillatory states. Our results demonstrate that rich dynamics can be preserved under homeostatic regulation or even be caused by homeostatic regulation.

When recordings from the brain are analyzed, rich dynamics such as oscillations or low-dimensional chaos are often present. However, a general mechanism for how these dynamics emerge remains unresolved. Here, we explore the potential that these dynamics are caused by an interaction between synaptic homeostasis and the connectivity between distinct populations of neurons. Using both analytical and numerical approaches, we analyze how data-derived connection weights interact with inhibitory synaptic homeostasis to create rich dynamics such as chaos and oscillations operating on multiple time scales. We demonstrate that these rich dynamical states are present in simple systems such as a single population of neurons.
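The basic ingredient above, a Wilson-Cowan node whose inhibitory weight is slowly adjusted to hold the excitatory rate at a set point, can be sketched directly. All parameter values below are illustrative choices for a stable regime, not taken from the paper (which focuses on the regime where homeostasis is only moderately slower and rich dynamics appear):

```python
from math import exp

def f(u):
    # sigmoidal population activation
    return 1.0 / (1.0 + exp(-u))

# One Wilson-Cowan E-I node with slow homeostatic inhibitory plasticity:
# the inhibitory weight w_ei adapts so that the excitatory rate E settles
# at the set point rho.
dt = 0.1
tau_e, tau_i, tau_h = 1.0, 2.0, 100.0   # homeostasis ~100x slower than the rates
w_ee, w_ie, w_ii = 4.0, 10.0, 2.0
P_drive, Q_drive = 1.25, -2.0            # external drives to E and I
rho = 0.3                                # homeostatic set point for E
E, I, w_ei = 0.1, 0.1, 5.0

for _ in range(200_000):                 # 20000 time units of Euler integration
    dE = (-E + f(w_ee * E - w_ei * I + P_drive)) / tau_e
    dI = (-I + f(w_ie * E - w_ii * I + Q_drive)) / tau_i
    dw = I * (E - rho) / tau_h           # grow inhibition when E runs above rho
    E += dt * dE
    I += dt * dI
    w_ei = max(0.0, w_ei + dt * dw)

print(round(E, 3))                       # E is regulated to the set point rho
```

With homeostasis this slow the set point is stable; shrinking tau_h towards the rate time scales is where the paper locates the mixed-mode and chaotic regimes.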
Zannone S, Brzosko Z, Paulsen O, et al., 2018, Acetylcholine-modulated plasticity in reward-driven navigation: a computational study, Scientific Reports, Vol: 8, ISSN: 2045-2322
Neuromodulation plays a fundamental role in the acquisition of new behaviours. In previous experimental work, we showed that acetylcholine biases hippocampal synaptic plasticity towards depression, and the subsequent application of dopamine can retroactively convert depression into potentiation. We also demonstrated that incorporating this sequentially neuromodulated Spike-Timing-Dependent Plasticity (STDP) rule in a network model of navigation yields effective learning of changing reward locations. Here, we employ computational modelling to further characterize the effects of cholinergic depression on behaviour. We find that acetylcholine, by allowing learning from negative outcomes, enhances exploration over the action space. We show that this results in a variety of effects, depending on the structure of the model, the environment and the task. Interestingly, sequentially neuromodulated STDP also yields flexible learning, surpassing the performance of other reward-modulated plasticity rules.
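The sequential-neuromodulation logic above can be reduced to a toy illustration (scalar weights, hypothetical magnitudes, not the paper's full STDP model): a causal pre-before-post pairing under acetylcholine writes a depressing change onto an eligibility trace, and dopamine arriving later retroactively converts the pending depression into potentiation:

```python
def pairing_under_ach(w, trace):
    # ACh biases plasticity towards depression; the change is not applied
    # immediately but held on a synaptic eligibility trace
    return w, trace - 0.1

def consolidate(w, trace, dopamine):
    # dopamine retroactively flips the sign of the stored change
    if dopamine:
        trace = abs(trace)
    return w + trace, 0.0                # apply the change and clear the trace

w0 = 1.0
w0, tr = pairing_under_ach(w0, 0.0)
w_dep, _ = consolidate(w0, tr, dopamine=False)   # ACh alone: depression
w_pot, _ = consolidate(w0, tr, dopamine=True)    # ACh then DA: potentiation
print(w_dep, w_pot)   # -> 0.9 1.1
```

The eligibility trace is what makes the conversion retroactive: the outcome of the same pairing depends on a neuromodulatory signal that arrives only afterwards, which is what allows learning from delayed reward in the navigation model.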
Sollini J, Chapuis GA, Clopath C, et al., 2018, ON-OFF receptive fields in auditory cortex diverge during development and contribute to directional sweep selectivity, Nature Communications, Vol: 9, ISSN: 2041-1723
Neurons in the auditory cortex exhibit distinct frequency tuning to the onset and offset of sounds, but the cause and significance of ON and OFF receptive field (RF) organisation are not understood. Here we demonstrate that distinct ON and OFF frequency tuning is largely absent in immature mouse auditory cortex and is thus a consequence of cortical development. Simulations using a novel implementation of a standard Hebbian plasticity model show that the natural alternation of sound onset and offset is sufficient for the formation of non-overlapping adjacent ON and OFF RFs in cortical neurons. Our model predicts that ON/OFF RF arrangement contributes towards direction selectivity to frequency-modulated tone sweeps, which we confirm by neuronal recordings. These data reveal that a simple and universally accepted learning rule can explain the organisation of ON and OFF RFs and direction selectivity in the developing auditory cortex.
González Rueda A, Pedrosa V, Feord R, et al., 2018, Activity-dependent downscaling of subthreshold synaptic inputs during slow-wave-sleep-like activity in vivo, Neuron, Vol: 97, Pages: 1244-1252.e5, ISSN: 0896-6273
Activity-dependent synaptic plasticity is critical for cortical circuit refinement. The synaptic homeostasis hypothesis suggests that synaptic connections are strengthened during wake and downscaled during sleep; however, it is not obvious how the same plasticity rules could explain both outcomes. Using whole-cell recordings and optogenetic stimulation of presynaptic input in urethane-anesthetized mice, which exhibit slow-wave-sleep (SWS)-like activity, we show that synaptic plasticity rules are gated by cortical dynamics in vivo. While Down states support conventional spike timing-dependent plasticity, Up states are biased toward depression such that presynaptic stimulation alone leads to synaptic depression, while connections contributing to postsynaptic spiking are protected against this synaptic weakening. We find that this novel activity-dependent and input-specific downscaling mechanism has two important computational advantages: (1) improved signal-to-noise ratio, and (2) preservation of previously stored information. Thus, these synaptic plasticity rules provide an attractive mechanism for SWS-related synaptic downscaling and circuit refinement.
Pernelle G, Nicola W, Clopath C, 2018, Gap junction plasticity as a mechanism to regulate network-wide oscillations, PLoS Computational Biology, Vol: 14, ISSN: 1553-734X
Cortical oscillations are thought to be involved in many cognitive functions and processes. Several mechanisms have been proposed to regulate oscillations. One prominent but understudied mechanism is gap junction coupling. Gap junctions are ubiquitous in cortex between GABAergic interneurons. Moreover, recent experiments indicate their strength can be modified in an activity-dependent manner, similar to chemical synapses. We hypothesized that activity-dependent gap junction plasticity acts as a mechanism to regulate oscillations in the cortex. We developed a computational model of gap junction plasticity in a recurrent cortical network based on recent experimental findings. We showed that gap junction plasticity can serve as a homeostatic mechanism for oscillations by maintaining a tight balance between two network states: asynchronous irregular activity and synchronized oscillations. This homeostatic mechanism allows for robust communication between neuronal assemblies through two different mechanisms: transient oscillations and frequency modulation. This implies a direct functional role for gap junction plasticity in information transmission in cortex.
Sammons RP, Clopath C, Barnes SJ, 2018, Size-dependent axonal bouton dynamics following visual deprivation in vivo, Cell Reports, Vol: 22, Pages: 576-584, ISSN: 2211-1247
Persistent synapses are thought to underpin the storage of sensory experience. Yet, little is known about their structural plasticity in vivo. We investigated how persistent presynaptic structures respond to the loss of primary sensory input. Using in vivo two-photon (2-P) imaging we measured fluctuations in the size of excitatory axonal boutons in L2/3 of adult mouse visual cortex after monocular enucleation. The average size of boutons did not change after deprivation, but the range of bouton sizes was reduced. Large boutons decreased and small boutons increased. Reduced bouton variance was accompanied by a reduced range of correlated calcium-mediated neural activity in L2/3 of awake animals. Network simulations predicted that size-dependent plasticity may promote conditions of greater bidirectional plasticity. These predictions were supported by electrophysiological measures of short- and long-term plasticity. We propose size-dependent dynamics facilitate cortical reorganization by maximising the potential for bidirectional plasticity.
Kaplanis C, Shanahan M, Clopath C, 2018, Continual reinforcement learning with complex synapses, Pages: 3893-3902
Unlike humans, who are capable of continual learning over their lifetimes, artificial neural networks have long been known to suffer from a phenomenon known as catastrophic forgetting, whereby new learning can lead to abrupt erasure of previously acquired knowledge. Whereas in a neural network the parameters are typically modelled as scalar values, an individual synapse in the brain comprises a complex network of interacting biochemical components that evolve at different timescales. In this paper, we show that by equipping tabular and deep reinforcement learning agents with a synaptic model that incorporates this biological complexity (Benna & Fusi, 2016), catastrophic forgetting can be mitigated at multiple timescales. In particular, we find that as well as enabling continual learning across sequential training of two simple tasks, it can also be used to overcome within-task forgetting by reducing the need for an experience replay database.
Nicola W, Clopath C, 2017, Supervised Learning in Spiking Neural Networks with FORCE Training, Nature Communications, Vol: 8, ISSN: 2041-1723
Populations of neurons display an extraordinary diversity in the behaviors they affect and display. Machine learning techniques have recently emerged that allow us to create networks of model neurons that display behaviors of similar complexity. Here we demonstrate the direct applicability of one such technique, the FORCE method, to spiking neural networks. We train these networks to mimic dynamical systems, classify inputs, and store discrete sequences that correspond to the notes of a song. Finally, we use FORCE training to create two biologically motivated model circuits. One is inspired by the zebra finch and successfully reproduces songbird singing. The second network is motivated by the hippocampus and is trained to store and replay a movie scene. FORCE trained networks reproduce behaviors comparable in complexity to their inspired circuits and yield information not easily obtainable with other techniques, such as behavioral responses to pharmacological manipulations and spike timing statistics.
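The paper's contribution is extending FORCE to spiking neurons; the mechanics of the method itself can be sketched in a rate-based form, in the spirit of the original Sussillo-Abbott recipe. This is a minimal sketch with illustrative parameters: a chaotic random network is trained online, via recursive least squares on its linear readout, to reproduce a target signal that is fed back into the network:

```python
import numpy as np

rng = np.random.default_rng(1)
N, dt, tau, g = 300, 0.01, 0.1, 1.5
J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # static chaotic recurrence
w = np.zeros(N)                                    # readout weights (only trained part)
eta = rng.uniform(-1.0, 1.0, N)                    # feedback weights for the readout z
P = np.eye(N)                                      # running inverse correlation (RLS)
x = 0.5 * rng.standard_normal(N)

def target(t):
    return np.sin(2.0 * np.pi * t)

t, T_train, T_test = 0.0, 20.0, 2.0
while t < T_train:                                 # FORCE: suppress the error online
    r = np.tanh(x)
    z = w @ r
    x += dt * (-x + J @ r + eta * z) / tau
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)                        # RLS gain
    P -= np.outer(k, Pr)
    w -= (z - target(t)) * k                       # update the readout every step
    t += dt

sq_err, n = 0.0, 0
while t < T_train + T_test:                        # test with readout weights frozen
    r = np.tanh(x)
    z = w @ r
    x += dt * (-x + J @ r + eta * z) / tau
    sq_err += (z - target(t)) ** 2
    n += 1
    t += dt
rmse = (sq_err / n) ** 0.5
print(rmse)                                        # well below the target's RMS (~0.71)
```

The defining feature of FORCE is that the readout error is clamped to be small from the very first training step, so the fed-back signal the network experiences during learning already resembles the target.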
Barnes SJ, Franzoni E, Jacobsen RI, et al., 2017, Deprivation-induced homeostatic spine scaling in vivo is localized to dendritic branches that have undergone recent spine loss, Neuron, Vol: 96, Pages: 871-882.e5, ISSN: 0896-6273
Synaptic scaling is a key homeostatic plasticity mechanism and is thought to be involved in the regulation of cortical activity levels. Here we investigated the spatial scale of homeostatic changes in spine size following sensory deprivation in a subset of inhibitory (layer 2/3 GAD65-positive) and excitatory (layer 5 Thy1-positive) neurons in mouse visual cortex. Using repeated in vivo two-photon imaging, we find that increases in spine size are tumor necrosis factor alpha (TNF-α) dependent and thus are likely associated with synaptic scaling. Rather than occurring at all spines, the observed increases in spine size are spatially localized to a subset of dendritic branches and are correlated with the degree of recent local spine loss within that branch. Using simulations, we show that such a compartmentalized form of synaptic scaling has computational benefits over cell-wide scaling for information processing within the cell.
Cayco-Gajic NA, Clopath C, Silver RA, 2017, Sparse synaptic connectivity is required for decorrelation and pattern separation in feedforward networks, Nature Communications, Vol: 8, ISSN: 2041-1723
Pattern separation is a fundamental function of the brain. The divergent feedforward networks thought to underlie this computation are widespread, yet exhibit remarkably similar sparse synaptic connectivity. Marr-Albus theory postulates that such networks separate overlapping activity patterns by mapping them onto larger numbers of sparsely active neurons. But spatial correlations in synaptic input and those introduced by network connectivity are likely to compromise performance. To investigate the structural and functional determinants of pattern separation we built models of the cerebellar input layer with spatially correlated input patterns, and systematically varied their synaptic connectivity. Performance was quantified by the learning speed of a classifier trained on either the input or output patterns. Our results show that sparse synaptic connectivity is essential for separating spatially correlated input patterns over a wide range of network activity, and that expansion and correlations, rather than sparse activity, are the major determinants of pattern separation.
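The Marr-Albus expansion recoding invoked above can be demonstrated in miniature (hypothetical parameters, not the paper's cerebellar input-layer model): two correlated input patterns are mapped through a divergent layer in which each output unit receives only a few synapses, and thresholding then yields sparse expanded codes whose pairwise correlation is lower than that of the inputs:

```python
import numpy as np

rng = np.random.default_rng(3)
n_in, n_out, k = 50, 500, 4              # divergent layer; each output samples k inputs
base = rng.random(n_in)
pats = np.clip(base + 0.3 * rng.standard_normal((2, n_in)), 0.0, 1.0)  # correlated pair

W = np.zeros((n_out, n_in))              # sparse, random feedforward connectivity
for i in range(n_out):
    W[i, rng.choice(n_in, size=k, replace=False)] = 1.0

drive = pats @ W.T                       # (2, n_out) summed synaptic drive
thresh = np.quantile(drive, 0.9, axis=1, keepdims=True)
out = (drive > thresh).astype(float)     # threshold -> sparse expanded codes

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

print(corr(pats[0], pats[1]), ">", corr(out[0], out[1]))  # expansion decorrelates
```

In the paper, performance is quantified by the learning speed of a downstream classifier; the decorrelation shown here is the property that makes the expanded patterns easier to separate.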
Bono J, Clopath C, 2017, Modelling somatic and dendritic spike mediated plasticity at the single neuron and network level, Nature Communications, Vol: 8, ISSN: 2041-1723
Synaptic plasticity is thought to be the principal neuronal mechanism underlying learning. Models of plastic networks typically combine point neurons with spike-timing-dependent plasticity (STDP) as the learning rule. However, a point neuron does not capture the local non-linear processing of synaptic inputs allowed for by dendrites. Furthermore, experimental evidence suggests that STDP is not the only learning rule available to neurons. By implementing biophysically realistic neuron models, we study how dendrites enable multiple synaptic plasticity mechanisms to coexist in a single cell. In these models, we compare the conditions for STDP and for synaptic strengthening by local dendritic spikes. We also explore how the connectivity between two cells is affected by these plasticity rules and by different synaptic distributions. Finally, we show how memory retention during associative learning can be prolonged in networks of neurons by including dendrites.
Bono J, Wilmes K, Clopath C, 2017, Modelling plasticity in dendrites: from single cells to networks, Current Opinion in Neurobiology, Vol: 46, Pages: 136-141, ISSN: 0959-4388
One of the key questions in neuroscience is how our brain self-organises to efficiently process information. To answer this question, we need to understand the underlying mechanisms of plasticity and their role in shaping synaptic connectivity. Theoretical neuroscience typically investigates plasticity on the level of neural networks. Neural network models often consist of point neurons, completely neglecting neuronal morphology for reasons of simplicity. However, during the past decades it became increasingly clear that inputs are locally processed in the dendrites before they reach the cell body. Dendritic properties enable local interactions between synapses and location-dependent modulations of inputs, rendering the position of synapses on dendrites highly important. These insights changed our view of neurons, such that we now think of them as small networks of nearly independent subunits instead of a simple point. Here, we propose that understanding how the brain processes information strongly requires that we consider the following properties: which plasticity mechanisms are present in the dendrites and how do they enable the self-organisation of synapses across the dendritic tree for efficient information processing? Ultimately, dendritic plasticity mechanisms can be studied in networks of neurons with dendrites, possibly uncovering unknown mechanisms that shape the connectivity in our brains.
Bass C, Helkkula P, De Paola V, et al., 2017, Detection of axonal synapses in 3D two-photon images, PLoS One, Vol: 12, Pages: 1-18, ISSN: 1932-6203
Studies of structural plasticity in the brain often require the detection and analysis of axonal synapses (boutons). To date, bouton detection has been largely manual or semi-automated, relying on a step that traces the axons before detecting the boutons. If tracing the axon fails, the accuracy of bouton detection is compromised. In this paper, we propose a new algorithm that does not require tracing the axon to detect axonal boutons in 3D two-photon images taken from the mouse cortex. To find the most appropriate techniques for this task, we compared several well-known algorithms for interest point detection and feature descriptor generation. The final algorithm proposed has the following main steps: (1) a Laplacian of Gaussian (LoG) based feature enhancement module to accentuate the appearance of boutons; (2) a Speeded Up Robust Features (SURF) interest point detector to find candidate locations for feature extraction; (3) non-maximum suppression to eliminate candidates that were detected more than once in the same local region; (4) generation of feature descriptors based on Gabor filters; (5) a Support Vector Machine (SVM) classifier, trained on features from labelled data, used to distinguish between bouton and non-bouton candidates. We found that our method achieved a Recall of 95%, Precision of 76%, and F1 score of 84% within a new dataset that we make available for assessing bouton detection. On average, Recall and F1 score were significantly better than the current state-of-the-art method, while Precision was not significantly different. In conclusion, in this article we demonstrate that our approach, which is independent of axon tracing, can detect boutons to a high level of accuracy, and improves on the detection performance of existing approaches. The data and code (with an easy to use GUI) used in this article are available from open source repositories.
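The five-stage flow above can be sketched on a synthetic 3D stack. This is a schematic illustration of the pipeline only: the SURF and Gabor stages are replaced here by simple intensity statistics, so it shows the structure of the approach, not the paper's detector:

```python
import numpy as np
from scipy.ndimage import gaussian_laplace, maximum_filter
from sklearn.svm import SVC

rng = np.random.default_rng(0)
shape = (20, 64, 64)
img = rng.normal(0.0, 0.05, size=shape)              # noisy synthetic stack
boutons = [(10, 16, 16), (10, 48, 40)]               # ground-truth blob centres
zz, yy, xx = np.indices(shape)
for bz, by, bx in boutons:
    img += np.exp(-((zz - bz)**2 + (yy - by)**2 + (xx - bx)**2) / (2 * 2.0**2))

# (1) LoG enhancement (negated so bright blobs become maxima)
log = -gaussian_laplace(img, sigma=2.0)

# (2)+(3) candidates = thresholded local maxima (non-maximum suppression)
peaks = (log == maximum_filter(log, size=5)) & (log > 0.05)
cands = np.argwhere(peaks)

def descr(p):
    # (4) toy descriptor: mean/max intensity in a 5x5x5 patch around p
    pz, py, px = p
    patch = img[max(pz-2, 0):pz+3, max(py-2, 0):py+3, max(px-2, 0):px+3]
    return [patch.mean(), patch.max()]

# (5) SVM trained on labelled locations (bouton vs. background)
pos = [np.add(b, d) for b in boutons for d in [(0, 0, 0), (0, 1, 0), (0, 0, 1)]]
neg = [p for p in (rng.integers((3, 3, 3), (17, 61, 61)) for _ in range(20))
       if all(np.sum((p - np.array(b))**2) > 64 for b in boutons)]
clf = SVC().fit([descr(p) for p in pos + neg], [1]*len(pos) + [0]*len(neg))

detected = [tuple(int(v) for v in p) for p in cands
            if clf.predict([descr(p)])[0] == 1]
print(sorted(detected))                              # the two bouton centres
```

The key property mirrored from the paper is that detection runs directly on the volume, with no axon-tracing step anywhere in the pipeline.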
Hellyer P, Clopath C, Kehagia A, et al., 2017, From homeostasis to behavior: Balanced activity in an exploration of embodied dynamic environmental-neural interaction, PLoS Computational Biology, Vol: 13, ISSN: 1553-734X
In recent years, there have been many computational simulations of spontaneous neural dynamics. Here, we describe a simple model of spontaneous neural dynamics that controls an agent moving in a simple virtual environment. These dynamics generate interesting brain-environment feedback interactions that rapidly destabilize neural and behavioral dynamics demonstrating the need for homeostatic mechanisms. We investigate roles for homeostatic plasticity both locally (local inhibition adjusting to balance excitatory input) as well as more globally (regional “task negative” activity that compensates for “task positive”, sensory input in another region) balancing neural activity and leading to more stable behavior (trajectories through the environment). Our results suggest complementary functional roles for both local and macroscale mechanisms in maintaining neural and behavioral dynamics and a novel functional role for macroscopic “task-negative” patterns of activity (e.g., the default mode network).
This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.