Maes A, Barahona M, Clopath C, 2023, Long- and short-term history effects in a spiking network model of statistical learning, Scientific Reports, Vol: 13, Pages: 1-14, ISSN: 2045-2322
The statistical structure of the environment is often important when making decisions. There are multiple theories of how the brain represents statistical structure. One such theory states that neural activity spontaneously samples from probability distributions. In other words, the network spends more time in states which encode high-probability stimuli. Starting from the neural assembly, increasingly thought of as the building block for computation in the brain, we focus on how arbitrary prior knowledge about the external world can both be learned and spontaneously recollected. We present a model based upon learning the inverse of the cumulative distribution function. Learning is entirely unsupervised, using biophysical neurons and biologically plausible learning rules. We show how this prior knowledge can then be accessed to compute expectations and signal surprise in downstream networks. Sensory history effects emerge from the model as a consequence of ongoing learning.
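The sampling principle this model is built on, a learned inverse cumulative distribution function mapping uniform spontaneous activity to high-probability states, can be sketched outside the spiking setting. This is a minimal illustration of the idea, not the paper's network; the distribution and all names are invented for the example.

```python
import numpy as np

# Sketch of inverse-CDF sampling: uniform noise fed through a learned
# inverse CDF yields states in proportion to their probability, so the
# "network" spends more time in states encoding likely stimuli.
rng = np.random.default_rng(0)

stimuli = np.arange(4)
probs = np.array([0.1, 0.2, 0.3, 0.4])   # illustrative stimulus probabilities
cdf = np.cumsum(probs)

def inverse_cdf(u):
    """Map uniform samples u in [0, 1) to stimuli via the inverse CDF."""
    return stimuli[np.searchsorted(cdf, u)]

samples = inverse_cdf(rng.uniform(size=100_000))
freq = np.bincount(samples, minlength=4) / samples.size
# freq closely tracks probs
```

The same lookup works for any discrete distribution; only `probs` changes.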
Gallinaro J, Scholl B, Clopath C, 2023, Synaptic weights that correlate with presynaptic selectivity increase decoding performance, PLoS Computational Biology, Vol: 19, Pages: 1-18, ISSN: 1553-734X
The activity of neurons in the visual cortex is often characterized by tuning curves, which are thought to be shaped by Hebbian plasticity during development and sensory experience. This leads to the prediction that neural circuits should be organized such that neurons with similar functional preference are connected with stronger weights. In support of this idea, previous experimental and theoretical work has provided evidence for a model of the visual cortex characterized by such functional subnetworks. A recent experimental study, however, has found that the postsynaptic preferred stimulus was defined by the total number of spines activated by a given stimulus and independent of their individual strength. While this result might seem to contradict previous literature, there are many factors that define how a given synaptic input influences postsynaptic selectivity. Here, we designed a computational model in which postsynaptic functional preference is defined by the number of inputs activated by a given stimulus. Using a plasticity rule in which synaptic weights tend to correlate with presynaptic selectivity, independently of the functional similarity between pre- and postsynaptic activity, we find that this model can be used to decode presented stimuli in a manner that is comparable to maximum likelihood inference.
Rigby M, Grillo FW, Compans B, et al., 2023, Multi-synaptic boutons are a feature of CA1 hippocampal connections in the stratum oriens, Cell Reports, Vol: 42, ISSN: 2211-1247
Excitatory synapses are typically described as single synaptic boutons (SSBs), where one presynaptic bouton contacts a single postsynaptic spine. Using serial section block-face scanning electron microscopy, we found that this textbook definition of the synapse does not fully apply to the CA1 region of the hippocampus. Roughly half of all excitatory synapses in the stratum oriens involved multi-synaptic boutons (MSBs), where a single presynaptic bouton containing multiple active zones contacted many postsynaptic spines (from 2 to 7) on the basal dendrites of different cells. The fraction of MSBs increased during development (from postnatal day 22 [P22] to P100) and decreased with distance from the soma. Curiously, synaptic properties such as active zone (AZ) or postsynaptic density (PSD) size exhibited less within-MSB variation when compared with neighboring SSBs, features that were confirmed by super-resolution light microscopy. Computer simulations suggest that these properties favor synchronous activity in CA1 networks.
Wilmes K, Clopath C, 2023, Dendrites help mitigate the plasticity-stability dilemma, Scientific Reports, Vol: 13, Pages: 1-15, ISSN: 2045-2322
Hebbian learning, in which neurons that 'fire together wire together', suffers from well-known problems: it can cause unstable network dynamics and overwrite stored memories. Because the known homeostatic plasticity mechanisms tend to be too slow to combat unstable dynamics, it has been proposed that plasticity must be highly gated and synaptic strengths limited. While solving the issue of stability, gating and limiting plasticity does not solve the stability-plasticity dilemma. We propose that dendrites enable both stable network dynamics and considerable synaptic changes, as they allow the gating of plasticity in a compartment-specific manner. We investigate how gating plasticity influences network stability in plastic balanced spiking networks of neurons with dendrites. We compare how different ways to gate plasticity, namely via modulating excitability, learning rate, and inhibition, increase stability. We investigate how dendritic versus perisomatic gating allows for different amounts of weight changes in stable networks. We suggest that the compartmentalisation of pyramidal cells enables dendritic synaptic changes while maintaining stability. We show that the coupling between dendrite and soma is critical for the plasticity-stability trade-off. Finally, we show that spatially restricted plasticity additionally improves stability.
Delamare G, Zaki Y, Cai DJ, et al., 2023, Drift of neural ensembles driven by slow fluctuations of intrinsic excitability, eLife, ISSN: 2050-084X
Representational drift refers to the dynamic nature of neural representations in the brain despite the behavior being seemingly stable. Although drift has been observed in many different brain regions, the mechanisms underlying it are not known. Since intrinsic neural excitability is suggested to play a key role in regulating memory allocation, fluctuations of excitability could bias the reactivation of previously stored memory ensembles and therefore act as a motor for drift. Here, we propose a rate-based plastic recurrent neural network with slow fluctuations of intrinsic excitability. We first show that subsequent reactivations of a neural ensemble can lead to drift of this ensemble. The model predicts that drift is induced by co-activation of previously active neurons along with neurons with high excitability, which leads to remodelling of the recurrent weights. Consistent with previous experimental works, the drifting ensemble is informative about its temporal history. Crucially, we show that the gradual nature of the drift is necessary for decoding temporal information from the activity of the ensemble. Finally, we show that the memory is preserved and can be decoded by an output neuron having plastic synapses with the main region.
Zador A, Escola S, Richards B, et al., 2023, Catalyzing next-generation Artificial Intelligence through NeuroAI, Nature Communications, Vol: 14, Pages: 1-7, ISSN: 2041-1723
Neuroscience has long been an essential driver of progress in artificial intelligence (AI). We propose that to accelerate progress in AI, we must invest in fundamental research in NeuroAI. A core component of this is the embodied Turing test, which challenges AI animal models to interact with the sensorimotor world at skill levels akin to their living counterparts. The embodied Turing test shifts the focus from those capabilities like game playing and language that are especially well-developed or uniquely human to those capabilities - inherited from over 500 million years of evolution - that are shared with all animals. Building models that can pass the embodied Turing test will provide a roadmap for the next generation of AI.
Bono J, Zannone S, Pedrosa V, et al., 2023, Learning predictive cognitive maps with spiking neurons during behaviour and replays, eLife, Vol: 12, ISSN: 2050-084X
The hippocampus has been proposed to encode environments using a representation that contains predictive information about likely future states, called the successor representation. However, it is not clear how such a representation could be learned in the hippocampal circuit. Here, we propose a plasticity rule that can learn this predictive map of the environment using a spiking neural network. We connect this biologically plausible plasticity rule to reinforcement learning, mathematically and numerically showing that it implements the TD-lambda algorithm. By spanning these different levels, we show how our framework naturally encompasses behavioral activity and replays, smoothly moving from rate to temporal coding, and allows learning over behavioral timescales with a plasticity rule acting on a timescale of milliseconds. We discuss how biological parameters such as dwelling times at states, neuronal firing rates and neuromodulation relate to the delay discounting parameter of the TD algorithm, and how they influence the learned representation. We also find that, in agreement with psychological studies and contrary to reinforcement learning theory, the discount factor decreases hyperbolically with time. Finally, our framework suggests a role for replays, in both aiding learning in novel environments and finding shortcut trajectories that were not experienced during behavior, in agreement with experimental data.
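The link to TD(λ) drawn in this abstract can be illustrated with a conventional, non-spiking sketch: learning the successor representation M of a random walk with an eligibility trace. This is a hedged illustration of the algorithm the paper connects to, not the paper's plasticity rule; the ring environment and all parameter values are arbitrary.

```python
import numpy as np

# Tabular TD(lambda) sketch for learning a successor representation M
# on a ring of states, with one-hot state features and an eligibility
# trace over predecessor states. Illustrative parameters only.
n_states, gamma, lam, alpha = 5, 0.9, 0.8, 0.05
M = np.zeros((n_states, n_states))
e = np.zeros(n_states)                       # eligibility trace

rng = np.random.default_rng(1)
s = 0
for _ in range(50_000):
    s_next = (s + rng.choice([-1, 1])) % n_states   # unbiased ring walk
    e = gamma * lam * e
    e[s] += 1.0
    onehot = np.eye(n_states)[s]
    delta = onehot + gamma * M[s_next] - M[s]       # vector TD error
    M += alpha * np.outer(e, delta)
    s = s_next

# Each row of M estimates expected discounted future state occupancies,
# so every row sums to roughly 1 / (1 - gamma) = 10, and the diagonal
# (immediate self-occupancy) is the largest entry of each row.
```

With λ = 0 this reduces to one-step TD; the trace lets distant predecessors share credit, which is the property the paper maps onto behavioural timescales.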
Fuchsberger T, Clopath C, Jarzebowski P, et al., 2022, Postsynaptic burst reactivation of hippocampal neurons enables associative plasticity of temporally discontiguous inputs, eLife, Vol: 11, ISSN: 2050-084X
Feulner B, Perich MG, Chowdhury RH, et al., 2022, Small, correlated changes in synaptic connectivity may facilitate rapid motor learning, Nature Communications, Vol: 13, ISSN: 2041-1723
Animals can rapidly adapt their movements to external perturbations. This adaptation is paralleled by changes in single neuron activity in the motor cortices. Behavioural and neural recording studies suggest that when animals learn to counteract a visuomotor perturbation, these changes originate from altered inputs to the motor cortices rather than from changes in local connectivity, as neural covariance is largely preserved during adaptation. Since measuring synaptic changes in vivo remains very challenging, we used a modular recurrent network model to compare the expected neural activity changes following learning through altered inputs (Hinput) and learning through local connectivity changes (Hlocal). Learning under Hinput produced small changes in neural activity and largely preserved the neural covariance, in good agreement with neural recordings in monkeys. Surprisingly, given the presumed dependence of stable neural covariance on preserved circuit connectivity, Hlocal led to only slightly larger changes in neural activity and covariance compared to Hinput. This similarity is due to Hlocal only requiring small, correlated connectivity changes to counteract the perturbation, which provided the network with significant robustness against simulated synaptic noise. Simulations of tasks that impose increasingly larger behavioural changes revealed a growing difference between Hinput and Hlocal, which could be exploited when designing future experiments.
Sadeh S, Clopath C, 2022, Contribution of behavioural variability to representational drift, eLife, Vol: 11, Pages: 1-28, ISSN: 2050-084X
Neuronal responses to similar stimuli change dynamically over time, raising the question of how internal representations can provide a stable substrate for neural coding. Recent work has suggested a large degree of drift in neural representations even in sensory cortices, which are believed to store stable representations of the external world. While the drift of these representations is mostly characterized in relation to external stimuli, the behavioural state of the animal (for instance, the level of arousal) is also known to strongly modulate neural activity. We therefore asked how the variability of such modulatory mechanisms can contribute to representational changes. We analysed large-scale recordings of neural activity from the Allen Brain Observatory, which were used before to document representational drift in the mouse visual cortex. We found that, within these datasets, behavioural variability significantly contributes to representational changes. This effect was broadcast across various cortical areas in the mouse, including the primary visual cortex, higher-order visual areas, and even regions not primarily linked to vision, such as the hippocampus. Our computational modelling suggests that these results are consistent with independent modulation of neural activity by behaviour over slower time scales. Importantly, our analysis suggests that reliable but variable modulation of neural representations by behaviour can be misinterpreted as representational drift if neuronal representations are only characterized in the stimulus space and marginalised over behavioural parameters.
Hashemi P, Clopath C, Reneaux M, et al., 2022, A tale of two transmitters: serotonin and histamine as in vivo biomarkers of chronic stress in mice, Journal of Neuroinflammation, Vol: 19, ISSN: 1742-2094
Background: Stress-induced mental illnesses (mediated by neuroinflammation) pose one of the world's most urgent public health challenges. A reliable in vivo chemical biomarker of stress would significantly improve the clinical communities' diagnostic and therapeutic approaches to illnesses like depression. Methods: Male and female C57BL/6J mice underwent a chronic stress paradigm. We paired innovative in vivo serotonin and histamine voltammetric measurement technologies, behavioral testing, and cutting-edge mathematical methods to correlate chemistry to stress and behavior. Results: Inflammation-induced increases in hypothalamic histamine were co-measured with decreased in vivo extracellular hippocampal serotonin in mice that underwent a chronic stress paradigm, regardless of behavioral phenotype. In animals with depression phenotypes, correlations were found between serotonin and the extent of behavioral indices of depression. We created a high-accuracy algorithm that could predict whether animals had been exposed to stress or not based solely on the serotonin measurement. We next developed a model of serotonin and histamine modulation, which predicted that stress-induced neuroinflammation increases histaminergic activity, serving to inhibit serotonin. Finally, we created a mathematical index of stress, Si, and predicted that during chronic stress, when Si is high, simultaneously increasing serotonin and decreasing histamine is the most effective chemical strategy for restoring serotonin to pre-stress levels. When we pursued this idea pharmacologically, our experimental results closely matched the model's predictions. Conclusions: This work shines a light on two biomarkers of chronic stress, histamine and serotonin, and implies that both may be important in our future investigations of the pathology and treatment of inflammation-induced depression.
Wert-Carvajal C, Reneaux M, Tchumatchenko T, et al., 2022, Dopamine and serotonin interplay for valence-based spatial learning, Cell Reports, Vol: 39, ISSN: 2211-1247
Hertäg L, Clopath C, 2022, Prediction-error neurons in circuits with multiple neuron types: formation, refinement, and functional implications, Proceedings of the National Academy of Sciences of USA, Vol: 119, Pages: e2115699119, ISSN: 0027-8424
Significance: An influential idea in neuroscience is that neural circuits do not only passively process sensory information but rather actively compare it with predictions thereof. A core element of this comparison is prediction-error neurons, whose activity only changes upon mismatches between actual and predicted sensory stimuli. While it has been shown that these prediction-error neurons come in different variants, it is largely unresolved how they are simultaneously formed and shaped by highly interconnected neural networks. By using a computational model, we study the circuit-level mechanisms that give rise to different variants of prediction-error neurons. Our results shed light on the formation, refinement, and robustness of prediction-error circuits, an important step toward a better understanding of predictive processing.
Wood KC, Angeloni CF, Oxman K, et al., 2022, Neuronal activity in sensory cortex predicts the specificity of learning in mice, Nature Communications, Vol: 13, ISSN: 2041-1723
Learning to avoid dangerous signals while preserving normal responses to safe stimuli is essential for everyday behavior and survival. Following identical experiences, subjects exhibit fear specificity ranging from high (specializing fear to only the dangerous stimulus) to low (generalizing fear to safe stimuli), yet the neuronal basis of fear specificity remains unknown. Here, we identified the neuronal code that underlies inter-subject variability in fear specificity using longitudinal imaging of neuronal activity before and after differential fear conditioning in the auditory cortex of mice. Neuronal activity prior to, but not after learning predicted the level of specificity following fear conditioning across subjects. Stimulus representation in auditory cortex was reorganized following conditioning. However, the reorganized neuronal activity did not relate to the specificity of learning. These results present a novel neuronal code that determines individual patterns in learning.
Poort J, Wilmes KA, Blot A, et al., 2022, Learning and attention increase visual response selectivity through distinct mechanisms., Neuron, Vol: 110, Pages: 689-697.e6, ISSN: 0896-6273
Selectivity of cortical neurons for sensory stimuli can increase across days as animals learn their behavioral relevance and across seconds when animals switch attention. While both phenomena occur in the same circuit, it is unknown whether they rely on similar mechanisms. We imaged primary visual cortex as mice learned a visual discrimination task and subsequently performed an attention switching task. Selectivity changes due to learning and attention were uncorrelated in individual neurons. Selectivity increases after learning mainly arose from selective suppression of responses to one of the stimuli but from selective enhancement and suppression during attention. Learning and attention differentially affected interactions between excitatory and PV, SOM, and VIP inhibitory cells. Circuit modeling revealed that cell class-specific top-down inputs best explained attentional modulation, while reorganization of local functional connectivity accounted for learning-related changes. Thus, distinct mechanisms underlie increased discriminability of relevant sensory stimuli across longer and shorter timescales.
Feitosa Tome D, Sadeh S, Clopath C, 2022, Coordinated hippocampal-thalamic-cortical communication crucial for engram dynamics underneath systems consolidation, Nature Communications, Vol: 13, ISSN: 2041-1723
Systems consolidation refers to the time-dependent reorganization of memory representations or engrams across brain regions. Despite recent advancements in unravelling this process, the exact mechanisms behind engram dynamics and the role of associated pathways remain largely unknown. Here we propose a biologically-plausible computational model to address this knowledge gap. By coordinating synaptic plasticity timescales and incorporating a hippocampus-thalamus-cortex circuit, our model is able to couple engram reactivations across these regions and thereby reproduce key dynamics of cortical and hippocampal engram cells along with their interdependencies. Decoupling hippocampal-thalamic-cortical activity disrupts systems consolidation. Critically, our model yields testable predictions regarding hippocampal and thalamic engram cells, inhibitory engrams, thalamic inhibitory input, and the effect of thalamocortical synaptic coupling on retrograde amnesia induced by hippocampal lesions. Overall, our results suggest that systems consolidation emerges from coupled reactivations of engram cells in distributed brain regions enabled by coordinated synaptic plasticity timescales in multisynaptic subcortical-cortical circuits.
Boboeva V, Clopath C, 2021, Free recall scaling laws and short-term memory effects in a latching attractor network, Proceedings of the National Academy of Sciences of USA, Vol: 118, Pages: 1-10, ISSN: 0027-8424
Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behaviour of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely serial position effects, contiguity and forward asymmetry effects, as well as the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates and (continuous/end-of-list) distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example in the form of weak random stimuli during recall. Finally, we predict that although the statistics of the encoded memories have a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.
Geiller T, Sadeh S, Clopath C, et al., 2021, Local circuit amplification of spatial selectivity in the hippocampus, Nature, Vol: 601, Pages: 105-109, ISSN: 0028-0836
Local circuit architecture facilitates the emergence of feature selectivity in the cerebral cortex. In the hippocampus, it remains unknown whether local computations supported by specific connectivity motifs regulate the spatial receptive fields of pyramidal cells. Here we developed an in vivo electroporation method for monosynaptic retrograde tracing and optogenetic manipulation at single-cell resolution to interrogate the dynamic interaction of place cells with their microcircuitry during navigation. We found a local circuit mechanism in CA1 whereby the spatial tuning of an individual place cell can propagate to a functionally recurrent subnetwork to which it belongs. The emergence of place fields in individual neurons led to the development of inverse selectivity in a subset of their presynaptic interneurons, and recruited functionally coupled place cells at that location. Thus, the spatial selectivity of single CA1 neurons is amplified through local circuit plasticity to enable effective multi-neuronal representations that can flexibly scale environmental features locally without degrading the feedforward input structure.
Gallinaro JV, Clopath C, 2021, Memories in a network with excitatory and inhibitory plasticity are encoded in the spiking irregularity, PLoS Computational Biology, Vol: 17, Pages: 1-19, ISSN: 1553-734X
Cell assemblies are thought to be the substrate of memory in the brain. Theoretical studies have previously shown that assemblies can be formed in networks with multiple types of plasticity. But how exactly they are formed and how they encode information is yet to be fully understood. One possibility is that memories are stored in silent assemblies. Here we used a computational model to study the formation of silent assemblies in a network of spiking neurons with excitatory and inhibitory plasticity. We found that even though the formed assemblies were silent in terms of mean firing rate, they had an increased coefficient of variation of inter-spike intervals. We also found that this spiking irregularity could be read out with support of short-term plasticity, and that it could contribute to the longevity of memories.
Sadeh S, Clopath C, 2021, Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks, Science Advances, Vol: 7, Pages: 1-16, ISSN: 2375-2548
Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of such assemblies feasible, yet how various parameters of perturbation can be optimized for such induction is not clear. We found that the regime of cortical networks in terms of their excitatory-inhibitory balance can modulate the formation and dynamics of assemblies. Networks with dominant excitatory interactions enabled a fast formation of assemblies, and this was accompanied by recruitment of other non-perturbed neurons, thus leading to some degree of nonspecific assembly formation. On the other hand, strong excitatory-inhibitory interaction recruited lateral inhibition, which slowed down the formation of assemblies but constrained them to the perturbed neurons. Our results suggest that these two regimes can be suitable for different computational and cognitive tasks with different trade-offs between speed and specificity. More generally, our work provides a framework to study network-wide behaviorally-relevant plasticity in biologically realistic networks.
Prince LY, Bacon T, Humphries R, et al., 2021, Separable actions of acetylcholine and noradrenaline on neuronal ensemble formation in hippocampal CA3 circuits, PLoS Computational Biology, Vol: 17, Pages: 1-37, ISSN: 1553-734X
In the hippocampus, episodic memories are thought to be encoded by the formation of ensembles of synaptically coupled CA3 pyramidal cells driven by sparse but powerful mossy fiber inputs from dentate gyrus granule cells. The neuromodulators acetylcholine and noradrenaline are separately proposed as saliency signals that dictate memory encoding but it is not known if they represent distinct signals with separate mechanisms. Here, we show experimentally that acetylcholine, and to a lesser extent noradrenaline, suppress feed-forward inhibition and enhance Excitatory–Inhibitory ratio in the mossy fiber pathway but CA3 recurrent network properties are only altered by acetylcholine. We explore the implications of these findings on CA3 ensemble formation using a hierarchy of models. In reconstructions of CA3 pyramidal cells, mossy fiber pathway disinhibition facilitates postsynaptic dendritic depolarization known to be required for synaptic plasticity at CA3-CA3 recurrent synapses. We further show in a spiking neural network model of CA3 how acetylcholine-specific network alterations can drive rapid overlapping ensemble formation. Thus, through these distinct sets of mechanisms, acetylcholine and noradrenaline facilitate the formation of neuronal ensembles in CA3 that encode salient episodic memories in the hippocampus but acetylcholine selectively enhances the density of memory storage.
Kaleb K, Pedrosa V, Clopath C, 2021, Network-centered homeostasis through inhibition maintains hippocampal spatial map and cortical circuit function, Cell Reports, Vol: 36, ISSN: 2211-1247
Gogianu F, Berariu T, Rosca M, et al., 2021, Spectral normalisation for deep reinforcement learning: an optimisation perspective, International Conference on Machine Learning (ICML), Publisher: JMLR (Journal of Machine Learning Research), Pages: 1-11, ISSN: 2640-3498
Most of the recent deep reinforcement learning advances take an RL-centric perspective and focus on refinements of the training objective. We diverge from this view and show we can recover the performance of these developments not by changing the objective, but by regularising the value-function estimator. Constraining the Lipschitz constant of a single layer using spectral normalisation is sufficient to elevate the performance of a Categorical-DQN agent to that of a more elaborate RAINBOW agent on the challenging Atari domain. We conduct ablation studies to disentangle the various effects normalisation has on the learning dynamics and show that it is sufficient to modulate the parameter updates to recover most of the performance of spectral normalisation. These findings hint towards the need to also focus on the neural component and its learning dynamics to tackle the peculiarities of deep reinforcement learning.
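The core operation behind spectral normalisation, dividing a weight matrix by an estimate of its largest singular value so the layer becomes roughly 1-Lipschitz, can be sketched in a few lines. This is a standalone illustration using standard power iteration, not the paper's training code; shapes and iteration counts are arbitrary.

```python
import numpy as np

# Spectral normalisation sketch: estimate the largest singular value of
# W by power iteration on W W^T, then rescale W by that estimate, which
# constrains the Lipschitz constant of the linear layer to about 1.
def spectral_normalise(W, n_iter=100):
    u = np.ones(W.shape[0]) / np.sqrt(W.shape[0])
    v = None
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # Rayleigh estimate of the top singular value
    return W / sigma

rng = np.random.default_rng(0)
W = rng.normal(size=(64, 128))
W_sn = spectral_normalise(W)
# np.linalg.norm(W_sn, 2) is now approximately 1
```

In practice (as in common deep learning implementations) the power-iteration vectors are carried over between training steps, so a single iteration per update suffices.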
Ang GWY, Tang CS, Hay YA, et al., 2021, The functional role of sequentially neuromodulated synaptic plasticity in behavioural learning, PLoS Computational Biology, Vol: 17, Pages: 1-22, ISSN: 1553-734X
To survive, animals have to quickly modify their behaviour when the reward changes. The internal representations responsible for this are updated through synaptic weight changes, mediated by certain neuromodulators conveying feedback from the environment. In previous experiments, we discovered a form of hippocampal Spike-Timing-Dependent-Plasticity (STDP) that is sequentially modulated by acetylcholine and dopamine. Acetylcholine facilitates synaptic depression, while dopamine retroactively converts the depression into potentiation. When these experimental findings were implemented as a learning rule in a computational model, our simulations showed that cholinergic-facilitated depression is important for reversal learning. In the present study, we tested the model’s prediction by optogenetically inactivating cholinergic neurons in mice during a hippocampus-dependent spatial learning task with changing rewards. We found that reversal learning, but not initial place learning, was impaired, verifying our computational prediction that acetylcholine-modulated plasticity promotes the unlearning of old reward locations. Further, differences in neuromodulator concentrations in the model captured mouse-by-mouse performance variability in the optogenetic experiments. Our line of work sheds light on how neuromodulators enable the learning of new contingencies.
Maes A, Barahona M, Clopath C, 2021, Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons, PLoS Computational Biology, Vol: 17, ISSN: 1553-734X
Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.
Zenke F, Bohté SM, Clopath C, et al., 2021, Visualizing a joint future of neuroscience and neuromorphic engineering, Neuron, Vol: 109, Pages: 571-575, ISSN: 0896-6273
Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.
Feulner B, Clopath C, 2021, Neural manifold under plasticity in a goal driven learning behaviour, PLoS Computational Biology, Vol: 17, Pages: 1-27, ISSN: 1553-734X
Neural activity is often low-dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called the neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.
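The "neural manifold" in this abstract is typically identified empirically by principal component analysis of population activity: a few components capture most of the covariance. The sketch below generates synthetic activity with three latent covariation patterns and recovers them via PCA; the dimensions, neuron counts, and noise level are arbitrary assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic population activity driven by 3 latent covariation patterns,
# embedded across 100 "recorded neurons" with a little private noise.
latents = rng.standard_normal((1000, 3))           # 1000 time bins, 3 latents
mixing = rng.standard_normal((3, 100))             # latent -> neuron loadings
activity = latents @ mixing + 0.05 * rng.standard_normal((1000, 100))

# PCA via SVD of the mean-centred activity matrix.
X = activity - activity.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = s**2 / np.sum(s**2)
top3 = var_explained[:3].sum()   # the first 3 PCs span the "manifold"
```

Within-manifold perturbations in the experiments the abstract cites correspond to remappings inside this leading subspace; outside-manifold perturbations require activity along the directions PCA assigns almost no variance.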
Sadeh S, Clopath C, 2020, Inhibitory stabilization and cortical computation, Nature Reviews Neuroscience, ISSN: 1471-003X
Sadeh S, Clopath C, 2020, Theory of neuronal perturbome in cortical networks., Proceedings of the National Academy of Sciences of USA, Vol: 117, Pages: 26966-26976, ISSN: 0027-8424
To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone was sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding, and paves the way to mapping the perturbome of neuronal networks in future studies.
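In a linear rate network the effect of a single-neuron perturbation has a closed form, which is the standard starting point for this kind of analysis: at the steady state of tau dr/dt = -r + W r + h, the response is r = (I - W)^-1 h, so perturbing neuron j shifts the whole population along column j of (I - W)^-1. The sketch below computes one such "perturbome column" for a random stable network; the network size and coupling strength are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
g = 0.5  # coupling strength well below 1 keeps the linearised dynamics stable

# Random recurrent weight matrix with no self-connections.
W = g * rng.standard_normal((n, n)) / np.sqrt(n)
np.fill_diagonal(W, 0.0)

# Steady state of tau dr/dt = -r + W r + h is r = (I - W)^{-1} h, so a unit
# perturbation of neuron j moves the network by column j of L = (I - W)^{-1}.
L = np.linalg.inv(np.eye(n) - W)
influence = L[:, 0]  # network-wide effect of perturbing neuron 0
```

Expanding L = I + W + W^2 + ... shows why the measured influence mixes direct connections with all polysynaptic paths, which is exactly what makes inferring connectivity profiles from perturbation data nontrivial.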
Udakis M, Pedrosa V, Chamberlain SEL, et al., 2020, Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, Nature Communications, Vol: 11, Pages: 4395-4395, ISSN: 2041-1723
The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity is not well understood. Here, we show that inhibitory synapses from parvalbumin- and somatostatin-expressing interneurons undergo long-term depression and potentiation, respectively (PV-iLTD and SST-iLTP), during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target the perisomatic and distal dendritic regions of CA1 pyramidal cells, respectively, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.
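The reprioritisation logic can be caricatured with two rectified pathways: weakening one inhibitory gate (PV-iLTD) while strengthening the other (SST-iLTP) shifts which excitatory input dominates. This is a deliberately crude abstraction of the circuit in the abstract; the pathway-to-gate assignment, the rectified-drive model, and all numbers are assumptions for illustration only.

```python
# Toy reprioritisation: PV synapses (perisomatic) and SST synapses (distal
# dendrites, where entorhinal input arrives) act as gates on two excitatory
# pathways. PV-iLTD weakens one gate while SST-iLTP strengthens the other.

def pathway_drive(exc, inh_w):
    """Rectified net drive of an excitatory pathway through an inhibitory gate."""
    return max(exc - inh_w, 0.0)

exc_ca3, exc_ec = 1.0, 1.0     # equal excitatory drive from CA3 and EC
w_pv, w_sst = 0.5, 0.5         # equal inhibitory gates before plasticity
before = (pathway_drive(exc_ca3, w_pv), pathway_drive(exc_ec, w_sst))

w_pv -= 0.3    # PV-iLTD: perisomatic inhibition depresses
w_sst += 0.3   # SST-iLTP: dendritic inhibition potentiates
after = (pathway_drive(exc_ca3, w_pv), pathway_drive(exc_ec, w_sst))
# One pathway's drive rises while the other's falls: inputs are reprioritised.
```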