Imperial College London

Professor Claudia Clopath

Faculty of Engineering, Department of Bioengineering

Professor of Computational Neuroscience



+44 (0)20 7594 1435 | c.clopath | Website




Royal School of Mines 4.09, South Kensington Campus





65 results found

Poort J, Wilmes KA, Blot A, Chadwick A, Sahani M, Clopath C, Mrsic-Flogel TD, Hofer SB, Khan AG et al., 2022, Learning and attention increase visual response selectivity through distinct mechanisms, Neuron, Vol: 110, Pages: 689-697.e6, ISSN: 0896-6273

Selectivity of cortical neurons for sensory stimuli can increase across days as animals learn their behavioral relevance and across seconds when animals switch attention. While both phenomena occur in the same circuit, it is unknown whether they rely on similar mechanisms. We imaged primary visual cortex as mice learned a visual discrimination task and subsequently performed an attention switching task. Selectivity changes due to learning and attention were uncorrelated in individual neurons. Selectivity increases after learning mainly arose from selective suppression of responses to one of the stimuli but from selective enhancement and suppression during attention. Learning and attention differentially affected interactions between excitatory and PV, SOM, and VIP inhibitory cells. Circuit modeling revealed that cell class-specific top-down inputs best explained attentional modulation, while reorganization of local functional connectivity accounted for learning-related changes. Thus, distinct mechanisms underlie increased discriminability of relevant sensory stimuli across longer and shorter timescales.

Journal article

Feitosa Tome D, Sadeh S, Clopath C, 2022, Coordinated hippocampal-thalamic-cortical communication crucial for engram dynamics underneath systems consolidation, Nature Communications, Vol: 13, ISSN: 2041-1723

Systems consolidation refers to the time-dependent reorganization of memory representations or engrams across brain regions. Despite recent advancements in unravelling this process, the exact mechanisms behind engram dynamics and the role of associated pathways remain largely unknown. Here we propose a biologically-plausible computational model to address this knowledge gap. By coordinating synaptic plasticity timescales and incorporating a hippocampus-thalamus-cortex circuit, our model is able to couple engram reactivations across these regions and thereby reproduce key dynamics of cortical and hippocampal engram cells along with their interdependencies. Decoupling hippocampal-thalamic-cortical activity disrupts systems consolidation. Critically, our model yields testable predictions regarding hippocampal and thalamic engram cells, inhibitory engrams, thalamic inhibitory input, and the effect of thalamocortical synaptic coupling on retrograde amnesia induced by hippocampal lesions. Overall, our results suggest that systems consolidation emerges from coupled reactivations of engram cells in distributed brain regions enabled by coordinated synaptic plasticity timescales in multisynaptic subcortical-cortical circuits.

Journal article

Boboeva V, Clopath C, 2021, Free recall scaling laws and short-term memory effects in a latching attractor network, Proceedings of the National Academy of Sciences of USA, Vol: 118, Pages: 1-10, ISSN: 0027-8424

Despite the complexity of human memory, paradigms like free recall have revealed robust qualitative and quantitative characteristics, such as power laws governing recall capacity. Although abstract random matrix models could explain such laws, the possibility of their implementation in large networks of interacting neurons has so far remained underexplored. We study an attractor network model of long-term memory endowed with firing rate adaptation and global inhibition. Under appropriate conditions, the transitioning behaviour of the network from memory to memory is constrained by limit cycles that prevent the network from recalling all memories, with scaling similar to what has been found in experiments. When the model is supplemented with a heteroassociative learning rule, complementing the standard autoassociative learning rule, as well as short-term synaptic facilitation, our model reproduces other key findings in the free recall literature, namely serial position effects, contiguity and forward asymmetry effects, as well as the semantic effects found to guide memory recall. The model is consistent with a broad series of manipulations aimed at gaining a better understanding of the variables that affect recall, such as the role of rehearsal, presentation rates and (continuous/end-of-list) distractor conditions. We predict that recall capacity may be increased with the addition of small amounts of noise, for example in the form of weak random stimuli during recall. Finally, we predict that although the statistics of the encoded memories has a strong effect on the recall capacity, the power laws governing recall capacity may still be expected to hold.

Journal article
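
The latching recall mechanism summarised above can be illustrated with a toy simulation. This is a minimal sketch under simplifying assumptions, not the paper's model (which uses firing-rate units, global inhibition, adaptation, and short-term facilitation): binary Hopfield-style units recall stored patterns through autoassociative weights, while a heteroassociative term driven by a slow activity trace produces transitions from one memory to the next. All sizes and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
N, P = 1000, 6                # neurons, stored memories (toy sizes)
lam, tau = 1.5, 8.0           # transition strength, timescale of the slow trace

xi = rng.choice([-1.0, 1.0], size=(P, N))    # random memory patterns
J_auto = xi.T @ xi / N                       # autoassociative (recall) weights
J_het = xi[1:].T @ xi[:-1] / N               # heteroassociative "next memory" weights

s = xi[0].copy()                             # cue the first memory
x = np.zeros(N)                              # slow trace of past activity
overlaps = []
for _ in range(150):
    h = J_auto @ s + lam * (J_het @ x)       # fast recall plus slow transition drive
    s = np.where(h >= 0, 1.0, -1.0)          # synchronous binary update
    x += (s - x) / tau                       # trace lags the state, pacing the latching
    overlaps.append(xi @ s / N)              # overlap with each stored memory
overlaps = np.array(overlaps)

# The network dwells in each memory, then latches to the next one.
print((overlaps.max(axis=0) > 0.8).sum(), "of", P, "memories recalled")
```

In this sketch every stored memory is visited in sequence, a stand-in for the constrained transition dynamics that limit recall capacity in the full model.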

Geiller T, Sadeh S, Clopath C, Losonczy A et al., 2021, Local circuit amplification of spatial selectivity in the hippocampus, Nature, Vol: 601, Pages: 105-109, ISSN: 0028-0836

Local circuit architecture facilitates the emergence of feature selectivity in the cerebral cortex. In the hippocampus, it remains unknown whether local computations supported by specific connectivity motifs regulate the spatial receptive fields of pyramidal cells. Here we developed an in vivo electroporation method for monosynaptic retrograde tracing and optogenetic manipulation at single-cell resolution to interrogate the dynamic interaction of place cells with their microcircuitry during navigation. We found a local circuit mechanism in CA1 whereby the spatial tuning of an individual place cell can propagate to a functionally recurrent subnetwork to which it belongs. The emergence of place fields in individual neurons led to the development of inverse selectivity in a subset of their presynaptic interneurons, and recruited functionally coupled place cells at that location. Thus, the spatial selectivity of single CA1 neurons is amplified through local circuit plasticity to enable effective multi-neuronal representations that can flexibly scale environmental features locally without degrading the feedforward input structure.

Journal article

Gallinaro JV, Clopath C, 2021, Memories in a network with excitatory and inhibitory plasticity are encoded in the spiking irregularity, PLoS Computational Biology, Vol: 17, Pages: 1-19, ISSN: 1553-734X

Cell assemblies are thought to be the substrate of memory in the brain. Theoretical studies have previously shown that assemblies can be formed in networks with multiple types of plasticity. But how exactly they are formed and how they encode information is yet to be fully understood. One possibility is that memories are stored in silent assemblies. Here we used a computational model to study the formation of silent assemblies in a network of spiking neurons with excitatory and inhibitory plasticity. We found that even though the formed assemblies were silent in terms of mean firing rate, they had an increased coefficient of variation of inter-spike intervals. We also found that this spiking irregularity could be read out with support of short-term plasticity, and that it could contribute to the longevity of memories.

Journal article

Sadeh S, Clopath C, 2021, Excitatory-inhibitory balance modulates the formation and dynamics of neuronal assemblies in cortical networks, Science Advances, Vol: 7, Pages: 1-16, ISSN: 2375-2548

Repetitive activation of subpopulations of neurons leads to the formation of neuronal assemblies, which can guide learning and behavior. Recent technological advances have made the artificial induction of such assemblies feasible, yet how various parameters of perturbation can be optimized for such induction is not clear. We found that the regime of cortical networks in terms of their excitatory-inhibitory balance can modulate the formation and dynamics of assemblies. Networks with dominant excitatory interactions enabled a fast formation of assemblies, and this was accompanied by recruitment of other non-perturbed neurons, thus leading to some degree of nonspecific assembly formation. On the other hand, strong excitatory-inhibitory interaction recruited lateral inhibition, which slowed down the formation of assemblies but constrained them to the perturbed neurons. Our results suggest that these two regimes can be suitable for different computational and cognitive tasks with different trade-offs between speed and specificity. More generally, our work provides a framework to study network-wide behaviorally-relevant plasticity in biologically realistic networks.

Journal article

Prince LY, Bacon T, Humphries R, Tsaneva-Atanasova K, Clopath C, Mellor JR et al., 2021, Separable actions of acetylcholine and noradrenaline on neuronal ensemble formation in hippocampal CA3 circuits, PLoS Computational Biology, Vol: 17, Pages: 1-37, ISSN: 1553-734X

In the hippocampus, episodic memories are thought to be encoded by the formation of ensembles of synaptically coupled CA3 pyramidal cells driven by sparse but powerful mossy fiber inputs from dentate gyrus granule cells. The neuromodulators acetylcholine and noradrenaline are separately proposed as saliency signals that dictate memory encoding but it is not known if they represent distinct signals with separate mechanisms. Here, we show experimentally that acetylcholine, and to a lesser extent noradrenaline, suppress feed-forward inhibition and enhance Excitatory–Inhibitory ratio in the mossy fiber pathway but CA3 recurrent network properties are only altered by acetylcholine. We explore the implications of these findings on CA3 ensemble formation using a hierarchy of models. In reconstructions of CA3 pyramidal cells, mossy fiber pathway disinhibition facilitates postsynaptic dendritic depolarization known to be required for synaptic plasticity at CA3-CA3 recurrent synapses. We further show in a spiking neural network model of CA3 how acetylcholine-specific network alterations can drive rapid overlapping ensemble formation. Thus, through these distinct sets of mechanisms, acetylcholine and noradrenaline facilitate the formation of neuronal ensembles in CA3 that encode salient episodic memories in the hippocampus but acetylcholine selectively enhances the density of memory storage.

Journal article

Kaleb K, Pedrosa V, Clopath C, 2021, Network-centered homeostasis through inhibition maintains hippocampal spatial map and cortical circuit function, Cell Reports, Vol: 36, ISSN: 2211-1247

Journal article

Gogianu F, Berariu T, Rosca M, Clopath C, Busoniu L, Pascanu R et al., 2021, Spectral normalisation for deep reinforcement learning: an optimisation perspective, International Conference on Machine Learning (ICML), Publisher: JMLR (Journal of Machine Learning Research), ISSN: 2640-3498

Most of the recent deep reinforcement learning advances take an RL-centric perspective and focus on refinements of the training objective. We diverge from this view and show we can recover the performance of these developments not by changing the objective, but by regularising the value-function estimator. Constraining the Lipschitz constant of a single layer using spectral normalisation is sufficient to elevate the performance of a Categorical-DQN agent to that of a more elaborated RAINBOW agent on the challenging Atari domain. We conduct ablation studies to disentangle the various effects normalisation has on the learning dynamics and show that it is sufficient to modulate the parameter updates to recover most of the performance of spectral normalisation. These findings hint towards the need to also focus on the neural component and its learning dynamics to tackle the peculiarities of Deep Reinforcement Learning.

Conference paper
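
The central operation, constraining a layer's Lipschitz constant by rescaling its weight matrix to unit spectral norm, can be sketched in a few lines of numpy using power iteration. This stand-alone sketch is illustrative only; the paper applies spectral normalisation to a layer inside a Categorical-DQN value network, which is not reproduced here.

```python
import numpy as np

def spectral_normalise(W, n_iter=50, eps=1e-12):
    """Divide W by a power-iteration estimate of its largest singular
    value, so the resulting linear map has Lipschitz constant ~1."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v) + eps
        u = W @ v
        u /= np.linalg.norm(u) + eps
    sigma = u @ W @ v                 # estimated largest singular value
    return W / sigma

W = np.random.default_rng(1).normal(size=(64, 128))   # a hypothetical layer's weights
W_sn = spectral_normalise(W)
print(np.linalg.norm(W_sn, 2))       # spectral norm of the constrained layer, ~1.0
```

In a training loop the estimate is typically refreshed with one power-iteration step per update rather than recomputed from scratch.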

Ang GWY, Tang CS, Hay YA, Zannone S, Paulsen O, Clopath C et al., 2021, The functional role of sequentially neuromodulated synaptic plasticity in behavioural learning, PLoS Computational Biology, Vol: 17, Pages: 1-22, ISSN: 1553-734X

To survive, animals have to quickly modify their behaviour when the reward changes. The internal representations responsible for this are updated through synaptic weight changes, mediated by certain neuromodulators conveying feedback from the environment. In previous experiments, we discovered a form of hippocampal Spike-Timing-Dependent-Plasticity (STDP) that is sequentially modulated by acetylcholine and dopamine. Acetylcholine facilitates synaptic depression, while dopamine retroactively converts the depression into potentiation. When these experimental findings were implemented as a learning rule in a computational model, our simulations showed that cholinergic-facilitated depression is important for reversal learning. In the present study, we tested the model’s prediction by optogenetically inactivating cholinergic neurons in mice during a hippocampus-dependent spatial learning task with changing rewards. We found that reversal learning, but not initial place learning, was impaired, verifying our computational prediction that acetylcholine-modulated plasticity promotes the unlearning of old reward locations. Further, differences in neuromodulator concentrations in the model captured mouse-by-mouse performance variability in the optogenetic experiments. Our line of work sheds light on how neuromodulators enable the learning of new contingencies.

Journal article

Maes A, Barahona M, Clopath C, 2021, Learning compositional sequences with multiple time scales through a hierarchical network of spiking neurons, PLoS Computational Biology, Vol: 17, ISSN: 1553-734X

Sequential behaviour is often compositional and organised across multiple time scales: a set of individual elements developing on short time scales (motifs) are combined to form longer functional sequences (syntax). Such organisation leads to a natural hierarchy that can be used advantageously for learning, since the motifs and the syntax can be acquired independently. Despite mounting experimental evidence for hierarchical structures in neuroscience, models for temporal learning based on neuronal networks have mostly focused on serial methods. Here, we introduce a network model of spiking neurons with a hierarchical organisation aimed at sequence learning on multiple time scales. Using biophysically motivated neuron dynamics and local plasticity rules, the model can learn motifs and syntax independently. Furthermore, the model can relearn sequences efficiently and store multiple sequences. Compared to serial learning, the hierarchical model displays faster learning, more flexible relearning, increased capacity, and higher robustness to perturbations. The hierarchical model redistributes the variability: it achieves high motif fidelity at the cost of higher variability in the between-motif timings.

Journal article

Zenke F, Bohté SM, Clopath C, Comşa IM, Göltz J, Maass W, Masquelier T, Naud R, Neftci EO, Petrovici MA, Scherr F, Goodman DFM et al., 2021, Visualizing a joint future of neuroscience and neuromorphic engineering, Neuron, Vol: 109, Pages: 571-575, ISSN: 0896-6273

Recent research resolves the challenging problem of building biophysically plausible spiking neural models that are also capable of complex information processing. This advance creates new opportunities in neuroscience and neuromorphic engineering, which we discussed at an online focus meeting.

Journal article

Feulner B, Clopath C, 2021, Neural manifold under plasticity in a goal driven learning behaviour, PLoS Computational Biology, Vol: 17, Pages: 1-27, ISSN: 1553-734X

Neural activity is often low dimensional and dominated by only a few prominent neural covariation patterns. It has been hypothesised that these covariation patterns could form the building blocks used for fast and flexible motor control. Supporting this idea, recent experiments have shown that monkeys can learn to adapt their neural activity in motor cortex on a timescale of minutes, given that the change lies within the original low-dimensional subspace, also called neural manifold. However, the neural mechanism underlying this within-manifold adaptation remains unknown. Here, we show in a computational model that modification of recurrent weights, driven by a learned feedback signal, can account for the observed behavioural difference between within- and outside-manifold learning. Our findings give a new perspective, showing that recurrent weight changes do not necessarily lead to change in the neural manifold. On the contrary, successful learning is naturally constrained to a common subspace.

Journal article
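
The idea of a low-dimensional neural manifold can be made concrete with a toy example: simulated activity that covaries along only a few "neural modes" is generated, and PCA recovers the manifold as the leading components. The sizes and noise level below are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_dims, n_samples = 100, 5, 2000

# Latent dynamics in a few dimensions, projected into neuron space:
# population activity then covaries along only n_dims neural modes.
latents = rng.normal(size=(n_samples, n_dims))
modes = rng.normal(size=(n_dims, n_neurons))
activity = latents @ modes + 0.05 * rng.normal(size=(n_samples, n_neurons))

# PCA via SVD of the centred activity; the manifold is spanned by the top PCs.
X = activity - activity.mean(axis=0)
s = np.linalg.svd(X, compute_uv=False)
var_explained = s**2 / (s**2).sum()
print(var_explained[:n_dims].sum())   # nearly all variance in the first 5 components
```

Within-manifold versus outside-manifold perturbations in the paper correspond to target activity patterns that do or do not lie in the span of these leading components.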

Sadeh S, Clopath C, 2020, Inhibitory stabilization and cortical computation, Nature Reviews Neuroscience, ISSN: 1471-003X

Journal article

Sadeh S, Clopath C, 2020, Theory of neuronal perturbome in cortical networks, Proceedings of the National Academy of Sciences of USA, Vol: 117, Pages: 26966-26976, ISSN: 0027-8424

To unravel the functional properties of the brain, we need to untangle how neurons interact with each other and coordinate in large-scale recurrent networks. One way to address this question is to measure the functional influence of individual neurons on each other by perturbing them in vivo. Application of such single-neuron perturbations in mouse visual cortex has recently revealed feature-specific suppression between excitatory neurons, despite the presence of highly specific excitatory connectivity, which was deemed to underlie feature-specific amplification. Here, we studied which connectivity profiles are consistent with these seemingly contradictory observations, by modeling the effect of single-neuron perturbations in large-scale neuronal networks. Our numerical simulations and mathematical analysis revealed that, contrary to the prima facie assumption, neither inhibition dominance nor broad inhibition alone were sufficient to explain the experimental findings; instead, strong and functionally specific excitatory-inhibitory connectivity was necessary, consistent with recent findings in the primary visual cortex of rodents. Such networks had a higher capacity to encode and decode natural images, and this was accompanied by the emergence of response gain nonlinearities at the population level. Our study provides a general computational framework to investigate how single-neuron perturbations are linked to cortical connectivity and sensory coding and paves the road to map the perturbome of neuronal networks in future studies.

Journal article
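
The notion of a single-neuron "influence" can be made concrete in the simplest setting, a stable linear rate network, where the steady-state response to perturbing neuron k is the k-th column of (I - W)^-1. The sketch below uses unstructured random weights for illustration, not the strong, functionally specific excitatory-inhibitory connectivity the study identifies as necessary.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
# Random recurrent weights, scaled so the linear dynamics are stable
# (spectral radius well below 1); a toy stand-in for cortical connectivity.
W = rng.normal(scale=0.5 / np.sqrt(n), size=(n, n))

# Steady state of r = W r + input: the response to a unit perturbation of
# neuron 0 is column 0 of the influence matrix (I - W)^{-1}, a toy "perturbome".
influence = np.linalg.inv(np.eye(n) - W)
delta_r = influence[:, 0]

print(delta_r[0])                            # effect on the perturbed neuron itself
print(np.abs(np.delete(delta_r, 0)).max())   # strongest effect on another neuron
```

Mapping a full perturbome then amounts to reading out all columns of the influence matrix; in nonlinear spiking networks this must be estimated by simulation instead.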

Udakis M, Pedrosa V, Chamberlain SEL, Clopath C, Mellor JR et al., 2020, Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, Nature Communications, Vol: 11, Pages: 4395-4395, ISSN: 2041-1723

The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity is not well understood. Here, we show that inhibitory synapses from parvalbumin and somatostatin expressing interneurons undergo long-term depression and potentiation respectively (PV-iLTD and SST-iLTP) during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target perisomatic and distal dendritic regions respectively of CA1 pyramidal cells, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.

Journal article

Pedrosa V, Clopath C, 2020, The interplay between somatic and dendritic inhibition promotes the emergence and stabilization of place fields, PLoS Computational Biology, Vol: 16, ISSN: 1553-734X

During the exploration of novel environments, place fields are rapidly formed in hippocampal CA1 neurons. Place cell firing rate increases in early stages of exploration of novel environments but returns to baseline levels in familiar environments. Although similar in amplitude and width, place fields in familiar environments are more stable than in novel environments. We propose a computational model of the hippocampal CA1 network, which describes the formation, dynamics and stabilization of place fields. We show that although somatic disinhibition is sufficient to form place fields, dendritic inhibition along with synaptic plasticity is necessary for place field stabilization. Our model suggests that place cell stability can be attributed to strong excitatory synaptic weights and strong dendritic inhibition. We show that the interplay between somatic and dendritic inhibition balances the increased excitatory weights, such that place cells return to their baseline firing rate after exploration. Our model suggests that different types of interneurons are essential to unravel the mechanisms underlying place field plasticity. Finally, we predict that artificially induced dendritic events can shift place fields even after place field stabilization.

Journal article

Tomasev N, Cornebise J, Hutter F, Mohamed S, Picciariello A, Connelly B, Belgrave DCM, Ezer D, van der Haert FC, Mugisha F, Abila G, Arai H, Almiraat H, Proskurnia J, Snyder K, Otake-Matsuura M, Othman M, Glasmachers T, de Wever W, Teh YW, Khan ME, De Winne R, Schaul T, Clopath C et al., 2020, AI for social good: unlocking the opportunity for positive impact, Nature Communications, Vol: 11

Journal article

Clopath C, Sweeney YA, 2020, Population coupling predicts the plasticity of stimulus responses in cortical circuits, eLife, Vol: 9, ISSN: 2050-084X

Some neurons have stimulus responses that are stable over days, whereas other neurons have highly plastic stimulus responses. Using a recurrent network model, we explore whether this could be due to an underlying diversity in their synaptic plasticity. We find that, in a network with diverse learning rates, neurons with fast rates are more coupled to population activity than neurons with slow rates. This plasticity-coupling link predicts that neurons with high population coupling exhibit more long-term stimulus response variability than neurons with low population coupling. We substantiate this prediction using recordings from the Allen Brain Observatory, finding that a neuron’s population coupling is correlated with the plasticity of its orientation preference. Simulations of a simple perceptual learning task suggest a particular functional architecture: a stable ‘backbone’ of stimulus representation formed by neurons with low population coupling, on top of which lies a flexible substrate of neurons with high population coupling.

Journal article
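
Population coupling, the correlation of each neuron's activity with the summed activity of the rest of the population, is straightforward to compute. The sketch below uses hypothetical toy data (a common drive weighted differently per neuron), not the Allen Brain Observatory recordings analysed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_bins = 50, 5000

# Each neuron mixes a shared population signal with private noise, with a
# different weight, giving a spectrum of population couplings.
shared = rng.normal(size=n_bins)
weights = np.linspace(0.1, 2.0, n_neurons)
rates = weights[:, None] * shared + rng.normal(size=(n_neurons, n_bins))

def population_coupling(rates):
    """Correlation of each neuron's activity with the summed activity
    of all the other neurons."""
    pc = np.empty(len(rates))
    for i in range(len(rates)):
        others = rates.sum(axis=0) - rates[i]
        pc[i] = np.corrcoef(rates[i], others)[0, 1]
    return pc

pc = population_coupling(rates)
print(pc[0], pc[-1])   # weakly vs strongly coupled neuron
```

The paper's prediction is that neurons at the high end of this coupling spectrum show the most long-term stimulus-response variability.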

Sadeh S, Clopath C, 2020, Patterned perturbation of inhibition can reveal the dynamical structure of neural processing, eLife, Vol: 9, ISSN: 2050-084X

Perturbation of neuronal activity is key to understanding the brain's functional properties, however, intervention studies typically perturb neurons in a nonspecific manner. Recent optogenetics techniques have enabled patterned perturbations, in which specific patterns of activity can be invoked in identified target neurons to reveal more specific cortical function. Here, we argue that patterned perturbation of neurons is in fact necessary to reveal the specific dynamics of inhibitory stabilization, emerging in cortical networks with strong excitatory and inhibitory functional subnetworks, as recently reported in mouse visual cortex. We propose a specific perturbative signature of these networks and investigate how this can be measured under different experimental conditions. Functionally, rapid spontaneous transitions between selective ensembles of neurons emerge in such networks, consistent with experimental results. Our study outlines the dynamical and functional properties of feature-specific inhibitory-stabilized networks, and suggests experimental protocols that can be used to detect them in the intact cortex.

Journal article

Maes A, Barahona M, Clopath C, 2020, Learning spatiotemporal signals using a recurrent spiking network that discretizes time, PLoS Computational Biology, Vol: 16, Pages: 1-26, ISSN: 1553-734X

Learning to produce spatiotemporal sequences is a common task that the brain has to solve. The same neural substrate may be used by the brain to produce different sequential behaviours. The way the brain learns and encodes such tasks remains unknown as current computational models do not typically use realistic biologically-plausible learning. Here, we propose a model where a spiking recurrent network of excitatory and inhibitory biophysical neurons drives a read-out layer: the dynamics of the driver recurrent network is trained to encode time which is then mapped through the read-out neurons to encode another dimension, such as space or a phase. Different spatiotemporal patterns can be learned and encoded through the synaptic weights to the read-out neurons that follow common Hebbian learning rules. We demonstrate that the model is able to learn spatiotemporal dynamics on time scales that are behaviourally relevant and we show that the learned sequences are robustly replayed during a regime of spontaneous activity.

Journal article
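
The division of labour described above, a recurrent network that discretises time driving a Hebbian read-out that encodes space, can be caricatured with an idealised one-hot "clock". The one-hot code and the target-clamped read-out are simplifying assumptions for illustration, not the paper's spiking implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_out = 20, 8   # time bins discretised by the driver network; read-out neurons

# Idealised driver: one cluster active per time bin (a one-hot clock),
# standing in for the trained recurrent spiking network.
clock = np.eye(T)

# Hypothetical target spatiotemporal pattern for the read-out layer.
target = (rng.random((T, n_out)) < 0.3).astype(float)

# Hebbian learning of the read-out weights: potentiate wherever presynaptic
# (clock) and postsynaptic (target-clamped) activity coincide.
eta = 0.5
W = np.zeros((T, n_out))
for _ in range(10):                 # repeated presentations of the sequence
    for t in range(T):
        W += eta * np.outer(clock[t], target[t])

# Replaying the clock through the learned weights reproduces the pattern.
replay = clock @ W
print(np.array_equal(replay > 1.0, target > 0.5))
```

Because the weights to the read-out carry the whole pattern, a new spatiotemporal output can be learned by relearning only this projection while the clock is reused.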

Ebner C, Clopath C, Jedlicka P, Cuntz H et al., 2019, Unifying long-term plasticity rules for excitatory synapses by modeling dendrites of cortical pyramidal neurons, Cell Reports, Vol: 29, Pages: 4295-+, ISSN: 2211-1247

Journal article

Knopfel T, Sweeney Y, Radulescu CI, Zabouri N, Doostdar N, Clopath C, Barnes S et al., 2019, Audio-visual experience strengthens multisensory assemblies in adult mouse visual cortex, Nature Communications, Vol: 10, ISSN: 2041-1723

We experience the world through multiple senses simultaneously. To better understand mechanisms of multisensory processing we ask whether inputs from two senses (auditory and visual) can interact and drive plasticity in neural-circuits of the primary visual cortex (V1). Using genetically-encoded voltage and calcium indicators, we find coincident audio-visual experience modifies both the supra and subthreshold response properties of neurons in L2/3 of mouse V1. Specifically, we find that after audio-visual pairing, a subset of multimodal neurons develops enhanced auditory responses to the paired auditory stimulus. This cross-modal plasticity persists over days and is reflected in the strengthening of small functional networks of L2/3 neurons. We find V1 processes coincident auditory and visual events by strengthening functional associations between feature specific assemblies of multimodal neurons during bouts of sensory driven co-activity, leaving a trace of multisensory experience in the cortical network.

Journal article

Wilmes KA, Clopath C, 2019, Inhibitory microcircuits for top-down plasticity of sensory representations, Nature Communications, Vol: 10, ISSN: 2041-1723

Rewards influence plasticity of early sensory representations. The underlying changes in circuitry are however unclear. Recent experimental findings suggest that inhibitory circuits regulate learning. In addition, inhibitory neurons are highly modulated by diverse long-range inputs, including reward signals. We, therefore, hypothesise that inhibitory plasticity plays a major role in adjusting stimulus representations. We investigate how top-down modulation by rewards interacts with local plasticity to induce long-lasting changes in circuitry. Using a computational model of layer 2/3 primary visual cortex, we demonstrate how interneuron circuits can store information about rewarded stimuli to instruct long-term changes in excitatory connectivity in the absence of further reward. In our model, stimulus-tuned somatostatin-positive interneurons develop strong connections to parvalbumin-positive interneurons during reward such that they selectively disinhibit the pyramidal layer henceforth. This triggers excitatory plasticity, leading to increased stimulus representation. We make specific testable predictions and show that this two-stage model allows for translation invariance of the learned representation.

Journal article

Richards BA, Lillicrap TP, Beaudoin P, Bengio Y, Bogacz R, Christensen A, Clopath C, Costa RP, de Berker A, Ganguli S, Gillon CJ, Hafner D, Kepecs A, Kriegeskorte N, Latham P, Lindsay GW, Naud R, Pack CC, Poirazi P, Roelfsema P, Sacramento J, Saxe A, Scellier B, Schapiro A, Senn W, Wayne G, Yamins D, Zenke F, Zylberberg J, Therien D, Kording KP et al., 2019, A deep learning framework for neuroscience, Nature Neuroscience, Vol: 22, Pages: 1761-1770, ISSN: 1097-6256

Systems neuroscience seeks explanations for how the brain implements a wide variety of perceptual, cognitive and motor tasks. Conversely, artificial intelligence attempts to design computational systems based on the tasks they will have to solve. In the case of artificial neural networks, the three components specified by design are the objective functions, the learning rules, and architectures. With the growing success of deep learning, which utilizes brain-inspired architectures, these three designed components have increasingly become central to how we model, engineer and optimize complex artificial learning systems. Here we argue that a greater focus on these components would also benefit systems neuroscience. We give examples of how this optimization-based framework can drive theoretical and experimental progress in neuroscience. We contend that this principled perspective on systems neuroscience will help to generate more rapid progress.

Journal article

Udakis M, Pedrosa V, Chamberlain SEL, Clopath C, Mellor JR et al., 2019, Interneuron-specific plasticity at parvalbumin and somatostatin inhibitory synapses onto CA1 pyramidal neurons shapes hippocampal output, Publisher: Cold Spring Harbor Laboratory

The formation and maintenance of spatial representations within hippocampal cell assemblies is strongly dictated by patterns of inhibition from diverse interneuron populations. Although it is known that inhibitory synaptic strength is malleable, induction of long-term plasticity at distinct inhibitory synapses and its regulation of hippocampal network activity is not well understood. Here, we show that inhibitory synapses from parvalbumin and somatostatin expressing interneurons undergo long-term depression and potentiation respectively (PV-iLTD and SST-iLTP) during physiological activity patterns. Both forms of plasticity rely on T-type calcium channel activation to confer synapse specificity but otherwise employ distinct mechanisms. Since parvalbumin and somatostatin interneurons preferentially target perisomatic and distal dendritic regions respectively of CA1 pyramidal cells, PV-iLTD and SST-iLTP coordinate a reprioritisation of excitatory inputs from entorhinal cortex and CA3. Furthermore, circuit-level modelling reveals that PV-iLTD and SST-iLTP cooperate to stabilise place cells while facilitating representation of multiple unique environments within the hippocampal network.

Working paper

Nicola W, Clopath C, 2019, A diversity of interneurons and Hebbian plasticity facilitate rapid compressible learning in the hippocampus, Nature Neuroscience, Vol: 22, Pages: 1168-1181, ISSN: 1097-6256

The hippocampus is able to rapidly learn incoming information, even if that information is only observed once. Further, this information can be replayed in a compressed format in either forward or reverse modes during Sharp Wave Ripples (SPW-Rs). We leveraged state-of-the-art techniques in training recurrent spiking networks to demonstrate how primarily interneuron networks can: 1) generate internal theta sequences to bind externally elicited spikes in the presence of inhibition from Medial Septum, 2) compress learned spike sequences in the form of a SPW-R when septal inhibition is removed, 3) generate and refine high-frequency assemblies during SPW-R mediated compression, and 4) regulate the inter-SPW-interval timing between SPW-Rs in ripple clusters. From the fast timescale of neurons to the slow timescale of behaviours, interneuron networks serve as the scaffolding for one-shot learning by replaying, reversing, refining, and regulating spike sequences.

Journal article

Bono J, Clopath C, 2019, Synaptic plasticity onto inhibitory neurons as a mechanism for ocular dominance plasticity, PLOS COMPUTATIONAL BIOLOGY, Vol: 15

Journal article

Bouvier G, Aljadeff J, Clopath C, Bimbard C, Ranft J, Blot A, Nadal J-P, Brunel N, Hakim V, Barbour Bet al., 2018, Cerebellar learning using perturbations, eLife, Vol: 7, ISSN: 2050-084X

The cerebellum aids the learning of fast, coordinated movements. According to current consensus, erroneously active parallel fibre synapses are depressed by complex spikes signalling movement errors. However, this theory cannot solve the credit assignment problem of processing a global movement evaluation into multiple cell-specific error signals. We identify a possible implementation of an algorithm solving this problem, whereby spontaneous complex spikes perturb ongoing movements, create eligibility traces and signal error changes guiding plasticity. Error changes are extracted by adaptively cancelling the average error. This framework, stochastic gradient descent with estimated global errors (SGDEGE), predicts synaptic plasticity rules that apparently contradict the current consensus but were supported by plasticity experiments in slices from mice under conditions designed to be physiological, highlighting the sensitivity of plasticity studies to experimental conditions. We analyse the algorithm's convergence and capacity. Finally, we suggest SGDEGE may also operate in the basal ganglia.
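The core of the perturbation-learning idea in this abstract can be sketched in a few lines: a random perturbation is added to a toy "movement", the resulting single global error is compared against a running average (the adaptively cancelled baseline), and weights are updated so that perturbations which lowered the error are reinforced. This is an illustrative sketch only, not the paper's cerebellar model; the linear readout, learning rates, and perturbation scale are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "movement": a linear readout y = W x that should reach target t.
n_in, n_out = 20, 5
W = np.zeros((n_out, n_in))
x = rng.normal(size=n_in)
t = rng.normal(size=n_out)

eta_w, eta_b, sigma = 0.05, 0.1, 0.1
baseline = float(np.sum((W @ x - t) ** 2))   # running average of the error
initial_err = baseline

for trial in range(2000):
    delta = sigma * rng.normal(size=n_out)   # spontaneous perturbation
    y = W @ x + delta                        # perturbed movement
    err = float(np.sum((y - t) ** 2))        # one global evaluation, no per-cell error
    # Plasticity is driven by the error *change* relative to the adaptive
    # baseline: perturbations that lowered the error are reinforced.
    W -= eta_w * (err - baseline) * np.outer(delta, x)
    baseline += eta_b * (err - baseline)     # adaptively cancel the average error

final_err = float(np.sum((W @ x - t) ** 2))
```

The key property is that each synapse learns from a single scalar evaluation: the correlation between the perturbation and the baseline-subtracted error supplies an unbiased estimate of the gradient direction.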

Journal article

Nicola W, Hellyer PJ, Campbell SA, Clopath Cet al., 2018, Chaos in homeostatically regulated neural systems, Chaos, Vol: 28, ISSN: 1054-1500

Low-dimensional yet rich dynamics often emerge in the brain. Examples include oscillations and chaotic dynamics during sleep, epilepsy, and voluntary movement. However, a general mechanism for the emergence of low-dimensional dynamics remains elusive. Here, we consider Wilson-Cowan networks and demonstrate through numerical and analytical work that homeostatic regulation of the network firing rates can paradoxically lead to a rich dynamical repertoire. The dynamics include mixed-mode oscillations, mixed-mode chaos, and chaotic synchronization when the homeostatic plasticity operates on a moderately slower time scale than the firing rates. This is true for a single recurrently coupled node, for pairs of reciprocally coupled nodes without self-coupling, and for networks coupled through experimentally determined weights derived from functional magnetic resonance imaging data. In all cases, the stability of the homeostatic set point is analytically determined or approximated. The dynamics at the network level are directly determined by the behavior of a single-node system through synchronization in both oscillatory and non-oscillatory states. Our results demonstrate that rich dynamics can be preserved under homeostatic regulation, or even be caused by it.

When recordings from the brain are analyzed, rich dynamics such as oscillations or low-dimensional chaos are often present. However, a general mechanism for how these dynamics emerge remains unresolved. Here, we explore the possibility that these dynamics are caused by an interaction between synaptic homeostasis and the connectivity between distinct populations of neurons. Using both analytical and numerical approaches, we analyze how data-derived connection weights interact with inhibitory synaptic homeostasis to create rich dynamics, such as chaos and oscillations operating on multiple time scales. We demonstrate that these rich dynamical states are present in systems as simple as a single population of neurons.
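A minimal sketch of the setting described above, with hypothetical parameters rather than the paper's equations: a single Wilson-Cowan excitatory/inhibitory node whose inhibitory weight is adjusted homeostatically, on a much slower timescale, toward an excitatory-rate set point. With the well-separated timescale chosen here the rate simply settles at the set point; the paper's analysis concerns the richer regimes (mixed-mode oscillations, chaos) that can appear when homeostasis is only moderately slower than the rate dynamics.

```python
import numpy as np

def f(x):
    return 1.0 / (1.0 + np.exp(-x))     # sigmoid gain function

# One recurrently coupled excitatory/inhibitory (Wilson-Cowan) node.
w_ee, w_ie, P = 2.0, 2.0, 1.0           # fixed weights and external drive (assumed values)
w_ei = 1.0                              # plastic inhibitory-to-excitatory weight
rho = 0.3                               # homeostatic set point for the E rate
tau_h = 100.0                           # homeostasis much slower than the rates

E, I = 0.1, 0.1
dt, steps = 0.1, 100_000
for _ in range(steps):
    dE = -E + f(w_ee * E - w_ei * I + P)
    dI = -I + f(w_ie * E)
    # Inhibitory synaptic homeostasis: strengthen inhibition when E runs
    # above its set point, weaken it when E runs below.
    dw = I * (E - rho) / tau_h
    E, I, w_ei = E + dt * dE, I + dt * dI, w_ei + dt * dw
```

After the transient, `E` sits at `rho` and `w_ei` has grown to compensate the recurrent excitation; shrinking `tau_h` toward the rate timescale is where the interesting dynamics the paper analyses can emerge.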

Journal article

This data is extracted from the Web of Science and reproduced under a licence from Thomson Reuters. You may not copy or re-distribute this data in whole or in part without the written consent of the Science business of Thomson Reuters.
