Classical experiments on spike-timing-dependent plasticity (STDP) use a protocol based on pairs of presynaptic and postsynaptic spikes repeated at a given frequency to induce synaptic potentiation or depression. Therefore, standard STDP models have expressed the weight change as a function of pairs of presynaptic and postsynaptic spikes. Unfortunately, those pair-based STDP models cannot account for the dependence on the repetition frequency of the spike pairs. Moreover, those STDP models cannot reproduce recent triplet and quadruplet experiments. Here, we examine a triplet rule (i.e., a rule that considers sets of three spikes: two presynaptic and one postsynaptic, or one presynaptic and two postsynaptic) and compare it to classical pair-based STDP learning rules. With such a triplet rule, it is possible to fit experimental data from visual cortical slices as well as from hippocampal cultures. Moreover, when assuming stochastic spike trains, the triplet learning rule can be mapped to a Bienenstock-Cooper-Munro learning rule.
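To make the triplet rule concrete, here is a minimal sketch of a triplet-style update in which pair and triplet terms are driven by exponentially decaying spike traces. The time constants and amplitudes are illustrative values, roughly in the range reported for all-to-all triplet fits to visual-cortex data, not definitive parameters.

```python
import numpy as np

# Illustrative trace time constants (ms) and amplitudes; assumed, not fitted here.
tau_plus, tau_x = 16.8, 101.0    # presynaptic trace time constants (r1, r2)
tau_minus, tau_y = 33.7, 125.0   # postsynaptic trace time constants (o1, o2)
A2p, A3p = 5e-10, 6.2e-3         # pair / triplet potentiation amplitudes
A2m, A3m = 7e-3, 2.3e-4          # pair / triplet depression amplitudes

def run_triplet_stdp(pre_spikes, post_spikes, w0=0.5, T=1000.0, dt=0.1):
    """Evolve one synaptic weight given pre/post spike times (arrays, in ms)."""
    pre_spikes, post_spikes = np.asarray(pre_spikes), np.asarray(post_spikes)
    r1 = r2 = o1 = o2 = 0.0
    w = w0
    for t in np.arange(0.0, T, dt):
        # all four spike traces decay exponentially between spikes
        r1 -= dt * r1 / tau_plus;  r2 -= dt * r2 / tau_x
        o1 -= dt * o1 / tau_minus; o2 -= dt * o2 / tau_y
        if np.any(np.abs(pre_spikes - t) < dt / 2):    # presynaptic spike
            w -= o1 * (A2m + A3m * r2)   # depression: pair + triplet (two pre, one post)
            r1 += 1.0; r2 += 1.0
        if np.any(np.abs(post_spikes - t) < dt / 2):   # postsynaptic spike
            w += r1 * (A2p + A3p * o2)   # potentiation: pair + triplet (one pre, two post)
            o1 += 1.0; o2 += 1.0
    return w
```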
In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes, via gradient ascent, the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and downregulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, i.e., control of postsynaptic rates and control of temporal locality, are studied. The relation of our results to spike-timing-dependent plasticity (STDP) and reinforcement learning is discussed.
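A minimal sketch of one such gradient-ascent step, assuming (as modeling choices not taken from the abstract) a stochastic neuron with an exponential escape rate and an exponential EPSP kernel: the gradient rewards membrane-potential contributions at the desired firing times and penalizes expected firing everywhere else.

```python
import numpy as np

dt, T = 1.0, 200.0                  # time step and trial length (ms)
time = np.arange(0.0, T, dt)
tau_eps = 10.0                      # EPSP time constant (ms); assumed value
rho0, du = 0.01, 2.0                # escape-rate scale and sensitivity; assumed

def epsp_traces(pre_spike_times, n_syn):
    """Filter each presynaptic spike train with an exponential EPSP kernel."""
    x = np.zeros((n_syn, len(time)))
    for i, spikes in enumerate(pre_spike_times):
        for s in spikes:
            mask = time >= s
            x[i, mask] += np.exp(-(time[mask] - s) / tau_eps)
    return x

def likelihood_gradient(w, x, desired_times):
    """dL/dw for L = sum_f log rho(t^f) - integral of rho(t) dt."""
    u = w @ x                                # membrane potential (arbitrary units)
    rho = rho0 * np.exp(u / du)              # instantaneous escape rate
    grad = -(rho * x).sum(axis=1) * dt / du  # penalize firing away from targets
    for tf in desired_times:
        grad += x[:, int(tf / dt)] / du      # reward firing at desired times
    return grad

# one gradient-ascent step on the weights:
# w += eta * likelihood_gradient(w, x, desired_times)
```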
Synaptic strength depresses for low and potentiates for high activation of the postsynaptic neuron. This feature is a key property of the Bienenstock-Cooper-Munro (BCM) synaptic learning rule, which has been shown to maximize the selectivity of the postsynaptic neuron, and thereby offers a possible explanation for experience-dependent cortical plasticity such as orientation selectivity. However, the BCM framework is rate-based, and a significant amount of recent work has shown that synaptic plasticity also depends on the precise timing of presynaptic and postsynaptic spikes. Here we consider a triplet model of spike-timing-dependent plasticity (STDP) that depends on the interactions of three precisely timed spikes. Triplet STDP has been shown to describe plasticity experiments that the classical STDP rule, based on pairs of spikes, has failed to capture. In the case of rate-based patterns, we show a tight correspondence between the triplet STDP rule and the BCM rule. We analytically demonstrate the selectivity property of the triplet STDP rule for orthogonal inputs and perform numerical simulations for nonorthogonal inputs. Moreover, in contrast to BCM, we show that triplet STDP can also induce selectivity for input patterns consisting of higher-order spatiotemporal correlations, which exist in natural stimuli and have been measured in the brain. We show that this sensitivity to higher-order correlations can be used to develop direction and speed selectivity.

Synaptic plasticity depends on the activity of presynaptic and postsynaptic neurons and is believed to provide the basis for learning and memory (1, 2). It has been shown that low-frequency stimulation (1-3 Hz) (3) or stimulation paired with low postsynaptic depolarization (4) induces synaptic long-term depression (LTD), whereas synapses undergo long-term potentiation (LTP) after high-frequency stimulation (100 Hz) (5). Such findings are consistent with the well-known Bienenstock-Cooper-Munro (BCM) learning rule (6). This BCM model has been shown to elicit orientation selectivity and other aspects of experience-dependent cortical plasticity (6, 7). Furthermore, in this model the modification threshold between LTP and LTD varies as a function of the history of postsynaptic activity, a prediction that has been confirmed experimentally (8). Despite its consistency with experimental data and its functional relevance, the BCM framework is still limited experimentally and functionally. Experimentally, because the learning rule is expressed in terms of firing rates, it cannot predict synaptic modification on the basis of the timing of pre- and postsynaptic spikes (9, 10). This form of plasticity, called spike-timing-dependent plasticity (STDP), uses the timing of spike pairs to induce synaptic modification (11, 12). The presynaptic spike is required to shortly precede the postsynaptic spike to elicit LTP, whereas the reverse timing of pre- and postsynaptic spikes leads to LTD (9, 10). Functionally, the BCM model cannot segregate input patterns that are characterized by...
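For reference, a minimal sketch of the rate-based BCM update with a sliding threshold that the abstract compares against; the linear neuron, learning rates, and threshold dynamics below are illustrative choices.

```python
import numpy as np

def bcm_step(w, x, theta, eta=1e-4, tau_theta=100.0, dt=1.0, y0=1.0):
    """One Euler step of a BCM update; x is the presynaptic rate vector."""
    y = float(w @ x)                                      # postsynaptic rate (linear neuron)
    w = w + eta * dt * x * y * (y - theta)                # LTD below theta, LTP above
    theta = theta + dt / tau_theta * (y**2 / y0 - theta)  # sliding modification threshold
    return w, theta, y
```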
Maximization of information transmission by a spiking-neuron model predicts changes of synaptic connections that depend on the timing of pre- and postsynaptic spikes and on the postsynaptic membrane potential. Under the assumption of Poisson firing statistics, the synaptic update rule exhibits all of the features of the Bienenstock-Cooper-Munro rule, in particular, regimes of synaptic potentiation and depression separated by a sliding threshold. Moreover, the learning rule is also applicable to the more realistic case of neuron models with refractoriness, and is sensitive to correlations between input spikes, even in the absence of presynaptic rate modulation. The learning rule is found by maximizing the mutual information between presynaptic and postsynaptic spike trains under the constraint that the postsynaptic firing rate stays close to some target firing rate. An interpretation of the synaptic update rule in terms of homeostatic synaptic processes and spike-timing-dependent plasticity is discussed.

Keywords: computational neuroscience | information theory | learning | spiking-neuron model | synaptic plasticity

The efficacy of synaptic connections between neurons in the brain is not fixed but varies, depending on the firing frequency of presynaptic neurons (1, 2), the membrane potential of the postsynaptic neuron (3), spike timing (4-6), and intracellular parameters such as the calcium concentration; for a review, see ref. 7. During the last decades, a large number of theoretical concepts and mathematical models have emerged that have helped to understand the functional consequences of synaptic modifications, in particular long-term potentiation (LTP) and long-term depression (LTD), during development, learning, and memory; for reviews, see refs. 8-10. Apart from the work of Hebb (11), one of the most influential theoretical concepts has been the Bienenstock-Cooper-Munro (BCM) model, originally developed to account for cortical organization and receptive field properties during development (12). The model predicted (i) regimes of both LTD and LTP, depending on the state of the postsynaptic neuron, and (ii) a sliding threshold that separates the two regimes. Both predictions have subsequently been confirmed experimentally (2, 13, 14). In this paper, we construct a bridge between the BCM model and a seemingly unconnected line of research in theoretical neuroscience centered around the concept of optimality. There are indications that several components of neural systems show close to optimal performance (15-17). Instead of looking at a specific implementation of synaptic changes, defined by a rule such as in the BCM model, we therefore ask: what would be the optimal synaptic update rule so as to guarantee that a spiking neuron transmits as much information as possible? Information-theoretic concepts have been used by several researchers because they allow comparison of the performance of neural systems with a fundamental theoretical limit (16, 17), but optimal synaptic update rules have so far been mostly restricted to a pure r...
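Stated compactly, the optimization described above has the general form below; the soft-constraint weighting $\gamma$ and the use of a Kullback-Leibler penalty are one plausible formalization of the rate constraint named in the abstract, not a detail it specifies:

$$
\mathcal{L} \;=\; I(X;Y)\;-\;\gamma\,D_{\mathrm{KL}}\!\big(P(Y)\,\big\|\,\tilde{P}(Y)\big),
\qquad
\Delta w_i \;\propto\; \frac{\partial \mathcal{L}}{\partial w_i},
$$

where $X$ and $Y$ denote the pre- and postsynaptic spike trains and $\tilde{P}(Y)$ is a reference distribution whose firing rate equals the target rate.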
Storing and recalling spiking sequences is a general problem the brain needs to solve. It is, however, unclear what type of biologically plausible learning rule is suited to learn a wide class of spatiotemporal activity patterns in a robust way. Here we consider a recurrent network of stochastic spiking neurons composed of both visible and hidden neurons. We derive a generic learning rule that is matched to the neural dynamics by minimizing an upper bound on the Kullback-Leibler divergence from the target distribution to the model distribution. The derived learning rule is consistent with spike-timing-dependent plasticity in that a presynaptic spike preceding a postsynaptic spike elicits potentiation, while the reverse timing leads to depression. Furthermore, the learning rule for synapses that target visible neurons can be matched to the recently proposed voltage-triplet rule. The learning rule for synapses that target hidden neurons is modulated by a global factor, which shares properties with astrocytes and gives rise to testable predictions.
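A minimal sketch of the visible-neuron portion of such a rule, under the assumption (a modeling choice introduced here) of an exponential gain function: the weight change is the presynaptic PSP trace times the mismatch between the target spike train and the neuron's instantaneous firing probability, i.e., a spike/no-spike prediction error gated by the presynaptic trace.

```python
import numpy as np

def visible_update(w, psp, S_post, u, eta=1e-3, rho0=0.01, du=2.0, dt=1.0):
    """One batch update for synapses onto a visible neuron.

    w:      (n_pre,)    synaptic weights
    psp:    (n_pre, T)  presynaptic PSP traces
    S_post: (T,)        target spike train (0/1 per time bin)
    u:      (T,)        postsynaptic membrane potential
    """
    rho = rho0 * np.exp(u / du)   # instantaneous firing intensity (assumed gain)
    err = S_post - rho * dt       # spike / no-spike prediction error per bin
    return w + eta * (psp @ err)  # presynaptic trace times postsynaptic error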
Neuropathic pain caused by peripheral nerve injury is a debilitating neurological condition of high clinical relevance. On the cellular level, the elevated pain sensitivity is induced by plasticity of neuronal function along the pain pathway. Changes in cortical areas involved in pain processing contribute to the development of neuropathic pain. Yet, it remains elusive which plasticity mechanisms occur in cortical circuits. We investigated the properties of neural networks in the anterior cingulate cortex (ACC), a brain region mediating affective responses to noxious stimuli. We performed multiple whole-cell recordings from neurons in layer 5 (L5) of the ACC of adult mice after chronic constriction injury of the sciatic nerve of the left hindpaw and observed a striking loss of connections between excitatory and inhibitory neurons in both directions. In contrast, no significant changes in synaptic efficacy in the remaining connected pairs were found. These changes were reflected on the network level by a decrease in the mEPSC and mIPSC frequency. Additionally, nerve injury resulted in a potentiation of the intrinsic excitability of pyramidal neurons, whereas the cellular properties of interneurons were unchanged. This set of experimentally measured parameters allowed us to construct a neuronal network model of L5 in the ACC, which revealed that the modification of inhibitory connectivity had the most profound effect on increased network activity. Thus, our combined experimental and modeling approach suggests that cortical disinhibition is a fundamental pathological modification associated with peripheral nerve damage. These changes at the cortical network level might therefore contribute to the neuropathic pain condition.
The trajectory of the somatic membrane potential of a cortical neuron exactly reflects the computations performed on its afferent inputs. However, the spikes of such a neuron are a very low-dimensional and discrete projection of this continually evolving signal. We explored the possibility that the neuron’s efferent synapses perform the critical computational step of estimating the membrane potential trajectory from the spikes. We found that short-term changes in synaptic efficacy can be interpreted as implementing an optimal estimator of this trajectory. Short-term depression arose when presynaptic spiking was sufficiently intense as to reduce the uncertainty associated with the estimate; short-term facilitation reflected structural features of the statistics of the presynaptic neuron such as up and down states. Our analysis provides a unifying account of a powerful, but puzzling, form of plasticity.
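For context, here is a minimal sketch of the standard Tsodyks-Markram-style phenomenology of the short-term depression and facilitation discussed above. This is the conventional descriptive model, not the paper's estimation-based account, and all parameters are illustrative.

```python
import numpy as np

def stp_amplitudes(spike_times, U=0.2, tau_d=200.0, tau_f=600.0):
    """Return the relative PSC amplitude u*x at each presynaptic spike (ms)."""
    u, x, t_last, amps = U, 1.0, None, []
    for t in spike_times:
        if t_last is not None:
            dt = t - t_last
            x = 1.0 - (1.0 - x) * np.exp(-dt / tau_d)  # resource recovery
            u = U + (u - U) * np.exp(-dt / tau_f)      # facilitation decay
        u = u + U * (1.0 - u)   # facilitation jump at each spike
        amps.append(u * x)      # released fraction sets the PSC amplitude
        x = x * (1.0 - u)       # resource depletion (depression)
        t_last = t
    return amps
```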
We studied the hypothesis that synaptic dynamics are controlled by three basic principles: (1) synapses adapt their weights so that neurons can effectively transmit information, (2) homeostatic processes stabilize the mean firing rate of the postsynaptic neuron, and (3) weak synapses adapt more slowly than strong ones, while maintenance of strong synapses is costly. Our results show that a synaptic update rule derived from these principles shares features with spike-timing-dependent plasticity, is sensitive to correlations in the input, and is useful for synaptic memory. Moreover, input selectivity (sharply tuned receptive fields) of postsynaptic neurons develops only if stimuli with strong features are presented. Sharply tuned neurons can coexist with unselective ones, and the distribution of synaptic weights can be unimodal or bimodal. The formulation of synaptic dynamics through an optimality criterion provides a simple graphical argument for the stability of synapses, necessary for synaptic memory.
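One way to formalize these three principles as a single objective; the penalty terms $C_{\text{rate}}$ and $C_{\text{maint}}$ and the weight-dependent factor $f(w)$ are assumptions introduced here for illustration, since the abstract states the principles only verbally:

$$
\mathcal{L} \;=\; I(X;Y)\;-\;\gamma\,C_{\text{rate}}\;-\;\lambda\,C_{\text{maint}},
\qquad
\dot{w}_i \;\propto\; f(w_i)\,\frac{\partial \mathcal{L}}{\partial w_i},
$$

where $I(X;Y)$ is the information transmitted from pre- to postsynaptic spike trains (principle 1), $C_{\text{rate}}$ penalizes deviations of the mean postsynaptic rate from its homeostatic target (principle 2), $C_{\text{maint}}$ is the maintenance cost of strong synapses, and $f(w)$ increases with $w$ so that weak synapses adapt more slowly (principle 3).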