Abstract. In timing-based neural codes, neurons have to emit action potentials at precise moments in time. We use a supervised learning paradigm to derive a synaptic update rule that optimizes, via gradient ascent, the likelihood of postsynaptic firing at one or several desired firing times. We find that the optimal strategy of up- and down-regulating synaptic efficacies depends on the relative timing between presynaptic spike arrival and desired postsynaptic firing. If the presynaptic spike arrives before the desired postsynaptic spike timing, our optimal learning rule predicts that the synapse should become potentiated. The dependence of the potentiation on spike timing directly reflects the time course of an excitatory postsynaptic potential. However, our approach gives no unique reason for synaptic depression under reversed spike timing. In fact, the presence and amplitude of depression of synaptic efficacies for reversed spike timing depend on how constraints are implemented in the optimization problem. Two different constraints, i.e., control of postsynaptic rates or control of temporal locality, are studied. The relation of our results to Spike-Timing Dependent Plasticity (STDP) and reinforcement learning is discussed.
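As a concrete illustration of the gradient-ascent approach described above, the sketch below assumes a stochastic neuron with exponential escape noise whose membrane potential is a weighted sum of exponential EPSPs; each weight is moved up the gradient of the log-likelihood of firing exactly at a desired time t_d while remaining silent elsewhere. The kernel shape, parameter values, and variable names are illustrative assumptions, not the paper's exact model.

```python
# Hedged sketch: gradient ascent on the log-likelihood of a postsynaptic spike at
# a desired time t_d for a stochastic neuron with exponential escape noise.
# EPSP kernel, parameters, and learning rate are illustrative assumptions.
import numpy as np

dt, T = 1.0, 200.0                          # ms
t = np.arange(0.0, T, dt)

def epsp(s, tau=10.0):
    """Causal exponential EPSP kernel (assumed shape)."""
    return np.where(s >= 0.0, np.exp(-np.abs(s) / tau), 0.0)

pre_times = np.array([20.0, 60.0, 110.0])   # presynaptic spike arrival times (ms)
t_d = 70.0                                  # desired postsynaptic firing time (ms)
w = np.full(len(pre_times), 0.5)            # initial synaptic weights

rho0, theta, du = 0.01, 1.0, 0.2            # escape-noise parameters (assumed)
eta = 0.02                                  # learning rate

for step in range(500):
    psp = np.array([epsp(t - tp) for tp in pre_times])   # (n_syn, n_t)
    u = w @ psp                                           # membrane potential
    rho = rho0 * np.exp((u - theta) / du)                 # firing intensity

    # d/dw_i log-likelihood = (1/du) * [eps(t_d - t_i) - \int rho(t) eps(t - t_i) dt]
    idx_d = int(t_d / dt)
    grad = (psp[:, idx_d] - (rho * psp).sum(axis=1) * dt) / du
    w += eta * grad                                       # gradient ascent

print("learned weights:", np.round(w, 3))
```

In this toy setting, the positive term of the gradient is just the EPSP evaluated at the timing difference t_d - t_i, so a synapse active shortly before t_d is potentiated with the EPSP's time course, whereas a synapse whose presynaptic spike arrives after t_d contributes nothing to the positive term and is only affected by the "remain silent" integral.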
We summarize here the results presented, and the subsequent discussion, from the meeting on Integrating Hebbian and Homeostatic Plasticity at the Royal Society in April 2016. We first outline the major themes and results presented at the meeting. We next provide a synopsis of the outstanding questions that emerged from the discussion at the end of the meeting, and finally suggest potential directions of research that we believe are most promising for developing an understanding of how these two forms of plasticity interact to facilitate functional changes in the brain. This article is part of the themed issue 'Integrating Hebbian and homeostatic plasticity'.
We study a computational model of audiovisual integration by positing a Bayesian observer that localizes visual and auditory stimuli without presuming the binding of audiovisual information. The observer adopts the maximum a posteriori approach to estimate the physically delivered position or timing of presented stimuli, simultaneously judging whether they are from the same source or not. Several experimental results on the perception of spatial unity and the ventriloquism effect can be explained comprehensively if the subjects in the experiments are regarded as Bayesian observers who try to accurately locate the stimulus. Moreover, by adaptively changing the inner representation of the Bayesian observer with experience, we show that our model reproduces the perceived spatial frame shifts due to the audiovisual adaptation known as the ventriloquism aftereffect.
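One common way to formalize such an observer (in the spirit of the abstract, with Gaussian likelihoods, a Gaussian spatial prior, and an assumed prior probability of a common cause) is to compare the marginal likelihoods of a shared versus separate source and take the MAP location estimate under the more probable causal structure. The parameter values below are illustrative, not the paper's fitted values.

```python
# Hedged sketch of a MAP Bayesian observer for audiovisual localization that
# jointly infers whether the two cues share a source.  Noise widths, prior width,
# and the prior probability of a common cause are illustrative values.
import numpy as np

def bayesian_av_observer(x_v, x_a, sig_v=2.0, sig_a=8.0, sig_p=15.0, p_common=0.5):
    # Marginal likelihood of both measurements under a single shared source
    var_c = sig_v**2 * sig_a**2 + sig_v**2 * sig_p**2 + sig_a**2 * sig_p**2
    like_c = np.exp(-((x_v - x_a)**2 * sig_p**2 + x_v**2 * sig_a**2
                      + x_a**2 * sig_v**2) / (2.0 * var_c)) / (2.0 * np.pi * np.sqrt(var_c))
    # Marginal likelihood under two independent sources
    like_i = (np.exp(-x_v**2 / (2.0 * (sig_v**2 + sig_p**2)))
              / np.sqrt(2.0 * np.pi * (sig_v**2 + sig_p**2))) \
           * (np.exp(-x_a**2 / (2.0 * (sig_a**2 + sig_p**2)))
              / np.sqrt(2.0 * np.pi * (sig_a**2 + sig_p**2)))
    # Posterior probability that the cues come from one source
    post_c = like_c * p_common / (like_c * p_common + like_i * (1.0 - p_common))

    # MAP location estimates under each causal structure (precision weighting)
    s_common = (x_v / sig_v**2 + x_a / sig_a**2) \
             / (1.0 / sig_v**2 + 1.0 / sig_a**2 + 1.0 / sig_p**2)
    s_audio_alone = (x_a / sig_a**2) / (1.0 / sig_a**2 + 1.0 / sig_p**2)
    return post_c, (s_common if post_c > 0.5 else s_audio_alone)

# Example: a visual flash at 5 deg and a sound at 12 deg; the observer reports the
# posterior probability of a common source and the perceived sound location.
print(bayesian_av_observer(x_v=5.0, x_a=12.0))
```

When the discrepancy between the cues is small relative to the noise, the common-source structure wins and the auditory estimate is pulled toward the visual one, which is the ventriloquism effect the abstract refers to.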
Maximization of information transmission by a spiking-neuron model predicts changes of synaptic connections that depend on the timing of pre- and postsynaptic spikes and on the postsynaptic membrane potential. Under the assumption of Poisson firing statistics, the synaptic update rule exhibits all of the features of the Bienenstock-Cooper-Munro rule, in particular, regimes of synaptic potentiation and depression separated by a sliding threshold. Moreover, the learning rule is also applicable to the more realistic case of neuron models with refractoriness, and is sensitive to correlations between input spikes, even in the absence of presynaptic rate modulation. The learning rule is found by maximizing the mutual information between presynaptic and postsynaptic spike trains under the constraint that the postsynaptic firing rate stays close to some target firing rate. An interpretation of the synaptic update rule in terms of homeostatic synaptic processes and spike-timing-dependent plasticity is discussed.

Keywords: computational neuroscience | information theory | learning | spiking-neuron model | synaptic plasticity

The efficacy of synaptic connections between neurons in the brain is not fixed but varies, depending on the firing frequency of presynaptic neurons (1, 2), the membrane potential of the postsynaptic neuron (3), spike timing (4-6), and intracellular parameters such as the calcium concentration; for a review, see ref. 7. During the last decades, a large number of theoretical concepts and mathematical models have emerged that have helped to understand the functional consequences of synaptic modifications, in particular, long-term potentiation (LTP) and long-term depression (LTD) during development, learning, and memory; for reviews, see refs. 8-10. Apart from the work of Hebb (11), one of the most influential theoretical concepts has been the Bienenstock-Cooper-Munro (BCM) model, originally developed to account for cortical organization and receptive field properties during development (12). The model predicted (i) regimes of both LTD and LTP, depending on the state of the postsynaptic neuron, and (ii) a sliding threshold that separates the two regimes. Both predictions i and ii have subsequently been confirmed experimentally (2, 13, 14). In this paper, we construct a bridge between the BCM model and a seemingly unconnected line of research in theoretical neuroscience centered around the concept of optimality. There are indications that several components of neural systems show close to optimal performance (15-17). Instead of looking at a specific implementation of synaptic changes, defined by a rule such as in the BCM model, we therefore ask: what would be the optimal synaptic update rule so as to guarantee that a spiking neuron transmits as much information as possible? Information-theoretic concepts have been used by several researchers because they allow one to compare the performance of neural systems with a fundamental theoretical limit (16, 17), but optimal synaptic update rules have so far been mostly restricted to a pure r...
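For orientation, the classic rate-based BCM rule that the abstract compares against can be written in a few lines. The sketch below is not the spiking, information-maximizing rule derived in the paper; the input statistics, learning rate, and threshold time constant are illustrative assumptions. It only illustrates the two features mentioned above: LTP and LTD regimes separated by a threshold that slides with recent postsynaptic activity.

```python
# Hedged sketch of the rate-based BCM rule with a sliding threshold (for context
# only; the paper derives a spiking, information-maximizing rule instead).
import numpy as np

rng = np.random.default_rng(0)
n_in = 10
w = rng.uniform(0.1, 0.5, n_in)        # synaptic weights
theta = 1.0                            # sliding modification threshold
eta_w, tau_theta = 1e-5, 50.0          # learning rate, threshold time constant

for step in range(20000):
    x = rng.poisson(2.0, n_in).astype(float)   # presynaptic rates (assumed input)
    y = w @ x                                   # postsynaptic rate (linear neuron)
    # BCM: potentiation when y > theta, depression when y < theta
    w += eta_w * x * y * (y - theta)
    w = np.clip(w, 0.0, None)
    # Threshold slides with a running average of y^2 (stabilizing feedback)
    theta += (y**2 - theta) / tau_theta

print("final weights:", np.round(w, 3), " threshold:", round(theta, 2))
```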
Summary Hebbian and homeostatic plasticity together refine neural circuitry, but their interactions are unclear. In most existing models, each form of plasticity directly modifies synaptic strength. Equilibrium is reached when the two are inducing equal and opposite changes. We show that such models cannot reproduce ocular dominance plasticity (ODP) because negative feedback from the slow homeostatic plasticity observed in ODP cannot stabilize the positive feedback of fast Hebbian plasticity. We propose a new model in which synaptic strength is the product of a synapse-specific Hebbian factor and a postsynaptic-cell-specific homeostatic factor, with each factor separately arriving at a stable inactive state. This model captures ODP dynamics and has plausible biophysical substrates. We experimentally confirm model predictions that plasticity is inactive at stable states and that synaptic strength overshoots during recovery from visual deprivation. These results highlight the importance of multiple regulatory pathways for interactions of plasticity mechanisms operating over separate timescales.
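The multiplicative arrangement described above can be sketched in a few lines. The update rules below are illustrative stand-ins (a soft-bounded Hebbian factor and multiplicative synaptic scaling), not the model's exact biophysical equations, and all rates and targets are assumed values; the sketch only shows the structural point that the effective strength is a product of two factors updated on separate timescales.

```python
# Hedged sketch: effective synaptic strength as the product of a fast,
# synapse-specific Hebbian factor (rho) and a slow, cell-wide homeostatic
# factor (H).  Specific rules and parameters are illustrative stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_syn = 20
rho = rng.uniform(0.3, 0.6, n_syn)    # Hebbian factors (soft-bounded in [0, 1])
H = 1.0                               # homeostatic factor shared by the cell
y_target = 10.0                       # target postsynaptic activity
eta_hebb, eta_homeo = 1e-3, 1e-5      # fast vs. slow rates (separate timescales)

for step in range(50000):
    x = rng.poisson(1.0, n_syn).astype(float)    # presynaptic activity
    y = (H * rho) @ x                            # the product sets effective strength
    rho += eta_hebb * x * y * rho * (1.0 - rho)  # correlation-driven, saturating
    H *= 1.0 + eta_homeo * (y_target - y)        # multiplicative scaling toward target

print("Hebbian factors (min, max):", np.round([rho.min(), rho.max()], 2),
      " homeostatic factor:", round(H, 3))
```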
Summary What causes critical periods (CPs) to open? For the best-studied case, ocular dominance plasticity in primary visual cortex in response to monocular deprivation (MD), the maturation of inhibition is necessary and sufficient. How does inhibition open the CP? We present a novel theory: the transition from pre-CP to CP plasticity arises because inhibition preferentially suppresses responses to spontaneous relative to visually driven input activity, switching learning cues from internal to external sources. This differs from previous proposals in (1) arguing that the CP can open without changes in plasticity mechanisms when, through circuit development, activity patterns become more sensitive to sensory experience; and (2) explaining not simply a transition from no plasticity to plasticity, but rather the change in outcome of MD-induced plasticity from pre-CP to CP. More broadly, hierarchical organization of sensory-motor pathways may develop through a cascade of CPs induced as circuit maturation progresses from “lower” to “higher” cortical areas.
Randomly connected networks of neurons exhibit a transition from fixed-point to chaotic activity as the variance of their synaptic connection strengths is increased. In this study, we analytically evaluate how well a small external input can be reconstructed from a sparse linear readout of network activity. At the transition point, known as the edge of chaos, networks display a number of desirable features, including large gains and integration times. Away from this edge, in the nonchaotic regime that has been the focus of most models and studies, gains and integration times fall off dramatically, which implies that parameters must be fine tuned with considerable precision if high performance is required. Here we show that, near the edge, decoding performance is characterized by a critical exponent that takes a different value on the two sides. As a result, when the network units have an odd saturating nonlinear response function, the falloff in gains and integration times is much slower on the chaotic side of the transition. This means that, under appropriate conditions, good performance can be achieved with less fine tuning beyond the edge, within the chaotic regime.
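As a rough numerical companion to the result described above, the sketch below simulates a discrete-time random network with an odd saturating (tanh) nonlinearity, drives it with a weak scalar input, and fits a sparse linear readout by ridge regression to reconstruct a delayed copy of that input. The discrete-time formulation, network size, delay, and regularization are illustrative choices; the paper's analysis concerns a continuous-time model near the transition at gain g = 1.

```python
# Hedged sketch: reconstruction of a weak input from a sparse linear readout of a
# random tanh network, for gains below, at, and above the edge of chaos (g = 1).
# Discrete-time dynamics and all parameter values are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N, T, delay = 300, 2000, 5            # network size, time steps, readout delay

def readout_error(g):
    J = rng.normal(0.0, g / np.sqrt(N), (N, N))   # random coupling with gain g
    w_in = rng.normal(0.0, 1.0, N)                # input weights
    s = rng.normal(0.0, 0.1, T)                   # weak scalar input signal
    x = np.zeros(N)
    X = np.zeros((T, N))
    for t in range(T):
        x = np.tanh(J @ x + w_in * s[t])          # odd saturating nonlinearity
        X[t] = x
    keep = rng.choice(N, size=30, replace=False)  # sparse readout: 30 of N units
    A, y = X[delay:, keep], s[:-delay]            # reconstruct the delayed input
    w_out = np.linalg.solve(A.T @ A + 1e-3 * np.eye(len(keep)), A.T @ y)
    return float(np.mean((A @ w_out - y) ** 2) / np.var(y))

for g in (0.8, 1.0, 1.2):                          # below, at, and beyond the edge
    print(f"g = {g}: normalized reconstruction error = {readout_error(g):.3f}")
```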
An animal's awareness of its location in space depends on the activity of place cells in the hippocampus. How the brain encodes the spatial position of others has not yet been identified. We investigated neuronal representations of other animals' locations in the dorsal CA1 region of the hippocampus with an observational T-maze task in which one rat was required to observe another rat's trajectory to successfully retrieve a reward. Information reflecting the spatial location of both the self and the other was jointly and discretely encoded by CA1 pyramidal cells in the observer rat. A subset of CA1 pyramidal cells exhibited spatial receptive fields that were identical for the self and the other. These findings demonstrate that hippocampal spatial representations include dimensions for both self and nonself.