What fundamental properties of synaptic connectivity in the neocortex stem from the ongoing dynamics of synaptic changes? In this study, we seek to find the rules shaping the stationary distribution of synaptic efficacies in the cortex. To address this question, we combined chronic imaging of hundreds of spines in the auditory cortex of mice in vivo over weeks with modeling techniques to quantitatively study the dynamics of spines, the morphological correlates of excitatory synapses in the neocortex. We found that the stationary distribution of spine sizes of individual neurons can be exceptionally well described by a log-normal function. We furthermore show that spines exhibit substantial volatility in their sizes at timescales that range from days to months. Interestingly, the magnitude of changes in spine sizes is proportional to the size of the spine. Such multiplicative dynamics are in contrast with conventional models of synaptic plasticity, learning, and memory, which typically assume additive dynamics. Moreover, we show that the ongoing dynamics of spine sizes can be captured by a simple phenomenological model that operates at two timescales of days and months. This model converges to a log-normal distribution, bridging the gap between synaptic dynamics and the stationary distribution of synaptic efficacies.
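The multiplicative dynamics and log-normal stationary distribution described above can be illustrated with a toy Kesten-type process. This is a sketch under our own assumptions; the parameters are illustrative, not fitted to the data in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy Kesten-type process (parameters are illustrative): each spine size
# is multiplied by a noisy factor at every "day", plus a small additive
# term that keeps small spines from disappearing entirely.
n_spines, n_days = 20_000, 3_000
s = np.ones(n_spines)
for _ in range(n_days):
    s = s * rng.lognormal(-0.01, 0.1, n_spines) + 0.01

# Multiplicative dynamics: absolute size changes scale with spine size,
# so large spines fluctuate more than small ones.
prev = s.copy()
s = s * rng.lognormal(-0.01, 0.1, n_spines) + 0.01
delta = np.abs(s - prev)
med = np.median(prev)
larger_fluctuate_more = delta[prev >= med].std() > delta[prev < med].std()

# The stationary sizes are strongly right-skewed, as a log-normal is.
z = (s - s.mean()) / s.std()
skewness = float((z ** 3).mean())
print(larger_fluctuate_more, skewness > 1.0)
```

The additive term matters: under purely multiplicative noise with negative drift, all sizes would collapse toward zero instead of settling into a stationary distribution.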
A persistent change in neuronal activity after brief stimuli is a common feature of many neuronal microcircuits. This persistent activity can be sustained by ongoing reverberant network activity or by the intrinsic biophysical properties of individual cells. Here we demonstrate that rat and guinea pig cerebellar Purkinje cells in vivo show bistability of membrane potential and spike output on the time scale of seconds. The transition between membrane potential states can be bidirectionally triggered by the same brief current pulses. We also show that sensory activation of the climbing fiber input can switch Purkinje cells between the two states. The intrinsic nature of Purkinje cell bistability and its control by sensory input can be explained by a simple biophysical model. Purkinje cell bistability may have a key role in the short-term processing and storage of sensory information in the cerebellar cortex.
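The intrinsic bistability described above can be sketched with a minimal double-well model. This is our toy stand-in, not the paper's conductance-based Purkinje cell model:

```python
import numpy as np

# Toy double-well dynamics: dV/dt = V - V**3 + I(t), with stable states
# near V = -1 ("down") and V = +1 ("up"). Pulses are (t_on, t_off, amp).
def simulate(pulses, v0=-1.0, t_end=100.0, dt=0.01):
    v, t, trace = v0, 0.0, []
    while t < t_end:
        i_ext = sum(a for (t0, t1, a) in pulses if t0 <= t < t1)
        v += dt * (v - v ** 3 + i_ext)
        trace.append(v)
        t += dt
    return np.array(trace)

# A brief depolarizing pulse switches down -> up, and the up state
# persists long after the pulse ends; a brief hyperpolarizing pulse
# switches up -> down. With no input, each state persists indefinitely.
up = simulate([(20.0, 22.0, 1.5)], v0=-1.0)
down = simulate([(20.0, 22.0, -1.5)], v0=1.0)
rest = simulate([], v0=-1.0)
print(up[-1] > 0.9, down[-1] < -0.9, rest[-1] < -0.9)
```

In the real cells the very same pulse can toggle the state in both directions, which this symmetric toy model does not capture; it only demonstrates the persistence of two stable membrane states.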
Delayed comparison tasks are widely used in the study of working memory and perception in psychology and neuroscience. It has long been known, however, that decisions in these tasks are biased. When the two stimuli in a delayed comparison trial are small in magnitude, subjects tend to report that the first stimulus is larger than the second stimulus. In contrast, subjects tend to report that the second stimulus is larger than the first when the stimuli are relatively large. Here we study the computational principles underlying this bias, also known as the contraction bias. We propose that the contraction bias results from a Bayesian computation in which a noisy representation of a magnitude is combined with a priori information about the distribution of magnitudes to optimize performance. We test our hypothesis on choice behavior in a visual delayed comparison experiment by studying the effect of (i) changing the prior distribution and (ii) changing the uncertainty in the memorized stimulus. We show that choice behavior under both manipulations is consistent with that of an observer who uses Bayesian inference to improve performance. Moreover, our results suggest that the contraction bias arises during memory retrieval/decision making and not during memory encoding. These results support the notion that the contraction bias illusion can be understood as resulting from optimality considerations.
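The proposed Bayesian computation can be illustrated for the simplest Gaussian case. The numbers (`prior_mean`, `prior_var`, `noise_var`) are our own illustrative assumptions, not the experiment's values:

```python
# Gaussian prior over magnitudes combined with a noisy memory of the
# first stimulus: the posterior mean is a precision-weighted average,
# which pulls ("contracts") the estimate toward the prior mean.
prior_mean, prior_var = 50.0, 100.0
noise_var = 100.0  # memory noise on the remembered first stimulus

def posterior_mean(measurement):
    w = prior_var / (prior_var + noise_var)  # weight on the measurement
    return w * measurement + (1 - w) * prior_mean

# Small magnitudes are overestimated, large ones underestimated:
print(posterior_mean(30.0), posterior_mean(70.0))  # prints 40.0 60.0
```

With both stimuli small (below the prior mean), the remembered first stimulus is pulled upward, so the observer tends to report "first larger"; with both large, it is pulled downward, producing "second larger", exactly the pattern described above.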
There is accumulating evidence that prior expectations play an important role in perception. The Bayesian framework is the standard computational approach to explain how prior knowledge about the distribution of expected stimuli is combined with noisy observations in order to improve performance. However, it is unclear what information about the prior distribution is acquired by the perceptual system over short periods of time and how this information is utilized in the process of perceptual decision making. Here we address this question using a simple two-tone discrimination task. We find that the “contraction bias”, in which small magnitudes are overestimated and large magnitudes are underestimated, dominates the pattern of responses of human participants. This contraction bias is consistent with the Bayesian hypothesis in which the true prior information is available to the decision-maker. However, a trial-by-trial analysis of the pattern of responses reveals that the contribution of the most recent trials to performance is overweighted compared with the predictions of a standard Bayesian model. Moreover, we study participants' performance under atypical distributions of stimuli and demonstrate substantial deviations from the ideal Bayesian detector, suggesting that the brain utilizes a heuristic approximation of Bayesian inference. We propose a biologically plausible model, in which the decision in the two-tone discrimination task is based on a comparison between the second tone and an exponentially decaying average of the first tone and past tones. We show that this model accounts for both the contraction bias and the deviations from the ideal Bayesian detector hypothesis. These findings demonstrate the power of Bayesian-like heuristics in the brain, as well as their limitations, reflected in their failure to fully adapt to novel environments.
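The proposed heuristic, comparison against an exponentially decaying average, can be sketched in a few lines. The decay rate and the stimulus values are our illustrative assumptions:

```python
# Heuristic model: the second tone is compared not with the first tone
# itself but with an exponentially decaying average of the first tone
# and the first tones of past trials.
def choose(trials, decay=0.4):
    """trials: list of (tone1, tone2) pairs; returns a list of choices."""
    avg, choices = None, []
    for tone1, tone2 in trials:
        avg = tone1 if avg is None else decay * avg + (1 - decay) * tone1
        choices.append("first" if avg > tone2 else "second")
    return choices

# After many loud trials the running average is high, so on a quieter
# probe pair the model reports "first is larger" even though tone2 > tone1:
history = [(80.0, 80.0)] * 20
probe = [(58.0, 60.0)]
print(choose(history + probe)[-1])  # prints "first"
```

This reproduces both the contraction bias (the comparison standard is dragged toward the recent stimulus distribution) and the overweighting of recent trials, since the average decays exponentially.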
Recent experiments demonstrate substantial volatility of excitatory connectivity in the absence of any learning. This challenges the hypothesis that stable synaptic connections are necessary for long-term maintenance of acquired information. Here we measure ongoing synaptic volatility and use theoretical modeling to study its consequences on cortical dynamics. We show that in the balanced cortex, patterns of neural activity are primarily determined by inhibitory connectivity, despite the fact that most synapses and neurons are excitatory. Similarly, we show that the inhibitory network is more effective in storing memory patterns than the excitatory one. As a result, network activity is robust to ongoing volatility of excitatory synapses, as long as this volatility does not disrupt the balance between excitation and inhibition. We thus hypothesize that inhibitory connectivity, rather than excitatory, controls the maintenance and loss of information over long periods of time in the volatile cortex.
The probability of choosing an alternative in a long sequence of repeated choices is proportional to the total reward derived from that alternative, a phenomenon known as Herrnstein's matching law. This behavior is remarkably conserved across species and experimental conditions, but its underlying neural mechanisms are still unknown. Here, we propose a neural explanation of this empirical law of behavior. We hypothesize that there are forms of synaptic plasticity driven by the covariance between reward and neural activity, and we prove mathematically that matching is a generic outcome of such plasticity. Two hypothetical types of synaptic plasticity, embedded in decision-making neural network models, are shown to yield matching behavior in numerical simulations, in accord with our general theorem. We show how this class of models can be tested experimentally by making reward contingent not only on the choices of the subject but also directly on fluctuations in neural activity. Maximization is shown to be a generic outcome of synaptic plasticity driven by the sum of the covariances between reward and all past neural activities.
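A minimal simulation of covariance-driven plasticity on concurrent variable-interval schedules, the setting in which matching is classically observed. The two-action architecture and all parameters here are our own illustration, not the paper's network models:

```python
import numpy as np

rng = np.random.default_rng(2)

# Concurrent variable-interval schedules: a reward is "armed" on each
# alternative at a fixed rate and waits there until collected.
arm_prob = np.array([0.15, 0.05])  # alternative 0 arms 3x as often as 1
w = np.zeros(2)                    # decision variables of the two actions
armed = np.array([False, False])
eta = 0.05
choices = np.zeros(2)
rewards = np.zeros(2)

for _ in range(200_000):
    armed |= rng.random(2) < arm_prob
    p = 1.0 / (1.0 + np.exp(w[1] - w[0]))  # prob of choosing action 0
    c = 0 if rng.random() < p else 1
    r = 1.0 if armed[c] else 0.0
    armed[c] = False
    choices[c] += 1
    rewards[c] += r
    # Covariance rule: dw ~ reward x (activity - mean activity). At its
    # fixed point Cov(reward, activity) = 0, which implies matching.
    a = np.array([1.0, 0.0]) if c == 0 else np.array([0.0, 1.0])
    w += eta * r * (a - np.array([p, 1.0 - p]))

choice_frac = choices[0] / choices.sum()
reward_frac = rewards[0] / rewards.sum()
print(abs(choice_frac - reward_frac) < 0.1)  # fractions approximately match
```

The fraction of choices allocated to the richer alternative approximately equals the fraction of rewards it yields, the matching law, without the rule ever representing that law explicitly.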
The ability to represent time is an essential component of cognition, but its neural basis is unknown. Although extensively studied both behaviorally and electrophysiologically, a general theoretical framework describing the elementary neural mechanisms used by the brain to learn temporal representations is lacking. It is commonly believed that the underlying cellular mechanisms reside in high-order cortical regions, but recent studies show sustained neural activity in primary sensory cortices that can represent the timing of expected reward. Here, we show that local cortical networks can learn temporal representations through a simple framework predicated on reward-dependent expression of synaptic plasticity. We assert that temporal representations are stored in the lateral synaptic connections between neurons and demonstrate that reward-modulated plasticity is sufficient to learn these representations. We implement our model numerically to explain reward-time learning in the primary visual cortex (V1), demonstrate experimental support, and suggest additional experimentally verifiable predictions.

Our brains process time with such instinctual ease that the difficulty of defining what time is, in a neural sense, seems paradoxical. There is a rich literature in experimental neuroscience describing the temporal dynamics of both cellular and system-level neuronal processes, and many insightful psychophysical studies have revealed perceptual correlates of time. Despite this, and the clear importance of accurate temporal processing at all levels of behavior, we still know little about how time is represented or used by the brain (1). Temporal processing is classically understood as a higher-order function, and although there is some disagreement (2, 3), it is often argued that dedicated structures or regions in the brain are responsible for representing time (4).
Because different mechanisms are likely responsible for computing timing at different time scales (1, 5, 6), and because there is evidence for modality-specific temporal mechanisms (7), an alternative possibility is that timing processes develop locally within different brain regions. Recent evidence indicates that temporal representations are expressed in primary sensory cortices (8-10) and that reward-based reinforcement can affect the form of stimulus-driven activity in the primary somatosensory cortex (11-13). In particular, Shuler and Bear (9) showed that neurons in rat primary visual cortex can develop persistent activity, evoked by brief visual stimuli, that robustly represents the temporal interval between a visual stimulus and paired reward (Fig. 1). A mechanistic framework capable of describing how a neural substrate can learn the observed temporal representations does not exist. Here, we explain how these temporal signals can be encoded in recurrent excitatory synaptic connections and how a local network can learn specific temporal instantiations through reward-modulated plasticity. Although our model is potentially ...
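The core idea, reward delivery gating plasticity at synapses tagged by recent activity, can be sketched with a simplified delay-line network. This architecture is our simplification for illustration; the model described above uses recurrent lateral connectivity:

```python
import numpy as np

# Simplified delay line: unit t fires at time t after the stimulus, each
# spike leaving a decaying eligibility trace; reward delivery gates
# Hebbian strengthening of the currently traced synapses.
n_units, reward_t, eta, tau = 50, 20, 0.5, 3.0
w = np.zeros(n_units)
for trial in range(50):
    trace = np.zeros(n_units)
    for t in range(n_units):
        trace *= np.exp(-1.0 / tau)  # eligibility traces decay
        trace[t] += 1.0              # unit t fires at time t
        if t == reward_t:
            w += eta * trace         # reward gates plasticity
print(int(np.argmax(w)))  # prints 20: weights peak at the rewarded delay
```

After learning, the weight profile peaks at the stimulus-reward interval, so a readout of the weighted activity comes to predict the time of expected reward, qualitatively like the V1 persistent activity described above.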
Does this dramatic difference in physiology imply a difference in function? Not necessarily. We show that the physiological properties of the vertical lobe (VL) neurons, particularly the linear input-output relations of the intermediate-layer neurons, allow the two different networks to perform the same computation. The convergence of different networks to the same computational capacity indicates that it is the computation, not the specific properties of the network, that is self-organized or selected for by evolutionary pressure.