The attractor neural network scenario is a popular framework for memory storage in the association cortex, but there is still a large gap between models based on this scenario and experimental data. We study a recurrent network model in which both learning rules and distribution of stored patterns are inferred from distributions of visual responses for novel and familiar images in the inferior temporal cortex (ITC). Unlike classical attractor neural network models, our model exhibits graded activity in retrieval states, with distributions of firing rates that are close to lognormal. Inferred learning rules are close to maximizing the number of stored patterns within a family of unsupervised Hebbian learning rules, suggesting that learning rules in ITC are optimized to store a large number of attractor states. Finally, we show that there exist two types of retrieval states: one in which firing rates are constant in time and another in which firing rates fluctuate chaotically.
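The classical attractor scenario this abstract builds on can be sketched in a few lines. The snippet below is a minimal Hopfield-style illustration with binary patterns and an outer-product Hebbian rule; the paper's model instead uses a more general inferred learning rule and graded, roughly lognormal firing rates, so everything here is illustrative rather than their actual model.

```python
import numpy as np

# Minimal classical sketch of the attractor scenario: random binary
# patterns stored with an outer-product Hebbian rule. (Illustrative
# only; the paper infers a more general unsupervised rule from ITC
# response statistics.)
rng = np.random.default_rng(0)
N, P = 200, 10                            # neurons, stored patterns
xi = rng.choice([-1, 1], size=(P, N))     # random binary patterns

W = (xi.T @ xi) / N                       # Hebbian outer products
np.fill_diagonal(W, 0)                    # no self-coupling

# Retrieval: start from a corrupted cue of pattern 0 and iterate
s = xi[0].copy()
flip = rng.choice(N, size=20, replace=False)
s[flip] *= -1                             # corrupt 10% of the cue
for _ in range(50):
    s = np.sign(W @ s)                    # synchronous updates

overlap = (s @ xi[0]) / N                 # ~1 when retrieval succeeds
```

At this loading (P/N = 0.05, well below the classical capacity), the dynamics fall into the attractor corresponding to the cued pattern.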
Sequential activity has been observed in multiple neuronal circuits across species, neural structures, and behaviors. It has been hypothesized that sequences could arise from learning processes. However, it is still unclear whether biologically plausible synaptic plasticity rules can organize neuronal activity to form sequences whose statistics match experimental observations. Here, we investigate temporally asymmetric Hebbian rules in sparsely connected recurrent rate networks and develop a theory of the transient sequential activity observed after learning. These rules transform a sequence of random input patterns into synaptic weight updates. After learning, recalled sequential activity is reflected in the transient correlation of network activity with each of the stored input patterns. Using mean-field theory, we derive a low-dimensional description of the network dynamics and compute the storage capacity of these networks. Multiple temporal characteristics of the recalled sequential activity are consistent with experimental observations. We find that the degree of sparseness of the recalled sequences can be controlled by nonlinearities in the learning rule. Furthermore, sequences maintain robust decoding, but display highly labile dynamics, when synaptic connectivity is continuously modified due to noise or storage of other patterns, similar to recent observations in hippocampus and parietal cortex. Finally, we demonstrate that our results also hold in recurrent networks of spiking neurons with separate excitatory and inhibitory populations.
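The core mechanism described above, a temporally asymmetric Hebbian rule that turns a sequence of input patterns into transition-storing weights, can be sketched as follows. The linear pre/post factors, the tanh rate dynamics, and all parameter values are illustrative simplifications, not the paper's exact model.

```python
import numpy as np

# Temporally asymmetric Hebbian rule: each input pattern strengthens
# connections from itself onto the *next* pattern, so the weight
# matrix stores transitions. (Illustrative sketch; the paper also
# studies nonlinear versions of the rule and spiking networks.)
rng = np.random.default_rng(1)
N, P = 500, 8
xi = rng.standard_normal((P, N))          # random input patterns

W = sum(np.outer(xi[m + 1], xi[m]) for m in range(P - 1)) / N

# Recall: cue pattern 0, then let the rates evolve freely
r = xi[0].copy()
overlaps = []                             # correlation with each pattern
for t in range(80):
    r = r + 0.2 * (-r + np.tanh(2.0 * (W @ r)))   # leaky rate dynamics
    overlaps.append([float(r @ xi[m]) / N for m in range(P)])
overlaps = np.array(overlaps)             # shape (time, patterns)

# The network transiently correlates with each stored pattern in order
peaks = overlaps.argmax(axis=0)           # peak time per pattern
```

The transient, ordered peaks of the pattern overlaps are the "recalled sequential activity" the abstract refers to; in the mean-field theory these overlaps become the low-dimensional description of the dynamics.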
The cortical amygdala receives direct olfactory inputs and is thought to participate in processing and learning of biologically relevant olfactory cues. As for other brain structures implicated in learning, the principal neurons of the anterior cortical nucleus (ACo) exhibit intrinsic subthreshold membrane potential oscillations in the θ-frequency range. Here we show that nearly 50% of ACo layer II neurons also display electrical resonance, consisting of selective responsiveness to stimuli of a preferential frequency (2–6 Hz). Their impedance profile resembles an electrical band-pass filter with a peak at the preferred frequency, in contrast to the low-pass filter properties of other neurons. Most ACo resonant neurons displayed frequency preference along the whole subthreshold voltage range. We used pharmacological tools to identify the voltage-dependent conductances implicated in resonance. A hyperpolarization-activated cationic current depending on HCN channels underlies resonance at resting and hyperpolarized potentials; notably, this current also participates in resonance at depolarized subthreshold voltages. KV7/KCNQ K+ channels also contribute to resonant behavior at depolarized potentials, but not in all resonant cells. Moreover, resonance was strongly attenuated after blockade of voltage-dependent persistent Na+ channels, suggesting an amplifying role. Remarkably, resonant neurons presented a higher firing probability for stimuli of the preferred frequency. To fully understand the mechanisms underlying resonance in these neurons, we developed a comprehensive conductance-based model including the aforementioned and leak conductances, as well as Hodgkin and Huxley-type channels. The model reproduces the resonant impedance profile and our pharmacological results, allowing a quantitative evaluation of the contribution of each conductance to resonance. It also replicates selective spiking at the resonant frequency and allows a prediction of the temperature-dependent shift in resonance frequency. Our results provide a complete characterization of the resonant behavior of olfactory amygdala neurons and shed light on a putative mechanism for network activity coordination in the intact brain.
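The band-pass impedance profile described above can be reproduced by the textbook linearized mechanism: a leak conductance plus a slow resonant (h-type) current. The parameter values below are illustrative choices, not fits to ACo neurons.

```python
import numpy as np

# Linearized membrane with a slow resonant (h-type) current:
#   C dV/dt = -g_L V - g_h w + I(t),   tau_h dw/dt = V - w
# Parameter values are illustrative, not fitted to ACo neurons.
C, gL, gh, tau = 1.0, 0.05, 0.2, 200.0     # nF, uS, uS, ms (nominal)

freqs = np.linspace(0.5, 20.0, 400)        # stimulation frequency, Hz
omega = 2.0 * np.pi * freqs / 1000.0       # angular frequency, rad/ms

# Impedance of the linearized system: band-pass when gh > 0
Z = 1.0 / (1j * omega * C + gL + gh / (1.0 + 1j * omega * tau))

f_res = freqs[np.abs(Z).argmax()]          # preferred frequency
```

Setting `gh = 0` (the analogue of blocking the HCN current pharmacologically) removes the impedance peak and recovers the low-pass profile of non-resonant cells.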
Two strikingly distinct types of activity have been observed in various brain structures during delay periods of delayed response tasks: Persistent activity (PA), in which a sub-population of neurons maintains an elevated firing rate throughout an entire delay period; and Sequential activity (SA), in which sub-populations of neurons are activated sequentially in time. It has been hypothesized that both types of dynamics can be 'learned' by the relevant networks from the statistics of their inputs, thanks to mechanisms of synaptic plasticity. However, the necessary conditions for a synaptic plasticity rule and input statistics to learn these two types of dynamics in a stable fashion are still unclear. In particular, it is unclear whether a single learning rule is able to learn both types of activity patterns, depending on the statistics of the inputs driving the network. Here, we first characterize the complete bifurcation diagram of a firing rate model of multiple excitatory populations with an inhibitory mechanism, as a function of the parameters characterizing its connectivity. We then investigate how an unsupervised temporally asymmetric Hebbian plasticity rule shapes the dynamics of the network. Consistent with previous studies, we find that for stable learning of PA and SA, an additional stabilization mechanism, such as multiplicative homeostatic plasticity, is necessary. Using the bifurcation diagram derived for fixed connectivity, we study analytically the temporal evolution and the steady state of the learned recurrent architecture as a function of parameters characterizing the external inputs. Slowly changing stimuli lead to PA, while rapidly changing stimuli lead to SA. Our network model shows how a network with plastic synapses can stably and flexibly learn PA and SA in an unsupervised manner.
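The key intuition, that stimulus speed selects between PA and SA under a temporally asymmetric Hebbian rule, can be sketched by comparing the learned connectivity for slow versus fast stimulus streams. Slowly changing stimuli mostly pair a pattern with itself, giving a largely symmetric (attractor-like) matrix; rapidly changing stimuli pair successive patterns, giving a largely asymmetric (sequence-like) matrix. The binary patterns, dwell times, and the asymmetry measure below are illustrative choices, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(2)
N, P = 300, 6
# Sparse binary stimulus patterns (illustrative)
xi = rng.choice([0.0, 1.0], size=(P, N), p=[0.8, 0.2])

def learn(patterns, dwell):
    """Temporally asymmetric Hebb (pre at t, post at t+1) applied to
    a stimulus stream; each stimulus is held for `dwell` time steps."""
    seq = np.repeat(patterns, dwell, axis=0)
    W = np.zeros((N, N))
    for t in range(len(seq) - 1):
        W += np.outer(seq[t + 1], seq[t])
    return W / len(seq)

def asymmetry(W):
    """Relative weight of the antisymmetric part of W."""
    return np.linalg.norm(W - W.T) / np.linalg.norm(W + W.T)

W_slow = learn(xi, dwell=10)   # slowly changing stimuli -> PA-like
W_fast = learn(xi, dwell=1)    # rapidly changing stimuli -> SA-like
```

In the full model the symmetric component supports fixed-point attractors (PA) and the asymmetric component drives transitions between patterns (SA).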
Conductance-based (CB) models are a class of high-dimensional dynamical systems derived from biophysical principles to describe in detail the electrical dynamics of single neurons. Despite the high dimensionality of these models, the dynamics observed for realistic parameter values are generically planar and can be minimally described by two equations. In this work, we derive the conditions for a Bogdanov-Takens (BT) bifurcation in CB models, and we argue that it is plausible that these conditions are verified for experimentally sensible values of the parameters. We show numerically that the cubic BT normal form, a two-variable dynamical system, exhibits all of the diversity of bifurcations generically observed in single neuron models. We show that the Morris-Lecar model is approximately equivalent to the cubic Bogdanov-Takens normal form for realistic values of parameters. Furthermore, we explicitly calculate the quadratic coefficient of the BT normal form for a generic CB model, finding that when the theoretical I-V curve's curvature is constrained to match experimental observations, the normal form appears to be naturally cubic. We propose the cubic BT normal form as a robust minimal model for single neuron dynamics that can be derived from biophysically realistic CB models.
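To make the "two-variable minimal model" concrete, here is one common cubic Liénard-type parameterization of a BT-like planar system, with its fixed points found from the cubic nullcline and a trajectory integrated by forward Euler. The coefficient values are illustrative and are not derived from any specific conductance-based model.

```python
import numpy as np

# A cubic Lienard-type planar system in the spirit of the cubic BT
# normal form (illustrative parameterization and coefficients):
#   dv/dt = w
#   dw/dt = b1 + b2*v + a2*v**2 - v**3 + w*(d - v)
b1, b2, a2, d = 0.0, 1.0, 1.0, -1.0

# Fixed points lie on w = 0, at the real roots of the cubic nullcline
roots = np.roots([-1.0, a2, b2, b1])
real_fp = np.sort(roots[np.abs(roots.imag) < 1e-9].real)

# Forward-Euler integration from just right of the middle fixed point
v, w = 0.1, 0.0
dt = 0.01
for _ in range(40000):
    dv = w
    dw = b1 + b2 * v + a2 * v**2 - v**3 + w * (d - v)
    v, w = v + dt * dv, w + dt * dw
```

For these coefficients the cubic nullcline has three real roots (two stable states flanking a saddle), and the trajectory settles onto the rightmost stable fixed point; varying `b1`, `b2`, and `d` moves the system through the bifurcations a planar neuron model can exhibit.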
Natural animal behavior displays rich lexical and temporal dynamics, even in a stable environment. This implies that behavioral variability arises from sources within the brain, but the origin and mechanics of these processes remain largely unknown. Here, we focus on the observation that the timing of self-initiated actions shows large variability even when they are executed in stable, well-learned sequences. Could this mix of reliability and stochasticity arise within the same circuit? We trained rats to perform a stereotyped sequence of self-initiated actions and recorded neural ensemble activity in secondary motor cortex (M2), which is known to reflect trial-by-trial action timing fluctuations. Using hidden Markov models we established a robust and accurate dictionary between ensemble activity patterns and actions. We then showed that metastable attractors, representing activity patterns with the requisite combination of reliable sequential structure and high transition timing variability, could be produced by reciprocally coupling a high dimensional recurrent network and a low dimensional feedforward one. Transitions between attractors were generated by correlated variability arising from the feedback loop between the two networks. This mechanism predicted a specific structure of low-dimensional noise correlations that were empirically verified in M2 ensemble dynamics. This work suggests a robust network motif as a novel mechanism to support critical aspects of animal behavior and establishes a framework for investigating its circuit origins via correlated variability.
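The "dictionary between ensemble activity patterns and actions" rests on hidden Markov model decoding. The toy Viterbi decoder below illustrates the idea: hidden states stand in for actions, observations for discretized activity patterns, and mostly-self transitions give reliable sequential order with variable dwell times. The transition, emission, and initial probabilities are invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical 3-state HMM: reliable action order, variable dwell time
A = np.array([[0.90, 0.10, 0.00],     # state transition probabilities
              [0.00, 0.90, 0.10],
              [0.10, 0.00, 0.90]])
B = np.array([[0.8, 0.1, 0.1],        # P(activity pattern | state)
              [0.1, 0.8, 0.1],
              [0.1, 0.1, 0.8]])
pi = np.array([1.0, 0.0, 0.0])        # initial state distribution

def viterbi(obs):
    """Most likely hidden state sequence for a list of observations."""
    T, K = len(obs), len(pi)
    logd = np.log(pi + 1e-12) + np.log(B[:, obs[0]] + 1e-12)
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A + 1e-12)
        back[t] = scores.argmax(axis=0)       # best predecessor per state
        logd = scores.max(axis=0) + np.log(B[:, obs[t]] + 1e-12)
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):             # backtrack
        path.append(int(back[t, path[-1]]))
    return path[::-1]

states = viterbi([0, 0, 0, 1, 1, 2, 2, 2])    # decode a pattern sequence
```

In the paper this mapping is fit to M2 ensemble recordings; here the decoded state sequence simply recovers the action sequence implied by the observed patterns.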