Gamma oscillations are believed to play a critical role in information processing, encoding, and retrieval. Inhibitory interneuronal network gamma (ING) oscillations may arise from a coupled oscillator mechanism, in which individual neurons oscillate, or from a population oscillator, in which individual neurons fire sparsely and stochastically. All ING mechanisms, including the one proposed herein, rely on alternating waves of inhibition and windows of opportunity for spiking. The coupled oscillator model implemented with Wang-Buzsáki model neurons is not sufficiently robust to heterogeneity in excitatory drive, and therefore in intrinsic frequency, to account for in vitro models of ING. Similarly, in a tightly synchronized regime, the stochastic population oscillator model is often characterized by sparse firing, whereas interneurons both in vivo and in vitro do not fire sparsely during gamma, but rather fire on average every other cycle. We substituted so-called resonator neural models, which exhibit class 2 excitability and postinhibitory rebound (PIR), for the integrators that are typically used. This substitution results in much greater robustness to heterogeneity, which actually increases as the average participation in spikes per cycle approaches physiological levels. Moreover, dynamic clamp experiments that show autapse-induced firing in entorhinal cortical interneurons support the idea that PIR can serve as a network gamma mechanism. Furthermore, parvalbumin-positive (PV+) cells were much more likely to display both PIR and autapse-induced firing than GAD2+ cells, supporting the view that PV+ fast-firing basket cells are more likely to exhibit class 2 excitability than other types of inhibitory interneurons.
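The PIR phenomenon invoked above can be illustrated with a minimal, self-contained sketch. This is not the model used in the study: it is an Izhikevich-type neuron with the canonical "rebound spike" parameter set (a = 0.03, b = 0.25, c = -60, d = 4), and the pulse timing and amplitude below are illustrative assumptions. Released from a brief hyperpolarizing pulse, the cell fires a rebound spike even though it is silent without input.

```python
def simulate_rebound(i_pulse=-15.0, t_on=20.0, t_off=25.0, t_end=200.0, dt=0.05):
    """Euler integration of an Izhikevich neuron; returns spike times and V trace."""
    a, b, c, d = 0.03, 0.25, -60.0, 4.0    # canonical "rebound spike" parameters
    v, u = -64.0, 0.25 * -64.0             # start near the resting equilibrium
    spikes, trace = [], []
    n_steps = int(round(t_end / dt))
    for k in range(n_steps):
        t = k * dt
        i_ext = i_pulse if t_on <= t < t_off else 0.0   # brief hyperpolarization
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + i_ext)
        u += dt * a * (b * v - u)
        if v >= 30.0:                      # spike: reset membrane, bump recovery
            spikes.append(t)
            v, u = c, u + d
        trace.append(v)
    return spikes, trace
```

With the hyperpolarizing pulse, the neuron spikes shortly after release; rerunning with `i_pulse=0` leaves it quiescent, which is the signature of PIR that the dynamic clamp experiments exploit.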
Numerical simulations of brain networks are a critical part of our efforts to understand brain function under normal and pathological conditions. For several decades, the community has developed many software packages and simulators to accelerate research in computational neuroscience. In this article, we select the three most popular simulators, as determined by the number of models in the ModelDB database, namely NEURON, GENESIS, and BRIAN, and perform an independent evaluation of them. In addition, we study NEST, one of the leading simulators of the Human Brain Project. First, we examine them based on one of the most important characteristics, the range of supported models. Our investigation reveals that brain network simulators may be biased toward supporting a specific set of models. However, all simulators tend to expand the supported range of models by providing a universal environment for the computational study of individual neurons and brain networks. Next, our investigation of computational architecture and efficiency indicates that all simulators compile the most computationally intensive procedures into binary code, with the aim of maximizing computational performance. However, not all simulators provide the simplest method for module development and/or guarantee efficient binary code. Third, a study of their amenability to high-performance computing reveals that NEST can almost transparently map an existing model onto a cluster or multicore computer, while NEURON requires code modification if a model developed for a single computer has to be mapped onto a computational cluster. Interestingly, parallelization is the weakest characteristic of BRIAN, which provides no support for cluster computations and limited support for multicore computers. Fourth, we identify the level of user support and frequency of usage for all simulators.
Finally, we carry out an evaluation using two case studies: a large network with simplified neural and synaptic models and a small network with detailed models. These two case studies allow us to avoid bias toward a particular software package. The results indicate that BRIAN provides the most concise language for both cases considered. Furthermore, as expected, NEST mostly favors large network models, while NEURON is better suited for detailed models. Overall, the case studies reinforce our general observation that simulators are biased in computational performance toward specific types of brain network models.
We show how to predict whether a neural network will exhibit global synchrony (a one-cluster state) or a two-cluster state, based on the assumption of pulsatile coupling; the prediction depends critically on the phase response curve (PRC) generated by the appropriate perturbation from a partner cluster. Our results hold for a monotonically increasing PRC (meaning longer delays as the phase increases), which likely characterizes inhibitory fast-spiking basket and cortical low-threshold-spiking interneurons in response to strong inhibition. Conduction delays stabilize synchrony for this PRC shape, whereas they destroy two-cluster states, the former by avoiding a destabilizing discontinuity and the latter by approaching it. With conduction delays, stronger coupling strength can promote a one-cluster state, so the weak coupling limit is not applicable here. We show how jitter can destabilize global synchrony but not a two-cluster state. Local stability of global synchrony in an all-to-all network does not guarantee that global synchrony can be observed in an appropriately scaled sparsely connected network; the basin of attraction can be inferred from the PRC and must be sufficiently large. Two-cluster synchrony is not obviously different from one-cluster synchrony in the presence of noise and may be the actual substrate for oscillations observed in the local field potential (LFP) and the electroencephalogram (EEG) in situations where global synchrony is not possible. Transitions between cluster states may change the frequency of the rhythms observed in the LFP or EEG. Transitions between cluster states within an inhibitory subnetwork may allow more effective recruitment of pyramidal neurons into the network rhythm. NEW & NOTEWORTHY We show that jitter induced by sparse connectivity can destabilize global synchrony but not a two-cluster state with two smaller clusters firing alternately. On the other hand, conduction delays stabilize synchrony and destroy two-cluster states.
These results hold if each cluster exhibits a phase response curve similar to one that characterizes fast-spiking basket and cortical low-threshold-spiking cells for strong inhibition. Either a two-cluster or a one-cluster state might provide the oscillatory substrate for neural computations.
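The return-map style of reasoning underlying these predictions can be made concrete with a deliberately simplified toy, not the models analyzed here: two identical pulse-coupled oscillators with unit period, where an input arriving at phase φ delays the receiver by an assumed PRC d(φ) = 0.1φ (monotonically increasing, in the spirit of the PRC class considered, but an arbitrary choice). Iterating the alternating-firing map shows convergence to a stable alternately firing state, the two-oscillator analog of a two-cluster solution; the magnitude of the map's slope at the fixed point (here 0.9 per half cycle, below one) is what determines stability.

```python
def prc_delay(phi):
    # assumed monotonically increasing delay-type PRC (illustrative, not measured)
    return 0.1 * phi

def half_cycle_map(phi, prc=prc_delay):
    """Map the quiet oscillator's phase at its partner's firing one firing ahead.

    The input delays the quiet oscillator to phi - prc(phi); it then fires after
    1 - (phi - prc(phi)) time units, at which moment its partner (just reset to
    phase 0) has advanced to exactly that phase.
    """
    phi_after_input = phi - prc(phi)
    return 1.0 - phi_after_input

phi = 0.9                      # arbitrary initial phase offset
for _ in range(400):
    phi = half_cycle_map(phi)
# the iteration settles at the fixed point of phi = 1 - 0.9*phi,
# i.e. phi* = 1/1.9, an alternating (antiphase-like) firing pattern
```

With d(φ) = 0.1φ the half-cycle map is φ → 1 − 0.9φ, a contraction, so any initial offset converges to the alternating state; steeper or discontinuous PRCs change this conclusion, which is the crux of the stability analysis in the abstract.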
We address how feedback to a bursting biological pacemaker with intrinsic variability in cycle length can affect that variability. Specifically, we examine a hybrid circuit constructed of an isolated crab anterior burster (AB)/pyloric dilator (PD) pyloric pacemaker receiving virtual feedback via dynamic clamp. This virtual feedback generates artificial synaptic input to PD with timing determined by adjustable phase response dynamics that mimic average burst intervals generated by the lateral pyloric neuron (LP) in the intact pyloric network. Using this system, we measure network period variability dependence on the feedback element's phase response dynamics and find that a constant response interval confers minimum variability. We further find that these optimal dynamics are characteristic of the biological pyloric network. Building upon our previous theoretical work mapping the firing intervals in one cycle onto the firing intervals in the next cycle, we create a theoretical map of the distribution of all firing intervals in one cycle to the distribution of firing intervals in the next cycle. We then obtain an integral equation for a stationary self-consistent distribution of the network periods of the hybrid circuit, which can be solved numerically given the uncoupled pacemaker's distribution of intrinsic periods, the nature of the network's feedback, and the phase resetting characteristics of the pacemaker. The stationary distributions obtained in this manner are strongly predictive of the experimentally observed distributions of hybrid network period. This theoretical framework can provide insight into optimal feedback schemes for minimizing variability to increase reliability or maximizing variability to increase flexibility in central pattern generators driven by pacemakers with feedback.
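The fixed-point idea behind the stationary distribution can be sketched with a drastically simplified stand-in (a linear AR(1) toy, not the pyloric model): if feedback maps each network period to the next via a contraction toward a mean period plus intrinsic noise, iterating the map drives any initial distribution of periods to a stationary one, whose variance for this toy can be checked against the analytic value σ²/(1 − a²). The map coefficient, mean period, and noise level below are all hypothetical.

```python
import random

def stationary_period_sample(a=0.5, mean_period=1.0, sigma=0.1,
                             n_burn=1000, n_keep=200_000, seed=42):
    """Iterate T[n+1] = a*T[n] + (1-a)*mean + noise; return the stationary tail."""
    rng = random.Random(seed)
    t = mean_period
    for _ in range(n_burn):                    # discard the transient
        t = a * t + (1.0 - a) * mean_period + rng.gauss(0.0, sigma)
    samples = []
    for _ in range(n_keep):                    # collect stationary samples
        t = a * t + (1.0 - a) * mean_period + rng.gauss(0.0, sigma)
        samples.append(t)
    return samples

samples = stationary_period_sample()
n = len(samples)
mean = sum(samples) / n
var = sum((x - mean) ** 2 for x in samples) / n
# analytic stationary variance of this AR(1) toy: sigma**2 / (1 - a**2)
```

The real analysis replaces this linear map with the pacemaker's phase resetting and the feedback element's response dynamics, and solves the resulting integral equation for the self-consistent distribution; the toy only illustrates why a stationary distribution exists and how it could be found by iteration.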
The synchronization tendencies of networks of oscillators have been studied intensely. We assume a network of all-to-all pulse-coupled oscillators in which the effect of a pulse is independent of the number of oscillators that simultaneously emit a pulse and the normalized delay (the phase resetting) is a monotonically increasing function of oscillator phase with the slope everywhere less than one and a value greater than 2φ−1, where φ is the normalized phase. Order switching cannot occur; the only possible solutions are globally attracting synchrony and cluster solutions with a fixed firing order. For small conduction delays, we prove the former stable and all other possible attractors nonexistent due to the destabilizing discontinuity of the phase resetting at a phase of 0.
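The sufficient conditions stated here, a monotonically increasing resetting with slope everywhere less than one and value greater than 2φ − 1, can be checked numerically for any given PRC. The sketch below evaluates the three conditions on a phase grid for two hypothetical example curves; it verifies the hypotheses only and does not reproduce the stability proof.

```python
def satisfies_sync_conditions(prc, n_grid=1000, eps=1e-9):
    """Grid check: prc increasing, slope < 1, and prc(phi) > 2*phi - 1 on [0, 1)."""
    h = 1.0 / n_grid
    for k in range(n_grid):
        phi = k * h
        slope = (prc(phi + h) - prc(phi)) / h
        if slope <= eps:                       # must be monotonically increasing
            return False
        if slope >= 1.0:                       # slope must be everywhere below one
            return False
        if prc(phi) <= 2.0 * phi - 1.0:        # must lie above the line 2*phi - 1
            return False
    return True

# hypothetical example PRCs (normalized resetting as a function of phase)
ok_prc = lambda phi: 0.3 + 0.8 * phi       # meets all three conditions
bad_prc = lambda phi: 0.5 * phi            # falls below 2*phi - 1 for phi >= 2/3
```

A tabulated experimental PRC could be wrapped in an interpolating function and passed through the same check to decide whether the theorem's hypotheses apply.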