Spontaneous cortical population activity exhibits a multitude of oscillatory patterns, which often display synchrony during slow-wave sleep or under certain anesthetics and remain asynchronous during quiet wakefulness. The mechanisms behind these cortical states and the transitions among them are not completely understood. Here we study spontaneous population activity patterns in random networks of spiking neurons of mixed types modeled by Izhikevich equations. Neurons are coupled by conductance-based synapses subject to synaptic noise. We localize the population activity patterns on the parameter diagram spanned by the relative inhibitory synaptic strength and the magnitude of synaptic noise. In the absence of noise, networks display transient activity patterns, either oscillatory or at a constant level. The effect of noise is to turn transient patterns into persistent ones: for weak noise, all activity patterns are asynchronous and non-oscillatory independently of the synaptic strengths; for stronger noise, the patterns have oscillation and synchrony characteristics that depend on the relative inhibitory synaptic strength. In the region of parameter space where the inhibitory synaptic strength exceeds the excitatory one, and for moderate noise magnitudes, networks feature intermittent switches between oscillatory and quiescent states with characteristics similar to those of synchronous and asynchronous cortical states, respectively. We explain these oscillatory and quiescent patterns by combining a phenomenological global description of the network state with local descriptions of individual neurons in their partial phase spaces. Our results point to a bridge from events at the molecular scale of synapses to the cellular scale of individual neurons to the collective scale of neuronal populations. Electronic supplementary material: The online version of this article (10.1007/s10827-018-0688-6) contains supplementary material, which is available to authorized users.
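The Izhikevich model named above reduces each neuron to two coupled equations plus a reset rule. A minimal single-neuron sketch, using the standard "regular spiking" parameter set and plain Euler integration (not the paper's full conductance-based noisy network), is:

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, T=1000.0, dt=0.5):
    """Simulate a single Izhikevich neuron with Euler steps.

    Default parameters give the standard 'regular spiking' regime;
    I is a constant input current. Returns the membrane-potential
    trace and the spike times in ms.
    """
    n = int(T / dt)
    v, u = c, b * c                 # resting initial condition
    vs, spikes = np.empty(n), []
    for i in range(n):
        if v >= 30.0:               # spike cutoff: reset and adapt
            v, u = c, u + d
            spikes.append(i * dt)
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        vs[i] = v
    return vs, spikes
```

With zero input the neuron settles to rest; with sufficient constant drive it fires tonically, which is the single-cell building block of the network dynamics described above.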
In a neuron with a hyperpolarization-activated current (I_{h}), input at the right frequency leads to an enhanced output response. This behavior is known as resonance and is well described by the neuronal impedance. In a simple neuron model we derive equations for the neuron's resonance, and we link its frequency and existence to the biophysical properties of I_{h}. For a small voltage change, the component of the ratio of current change to voltage change (dI/dV) due to the voltage-dependent conductance change (dg/dV) is known as the derivative conductance (G_{h}^{Der}). We show that both G_{h}^{Der} and the current activation kinetics (characterized by the activation time constant τ_{h}) are mainly responsible for controlling the frequency and existence of resonance. Increasing either factor (G_{h}^{Der} or τ_{h}) greatly contributes to the appearance of resonance. We also demonstrate that resonance is voltage dependent because of the voltage dependence of G_{h}^{Der}. Our results have important implications and can be used to predict and explain the resonance properties of neurons with the I_{h} current.
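The roles of G_{h}^{Der} and τ_{h} can be made concrete with the commonly used linearized impedance of a membrane with one slow current, Z(ω) = 1 / (iωC + g_L + G_{h}^{Der}/(1 + iωτ_{h})). The sketch below uses illustrative parameter values, not those of the paper:

```python
import numpy as np

def impedance(f, C=1.0, gL=0.1, gDer=0.3, tau_h=100.0):
    """|Z(f)| of a linearized neuron with a slow h-type current.

    Z(w) = 1 / (i*w*C + gL + gDer / (1 + i*w*tau_h)), where gDer
    plays the role of the derivative conductance G_h^Der and tau_h
    is the activation time constant (ms). Frequencies f are in Hz;
    with times in ms, w = 2*pi*f / 1000.
    """
    w = 2 * np.pi * f / 1000.0           # angular frequency, rad/ms
    Y = 1j * w * C + gL + gDer / (1 + 1j * w * tau_h)
    return np.abs(1.0 / Y)

def resonance_frequency(**kw):
    """Frequency (Hz) at which |Z| peaks; 0 means no resonance."""
    f = np.linspace(0.0, 50.0, 5001)
    return f[np.argmax(impedance(f, **kw))]
```

Setting gDer to zero reduces the model to a passive RC membrane, whose impedance is maximal at f = 0 (no resonance); a sufficiently large gDer with a slow tau_h moves the peak to a nonzero frequency, in line with the abstract's claim that both factors control the existence and frequency of resonance.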
Self-sustained activity persists in the absence of external stimuli and contributes to signal propagation, neural coding, and dynamic stability. It also plays an important role in cognitive processes. In this work, by studying intracellular recordings from CA1 neurons in rats together with results from numerical simulations, we demonstrate that self-sustained activity presents a high variability of patterns, such as low neural firing rates and activity in the form of small bursts in distinct neurons. In our numerical simulations, we consider random networks composed of coupled adaptive exponential integrate-and-fire neurons. The neural dynamics in the random networks comprises regular-spiking (excitatory) and fast-spiking (inhibitory) neurons. We show that both the connection probability and the network size are fundamental properties that give rise to self-sustained activity in qualitative agreement with our experimental results. Finally, we provide a more detailed description of self-sustained activity in terms of lifetime distributions, synaptic conductances, and synaptic currents.
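The adaptive exponential integrate-and-fire (AdEx) model used in these simulations can be sketched for a single neuron as follows. The parameter values are the widely used "regular spiking" set from the original AdEx formulation, not necessarily those of the paper:

```python
import numpy as np

def adex(I=1000.0, T=500.0, dt=0.05):
    """Adaptive exponential integrate-and-fire neuron (Euler).

    C dV/dt = -gL(V-EL) + gL*DT*exp((V-VT)/DT) - w + I
    tau_w dw/dt = a(V-EL) - w;  on V >= Vpeak: V -> Vr, w += b.
    Units: pA, nS, pF, mV, ms. Returns spike times in ms.
    """
    C, gL, EL = 281.0, 30.0, -70.6
    VT, DT, Vr, Vpeak = -50.4, 2.0, -70.6, 0.0
    a, b, tau_w = 4.0, 80.5, 144.0       # adaptation parameters
    V, w, spikes = EL, 0.0, []
    for i in range(int(T / dt)):
        dV = (-gL * (V - EL) + gL * DT * np.exp((V - VT) / DT)
              - w + I) / C
        dw = (a * (V - EL) - w) / tau_w
        V += dt * dV
        w += dt * dw
        if V >= Vpeak:                   # spike: reset and adapt
            V, w = Vr, w + b
            spikes.append(i * dt)
    return spikes
```

The spike-triggered increment b of the adaptation variable w is what produces spike-frequency adaptation, one ingredient behind the low firing rates and small bursts described above.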
The conventional impedance profile of a neuron can identify the presence of resonance and other properties of the neuronal response to oscillatory inputs, such as nonlinear response amplifications, but it cannot distinguish other nonlinear properties such as asymmetries in the shape of the voltage-response envelope. Experimental observations have shown that the response of neurons to oscillatory inputs preferentially enhances either the upper or the lower part of the voltage envelope in different frequency bands. These asymmetric voltage responses arise in a neuron model when it is subjected to oscillatory currents of sufficiently high amplitude and variable frequency. We show how the nonlinearities associated with different ionic currents, or present in the model as captured by its voltage equation, lead to asymmetric responses, and how high-amplitude oscillatory currents emphasize this effect. We propose a geometrical explanation for the phenomenon in which asymmetries result not only from nonlinearities in the activation curves of the currents but also from nonlinearities captured by the nullclines in the phase-plane diagram and from the system's time-scale separation. In addition, we identify an unexpected frequency-dependent pattern that develops in the gating variables of these currents and is a product of strong nonlinearities in the system, as we show by controlling this behavior through manipulation of the activation-curve parameters. The results reported in this paper shed light on the ionic mechanisms by which neurons embedded in the brain process oscillatory information.
All authors contributed equally to the writing and preparation of this work. Keywords: Granger causality, autoregressive process, conditional Granger causality, non-parametric estimation. Physicists are starting to work in areas where noisy-signal analysis is required. In these fields, such as Economics, Neuroscience, and Physics, the notion of causality should be interpreted as a statistical measure. We introduce to the lay reader the Granger causality between two time series and illustrate ways of calculating it: a signal X "Granger-causes" a signal Y if the observation of the past of X increases the predictability of the future of Y when compared to the same prediction done with the past of Y alone.
In other words, for Granger causality between two quantities it suffices that information extracted from the past of one of them improves the forecast of the future of the other, even in the absence of any physical mechanism of interaction. We present derivations of the Granger causality measure in the time and frequency domains and give numerical examples using a non-parametric estimation method in the frequency domain. Parametric methods are addressed in the Appendix. We discuss the limitations and applications of this method and other alternatives to measure causality.
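The time-domain definition above can be illustrated with a short parametric (autoregressive) sketch; the non-parametric frequency-domain estimation discussed in the text is more involved. The model order, coefficients, and toy data below are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar_fit_residual_var(target, predictors, p=2):
    """Least-squares AR(p) fit of target on lagged predictor
    series; returns the residual variance."""
    n = len(target)
    # column for lag k+1 of series s: values s[t-(k+1)], t = p..n-1
    X = np.column_stack([s[p - k - 1:n - k - 1]
                         for s in predictors for k in range(p)])
    y = target[p:]
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.var(y - X @ beta)

def granger(x, y, p=2):
    """Granger causality from x to y: log ratio of the residual
    variance of y's own-past model to that of the model that
    also uses the past of x."""
    restricted = ar_fit_residual_var(y, [y], p)
    full = ar_fit_residual_var(y, [y, x], p)
    return np.log(restricted / full)

# toy system: y is driven by the past of x, but not vice versa
n = 4000
x, y = np.zeros(n), np.zeros(n)
ex, ey = rng.normal(size=n), rng.normal(size=n)
for t in range(2, n):
    x[t] = 0.5 * x[t - 1] + ex[t]
    y[t] = 0.4 * y[t - 1] + 0.8 * x[t - 1] + ey[t]
```

On this toy system, granger(x, y) is clearly positive while granger(y, x) is close to zero, matching the definition: knowing the past of x improves the forecast of y, but not the other way around.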
In network models of spiking neurons, the coupled impact of network structure and synaptic parameters on activity propagation is still an open problem. For spiking networks with hierarchical modular topology, we show that slow spike-train fluctuations emerge due to the increase of either the global synaptic strength parameter or the network hierarchical level, while the network size remains constant. Through an information-theoretical approach we show that information propagation of activity among adjacent modules is enhanced as the number of modules increases, until an optimal value is reached, after which it decreases. This suggests that there is an optimal interplay between hierarchical level and synaptic strengths for information propagation among modules; however, we also find that information transfer measured directly from the spike trains differs from this optimum, indicating that modular organization restructures the information communicated at the mesoscopic level. By examining the increase of synaptic strengths and of the number of modules, we find that the network behavior changes through different mechanisms: (1) an increase of autocorrelations among individual neurons, and (2) an increase of cross-correlations among pairs of neurons, respectively. The latter is more favorable for information propagation. Our results have important implications and suggest roles that link topological features and synaptic levels to the transmission of information in cortical networks.
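One simple way to build a hierarchical modular random network, illustrative rather than the paper's exact construction, is to let the connection probability decay geometrically with the hierarchical distance between nodes:

```python
import numpy as np

def hierarchical_modular(n=256, levels=3, p0=0.5, q=0.25, rng=None):
    """Random directed adjacency matrix with hierarchical modules.

    Connection probability decays geometrically with hierarchical
    distance: p(i, j) = p0 * q**d, where d is the number of binary
    module splits separating i and j (d = 0 inside the smallest
    module). Requires n divisible by 2**levels.
    """
    rng = rng or np.random.default_rng(1)
    size = n // 2**levels              # smallest-module size
    idx = np.arange(n) // size         # smallest-module index
    d = np.full((n, n), levels)        # hierarchical distance
    for lvl in range(levels - 1, -1, -1):
        # nodes whose module paths agree down to this level
        same = (idx[:, None] >> lvl) == (idx[None, :] >> lvl)
        d[same] = lvl
    A = rng.random((n, n)) < p0 * q**d
    np.fill_diagonal(A, False)
    return A
```

Increasing `levels` while keeping n fixed mirrors the abstract's scenario of raising the hierarchical level at constant network size: connections concentrate inside ever smaller modules while long-range, cross-module links become rare.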