Neural circuits are able to perform computations under very diverse conditions and requirements. The required computations impose clear constraints on their fine-tuning: a rapid and maximally informative response to stimuli in general requires decorrelated baseline neural activity. Such network dynamics is known as asynchronous-irregular. In contrast, spatio-temporal integration of information requires the maintenance and transfer of stimulus information over extended time periods. This can be realized at criticality, a phase transition where correlations, sensitivity and integration time diverge. Being able to flexibly switch between, or even combine, the above properties in a task-dependent manner would present a clear functional advantage. We propose that cortex operates in a “reverberating regime” because it is particularly favorable for the ready adaptation of computational properties to context and task. This reverberating regime enables cortical networks to interpolate between the asynchronous-irregular and the critical state through small changes in effective synaptic strength or the excitation-inhibition ratio. These changes directly adapt computational properties, including sensitivity, amplification, integration time and correlation length within the local network. We review recent converging evidence that cortex in vivo operates in the reverberating regime, and that various cortical areas have adapted their integration times to their processing requirements. In addition, we propose that neuromodulation enables a fine-tuning of the network, so that local circuits can either decorrelate or integrate, and quench or maintain their input, depending on the task. We argue that this task-dependent tuning, which we call “dynamic adaptive computation,” presents a central organizing principle of cortical networks, and we discuss first experimental evidence.
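The link between effective synaptic strength and integration time can be illustrated with a branching process, the minimal model behind the reverberating-regime picture. The sketch below is illustrative only (parameter values and function names are my own, not from the paper): the intrinsic timescale tau = -dt/ln(m) diverges as the branching ratio m approaches the critical value 1, so small changes in m tune the integration time over orders of magnitude.

```python
import numpy as np

def branching_process(m, h=1.0, steps=10000, rng=None):
    """Simulate a Poisson branching process with branching ratio m
    and external drive h (a minimal stand-in for a recurrent
    cortical network; all values are illustrative)."""
    rng = rng or np.random.default_rng(42)
    a = np.empty(steps)
    a[0] = h / (1.0 - m)          # start near the stationary mean
    for t in range(1, steps):
        a[t] = rng.poisson(m * a[t - 1] + h)
    return a

# The intrinsic timescale follows tau = -dt / ln(m): small changes
# in the branching ratio m near 1 change tau dramatically.
for m in (0.9, 0.99, 0.999):
    print(f"m = {m}: tau = {-1.0 / np.log(m):.1f} time steps")
```

The asynchronous-irregular state corresponds to small m (input is quenched quickly), while criticality corresponds to m = 1 (input reverberates indefinitely); the reverberating regime sits close to, but below, 1.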
Here we provide detailed background information for our work on Bayesian inference of change points in the spread of SARS-CoV-2 and the effectiveness of non-pharmaceutical interventions (Dehning et al., Science, 2020). We outline the general background of Bayesian inference and of SIR-like models. We explain the assumptions that underlie model-based estimates of the reproduction number and compare them to the assumptions that underlie model-free estimates, such as those used in the Robert Koch Institute situation reports. We highlight effects that originate from the two estimation approaches and explain how they may cause differences in the inferred reproduction number. Furthermore, we explore the challenges that arise from data availability, such as publication delays and inconsistent testing, and explain their impact on the time course of inferred case numbers. Along with alternative data sources, this allowed us to cross-check and verify our previous results.
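As a minimal illustration of the model class discussed here, the following sketch implements a discrete-time SIR model with a single change point in the spreading rate (all parameter values are illustrative choices for this example, not the fitted values from the paper):

```python
import numpy as np

def sir_step(S, I, R, beta, gamma, N):
    """One Euler step (dt = 1 day) of the SIR model."""
    new_inf = beta * S * I / N    # new infections
    new_rec = gamma * I           # new recoveries
    return S - new_inf, I + new_inf - new_rec, R + new_rec

# A change point: the spreading rate beta drops at day 30,
# e.g. due to an intervention (values are illustrative).
N, gamma = 1e6, 0.125
S, I, R = N - 100, 100.0, 0.0
cases = []
for day in range(60):
    beta = 0.4 if day < 30 else 0.1
    S, I, R = sir_step(S, I, R, beta, gamma, N)
    cases.append(I)

# Basic reproduction number before and after: R_0 = beta / gamma
print(f"R_0 before: {0.4 / 0.125:.1f}, after: {0.1 / 0.125:.1f}")
```

Bayesian inference as used in the paper goes the other way: given the observed case numbers, it infers the posterior over the change-point times and the spreading rates before and after each change point.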
To date, it is still impossible to sample the entire mammalian brain with single-neuron precision. This forces one to either use spikes (focusing on few neurons) or to use coarse-sampled activity (averaging over many neurons, e.g. LFP). Naturally, the sampling technique impacts inference about collective properties. Here, we emulate both sampling techniques on a spiking model to quantify how they alter observed correlations and signatures of criticality. We discover a general effect: when the inter-electrode distance is small, electrodes sample overlapping regions in space, which increases the correlation between the signals. For coarse-sampled activity, this can produce power-law distributions even for non-critical systems. In contrast, spikes enable one to distinguish the underlying dynamics. This explains why coarse measures and spikes have produced contradictory results in the past, results that are now all consistent with a slightly subcritical regime.

Introduction

For more than two decades, it has been argued that the cortex might operate at a critical point [1-6]. The criticality hypothesis states that by operating at a critical point, neuronal networks could benefit from optimal information-processing properties. Properties maximized at criticality include the correlation length [7], the autocorrelation time [6], the dynamic range [8] and the richness of spatio-temporal patterns [9, 10]. Evidence for criticality in the brain often derives from measurements of neuronal avalanches. Neuronal avalanches are cascades of neuronal activity that spread in space and time. If a system is critical, the probability distribution of avalanche size p(S) follows a power law p(S) ∼ S^(−α) [7, 11]. Such power-law distributions have been observed repeatedly in experiments since they were first reported by Beggs & Plenz in 2003 [1].
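The avalanche picture can be made concrete with a toy branching process: each avalanche starts from a single seed event and spreads with branching ratio m. This is a standard textbook illustration (with arbitrary parameters), not the recording analysis itself; it shows the qualitative difference between subcritical and critical size statistics.

```python
import numpy as np

def avalanche_sizes(m, n_avalanches=5000, cap=10**5, rng=None):
    """Sizes of avalanches triggered by a single seed event in a
    Poisson branching process with branching ratio m (capped at
    `cap` so critical runs terminate)."""
    rng = rng or np.random.default_rng(1)
    sizes = []
    for _ in range(n_avalanches):
        active, size = 1, 1
        while active > 0 and size < cap:
            active = rng.poisson(m * active)
            size += active
        sizes.append(size)
    return np.array(sizes)

# Subcritical (m < 1): sizes have a characteristic scale.
# Critical (m = 1): p(S) ~ S^(-3/2), i.e. scale-free avalanches.
sub = avalanche_sizes(0.8)
crit = avalanche_sizes(1.0)
print(f"mean/max subcritical size: {sub.mean():.1f} / {sub.max()}")
print(f"mean/max critical size:    {crit.mean():.1f} / {crit.max()}")
```

For the subcritical case the mean size converges to 1/(1−m); at criticality the distribution becomes heavy-tailed, which is what avalanche analyses test for.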
However, not all experiments have produced power laws, and the criticality hypothesis remains controversial. It turns out that results for cortical recordings in vivo differ systematically: studies that use what we here call coarse-sampled activity typically produce power-law distributions [1, 12-21]. In contrast, studies that use sub-sampled activity typically do not [14, 22-26]. Coarse-sampled activity includes LFP, M/EEG, fMRI and potentially calcium imaging, while sub-sampled activity is foremost spike recordings. We hypothesize that the apparent contradiction between coarse-sampled (LFP-like) data and sub-sampled (spike) data can be explained by the differences in the recording and analysis procedures.

In general, the analysis of neuronal avalanches is not straightforward. In order to obtain avalanches, one needs to define discrete events. While spikes are discrete events by nature, a coarse-sampled signal has to be converted into a binary form. This conversion hinges on thresholding the signal, which can be problematic [27-30]. Furthermore, events have to be grouped into avalanches, and th...
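The thresholding and binning steps described above can be sketched as follows (the threshold and bin size here are illustrative choices of my own; in practice both strongly affect the resulting avalanche statistics, which is exactly the problem discussed):

```python
import numpy as np

def avalanches_from_events(events, bin_size):
    """Group discrete event times into avalanches: consecutive
    occupied time bins form one avalanche, separated by at least
    one empty bin (the standard binning procedure)."""
    bins = np.bincount(events // bin_size)
    sizes, current = [], 0
    for n in bins:
        if n > 0:
            current += n          # avalanche continues
        elif current > 0:
            sizes.append(current)  # empty bin ends the avalanche
            current = 0
    if current > 0:
        sizes.append(current)
    return sizes

# Thresholding a coarse-sampled signal: keep only crossing times.
rng = np.random.default_rng(0)
signal = rng.normal(size=5000)          # stand-in for an LFP trace
threshold = 3.0 * signal.std()
events = np.flatnonzero(signal > threshold)  # event times in samples

print(avalanches_from_events(events, bin_size=4))
```

Because spikes are already discrete, only the binning choice applies to them, whereas coarse signals additionally inherit all the ambiguity of the threshold.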
Here we present our Python toolbox “MR. Estimator” to reliably estimate the intrinsic timescale from electrophysiological recordings of heavily subsampled systems. Originally intended for the analysis of time series from neuronal spiking activity, our toolbox is applicable to a wide range of systems where subsampling (the inability to observe the whole system in full detail) limits our capability to record. Applications range from epidemic spreading to any system that can be represented by an autoregressive process. In the context of neuroscience, the intrinsic timescale can be thought of as the duration over which any perturbation reverberates within the network; it has been used as a key observable to investigate a functional hierarchy across the primate cortex and serves as a measure of working memory. It is also a proxy for the distance to criticality and quantifies a system’s dynamic working point.
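The principle behind the estimator can be sketched in a few lines: compute the regression coefficients between the activity and its lagged copies, then fit an exponential decay to recover the timescale. This is a simplified, hand-rolled illustration of multistep regression, not the toolbox's actual interface (which additionally handles subsampling bias and consistency checks); the log-linear fit below is a shortcut that a real estimator would replace with a direct exponential fit.

```python
import numpy as np

def multistep_regression_tau(activity, k_max=20, dt=1.0):
    """Estimate the intrinsic timescale tau from a time series:
    lag-k autocorrelation coefficients r_k decay as m**k for an
    autoregressive process, and tau = -dt / ln(m)."""
    a = activity - activity.mean()
    var = (a * a).mean()
    ks = np.arange(1, k_max + 1)
    r = np.array([(a[:-k] * a[k:]).mean() / var for k in ks])
    # Slope of log r_k vs k gives log m (valid only while r_k > 0).
    valid = r > 0
    slope = np.polyfit(ks[valid], np.log(r[valid]), 1)[0]
    return -dt / np.log(np.exp(slope))

# AR(1) test data with known tau = -1/ln(0.9), about 9.5 steps
rng = np.random.default_rng(3)
x = np.zeros(50000)
for t in range(1, len(x)):
    x[t] = 0.9 * x[t - 1] + rng.normal()
print(f"estimated tau: {multistep_regression_tau(x):.1f} steps")
```

The key point of the actual toolbox is that this estimate stays consistent even when only a small, random fraction of the units is observed, which naive autocorrelation fits do not.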
A certain degree of inhibition is a common trait of dynamical networks in nature, ranging from neuronal and biochemical networks to social and technological networks. We study here the role of inhibition in a representative dynamical network model, characterizing the dynamics of random threshold networks with both excitatory and inhibitory links. Varying the fraction of excitatory links has a strong effect on the network's population activity and its sensitivity to perturbation. The average degree K, known to have a strong effect on the dynamics when small, loses its influence on the dynamics as its value increases. Instead, the strength of inhibition becomes the determinant of dynamics and sensitivity, allowing for criticality only in a specific corridor of inhibition. This criticality corridor requires that excitation dominates, while the balance region corresponds to maximum sensitivity to perturbation. We develop mean-field approximations of the population activity and sensitivity and find that the network dynamics is independent of the degree distribution for high K. In a minimal model of an adaptive threshold network, we demonstrate how the dynamics remains robust against changes in topology. This adaptive model can be extended to generate networks with a controllable activity distribution and specific topologies.
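A minimal sketch of such a random threshold network (all parameter values and the zero-threshold update rule below are illustrative choices, not necessarily those of the paper) shows how the excitatory fraction controls the population activity:

```python
import numpy as np

def threshold_network(N=500, K=8, f_exc=0.8, steps=200, rng=None):
    """Random threshold network: each node receives K random inputs
    with weights +1 (excitatory, fraction f_exc) or -1 (inhibitory)
    and fires if its summed input exceeds 0. Returns the mean
    population activity after a transient."""
    rng = rng or np.random.default_rng(7)
    W = np.zeros((N, N))
    for i in range(N):
        inputs = rng.choice(N, size=K, replace=False)
        W[i, inputs] = rng.choice([1.0, -1.0], size=K,
                                  p=[f_exc, 1 - f_exc])
    state = (rng.random(N) < 0.5).astype(float)
    activity = []
    for _ in range(steps):
        state = (W @ state > 0).astype(float)  # threshold update
        activity.append(state.mean())
    return np.mean(activity[steps // 2:])      # discard transient

# Activity as a function of the excitatory fraction
for f in (0.5, 0.7, 0.9):
    print(f"f_exc = {f}: mean activity = {threshold_network(f_exc=f):.2f}")
```

Sweeping f_exc in such a model traces out the corridor described above: strongly excitation-dominated networks saturate, strong inhibition silences the network, and the interesting dynamics sits in between.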