Understanding the origin, nature, and functional significance of complex patterns of neural activity, as recorded by diverse electrophysiological and neuroimaging techniques, is a central challenge in Neuroscience. Such patterns include collective oscillations, emerging out of neural synchronization, as well as highly heterogeneous outbursts of activity interspersed by periods of quiescence, called "neuronal avalanches". Much debate has been generated about the possible scale invariance or criticality of such avalanches and its relevance for brain function. Aiming to shed light on this, here we analyze the large-scale collective properties of the cortex by using a mesoscopic approach, following the Landau-Ginzburg principle of parsimony. Our model is similar to the Wilson-Cowan model of neural dynamics but, crucially, includes stochasticity and space; synaptic plasticity and inhibition are considered as possible regulatory mechanisms. Detailed analyses uncover a phase diagram that includes down-state, synchronous, asynchronous, and up-state phases, and reveal that empirical findings for neuronal avalanches are consistently reproduced by tuning our model to the edge of synchronization. This reveals that the putative criticality of cortical dynamics does not correspond to a quiescent-to-active phase transition, as usually assumed in theoretical approaches, but to a synchronization phase transition, at which incipient oscillations and scale-free avalanches coexist. Furthermore, our model also accounts for up and down states as they occur, e.g., during deep sleep. The present approach constitutes a framework to rationalize the possible collective phases and phase transitions of cortical networks in simple terms, thus helping shed light on basic aspects of brain functioning from a very broad perspective.

Cortical dynamics | Neuronal avalanches | Criticality | Synaptic plasticity

The cerebral cortex exhibits spontaneous activity even in the absence of any task or external stimuli (1-3). A salient aspect of this so-called resting-state dynamics, as revealed by in vivo and in vitro measurements, is that it exhibits outbursts of electrochemical activity, characterized by brief episodes of coherence (during which many neurons fire within a narrow time window) interspersed by periods of relative quiescence, giving rise to collective oscillatory rhythms (4, 5). Shedding light on the origin, nature, and functional meaning of such intricate dynamics is a fundamental challenge in Neuroscience (6). Upon experimentally enhancing the spatio-temporal resolution of activity recordings, Beggs and Plenz made the remarkable finding that synchronized outbursts of neural activity can actually be decomposed into complex spatio-temporal patterns, which they named "neuronal avalanches" (7). The sizes and durations of such avalanches were reported to be distributed as power laws, i.e. to be organized in a scale-free way, limited only by network size (7). Furthermore, they obey finite-size scaling (8), a trademark of scale invariance...
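To make the mesoscopic strategy concrete, the following is a minimal numerical sketch of a single Wilson-Cowan-type unit with demographic (activity-dependent) noise and a slowly recovering synaptic-resource variable. It illustrates the general ingredients named above, not the paper's exact equations; all parameter names and values are assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the published equations): one mesoscopic
# unit whose activity rho is boosted by available synaptic resources R,
# with sqrt(rho) demographic noise and slow resource recovery/depletion.
rng = np.random.default_rng(0)

dt, steps = 1e-3, 200_000
a, b = 1.0, 1.5                  # linear decay and nonlinear saturation (assumed values)
sigma = 0.1                      # demographic-noise amplitude
tau_R, R0, u = 10.0, 1.0, 0.5    # resource recovery time, baseline, depletion rate
h = 1e-3                         # weak external drive

rho, R = 0.01, R0
trace = np.empty(steps)
for t in range(steps):
    drift_rho = (R - a) * rho - b * rho**3 + h
    drift_R = (R0 - R) / tau_R - u * R * rho
    noise = sigma * np.sqrt(max(rho, 0.0)) * rng.normal() * np.sqrt(dt)
    rho = max(rho + drift_rho * dt + noise, 0.0)   # activity cannot go negative
    R += drift_R * dt
    trace[t] = rho

print("mean activity:", trace.mean())
```

Coupling many such units diffusively in space and sweeping the baseline parameters is the kind of exercise that, in the paper, yields the down-state, synchronous, asynchronous, and up-state phases described above.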
The spontaneous emergence of coherent behavior through synchronization plays a key role in neural function, and its anomalies often lie at the basis of pathologies. Here we employ a parsimonious (mesoscopic) approach to study, analytically and computationally, synchronization (Kuramoto) dynamics on the actual human-brain connectome network. We elucidate the existence of a so-far-uncovered intermediate phase, placed between the standard synchronous and asynchronous phases, i.e. between order and disorder. This novel phase stems from the hierarchical modular organization of the connectome. Where one would expect a hierarchical synchronization process, we show that the interplay between structural bottlenecks and quenched intrinsic frequency heterogeneities at many different scales gives rise to frustrated synchronization, metastability, and chimera-like states, resulting in a very rich and complex phenomenology. We uncover the origin of the dynamic freezing behind these features by using spectral graph theory and discuss how the emerging complex synchronization patterns relate to the need for the brain to access, in a robust though flexible way, a large variety of functional attractors and dynamical repertoires without ad hoc fine-tuning to a critical point.
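For reference, a minimal Kuramoto integration on a network can be sketched as below; the topology used here is a generic small-world stand-in (an assumption, not the empirical connectome), and the global order parameter r quantifies the degree of synchronization.

```python
import numpy as np
import networkx as nx

# Kuramoto dynamics on a network (toy stand-in for the connectome).
rng = np.random.default_rng(1)
G = nx.watts_strogatz_graph(200, 6, 0.1, seed=1)   # placeholder topology (assumption)
A = nx.to_numpy_array(G)

N, K, dt, steps = len(A), 1.0, 0.01, 5000
omega = rng.normal(0.0, 1.0, N)          # quenched intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, N)     # random initial phases

for _ in range(steps):
    # d(theta_i)/dt = omega_i + K * sum_j A_ij * sin(theta_j - theta_i)
    coupling = (A * np.sin(theta[None, :] - theta[:, None])).sum(axis=1)
    theta += dt * (omega + K * coupling)

r = np.abs(np.exp(1j * theta).mean())    # Kuramoto order parameter
print("global order parameter r =", round(r, 3))
```

Sweeping the coupling K and replacing the placeholder graph with a hierarchical modular one is the natural way to probe the intermediate, frustrated regime discussed above.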
We revisit the problem of deriving the mean-field values of avalanche exponents in systems with absorbing states. These are well known to coincide with those of unbiased branching processes. Here we show that, for at least four different universality classes (directed percolation, dynamical percolation, the voter model or compact directed percolation class, and the Manna class of stochastic sandpiles), this common result can be obtained by mapping the corresponding Langevin equations describing each of them onto a random walker confined to the origin by a logarithmic potential. We report on the emergence of nonuniversal, continuously varying exponent values stemming from the presence of a small external driving (which may induce avalanche merging) that, to the best of our knowledge, has not been noticed in the past. Many of the other results derived here appear in the literature, derived independently for individual universality classes or for the branching process itself. Still, we believe that a simple and unified perspective such as the one presented here can help (1) clarify the overall picture, (2) underline the superuniversality of the behavior as well as the dependence on external driving, and (3) avoid the common confusion between unbiased branching processes (equivalent to a random walker in a balanced logarithmic potential) and standard (unconfined) random walkers.
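The flavor of the mapping can be illustrated on the mean-field directed-percolation Langevin equation at criticality; the symbols below are generic (the standard Ito change of variables, not necessarily the paper's notation):

\[
\dot{\rho} = -b\,\rho^{2} + \sigma\sqrt{\rho}\,\eta(t), \qquad
y \equiv \frac{2\sqrt{\rho}}{\sigma} \;\Longrightarrow\;
\dot{y} = -\frac{1}{2y} - \frac{b\sigma^{2}}{8}\,y^{3} + \eta(t),
\]

so that near the absorbing state the dynamics reduces to a random walker in the logarithmic potential \(V(y)=\tfrac{1}{2}\ln y\) (plus a confining higher-order term), from which the familiar mean-field avalanche exponents follow:

\[
P(S)\sim S^{-3/2}, \qquad P(T)\sim T^{-2}, \qquad \langle S\rangle\sim T^{2}.
\]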
Avalanches whose sizes and durations are distributed as power laws appear in many contexts, from physics to geophysics and biology. Here, we show that there is a hidden peril in thresholding continuous time series (either empirical or synthetic) for the identification of avalanches. In particular, we consider two possible alternative definitions of avalanche size used, e.g., in the empirical determination of avalanche exponents in the analysis of neural-activity data. By performing analytical and computational studies of an Ornstein-Uhlenbeck process (taken as a guiding example), we show that (i) if relatively large threshold values are employed to determine the beginning and ending of avalanches, and (ii) if, as is sometimes done in the literature, avalanche sizes are defined as the total area (above zero) of the avalanche, then true asymptotic scaling behavior is not seen; instead, the observations are dominated by transient effects. This problem, which we have detected in some recent works, leads to misinterpretations of the resulting scaling regimes.
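A minimal sketch of the procedure under scrutiny, applied to a simulated Ornstein-Uhlenbeck trajectory, is given below; the threshold value and both size definitions (area above the threshold vs. area above zero) are illustrative choices, not prescriptions from the paper.

```python
import numpy as np

# Ornstein-Uhlenbeck trajectory and two avalanche-size definitions:
# (A) area above the threshold, (B) area above zero during the excursion.
rng = np.random.default_rng(2)
dt, steps, tau, sigma = 1e-3, 1_000_000, 1.0, 1.0
x = np.empty(steps)
x[0] = 0.0
for t in range(1, steps):
    x[t] = x[t - 1] - (x[t - 1] / tau) * dt + sigma * np.sqrt(dt) * rng.normal()

theta = 0.5                     # threshold (illustrative; large values distort scaling)
above = x > theta
edges = np.flatnonzero(np.diff(above.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1   # up-crossings / down-crossings

sizes_above_threshold = [np.sum(x[s:e] - theta) * dt for s, e in zip(starts, ends)]
sizes_above_zero      = [np.sum(x[s:e]) * dt         for s, e in zip(starts, ends)]
durations             = [(e - s) * dt                for s, e in zip(starts, ends)]
print(len(durations), "avalanches detected")
```

Comparing the histograms of the two size definitions as the threshold is varied is the quickest way to see the transient-dominated regime the abstract warns about.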
Self-organized bistability (SOB) is the counterpart of "self-organized criticality" (SOC) for systems tuning themselves to the edge of bistability of a discontinuous/first-order phase transition, rather than to the critical point of a continuous/second-order one. The equations defining the theory of SOB turn out to be very similar to a mesoscopic (Landau-Ginzburg) theory recently proposed to describe the dynamics in the cerebral cortex. This theory describes the bistable/oscillating neuronal activity of coupled mesoscopic patches of cortex, homeostatically regulated by short-term synaptic plasticity. However, the theory for cortex dynamics entails significant differences with respect to SOB, including the lack of a (bulk) conservation law, the absence of a perfect separation of timescales between driving and dissipation, and the fact that in the former there is a parameter that controls the overall system state (in blatant contrast with the very idea of self-organization). Here we scrutinize, by employing a combination of analytical and computational tools, the analogies and differences between both theories and explore whether, in some limit, SOB could play an important role in explaining the emergence of the scale-invariant neuronal avalanches observed empirically in the cortex. We conclude that, in the limit of infinitely slow synaptic dynamics, the two theories behave identically, but the separation of timescales required for the self-organization mechanism to be effective does not seem to be biologically plausible. We discuss the key differences between self-organization mechanisms with or without conservation and separated timescales and, in particular, we scrutinize the implications of our findings in neuroscience, hopefully shedding new light on the problem of scale invariance in cortical dynamics.
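Schematically, both theories can be written as a Langevin pair coupling a fast activity field to a slowly varying background field; the form below is generic (symbols and terms are assumptions for illustration, not the published equations), but it localizes where the differences listed above enter:

\[
\partial_t \rho = (E - a)\,\rho + b\,\rho^{2} - c\,\rho^{3} + \sigma\sqrt{\rho}\,\eta(\mathbf{x},t),
\qquad
\partial_t E = \epsilon\,(E_{0} - E) - u\,\rho\,E .
\]

In an SOB-like setting, driving and dissipation must be infinitely separated in timescale (\(\epsilon \to 0^{+}\)), whereas in the cortical theory the analogous rate is fixed by the finite recovery time of synaptic resources; this is precisely the separation of timescales that the abstract argues is not biologically plausible.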
Gene regulatory networks can be successfully modeled as Boolean networks. A much-discussed hypothesis holds that such model networks best reproduce empirical findings if they are tuned to operate at criticality, i.e., at the borderline between their ordered and disordered phases. Critical networks have been argued to confer a number of functional advantages, such as maximal dynamical range, maximal sensitivity to environmental changes, and an excellent tradeoff between stability and flexibility. Here, we study the effect of noise within the context of Boolean networks trained to learn complex tasks under supervision. We verify that quasi-critical networks are the ones that learn in the fastest possible way (even for asynchronous updating rules) and that the larger the task complexity, the smaller the distance to criticality. On the other hand, when additional sources of intrinsic noise in the network states and/or in its wiring pattern are introduced, the optimally performing networks become clearly subcritical. These results suggest that, in order to compensate for inherent stochasticity, regulatory and other types of biological networks might become subcritical rather than critical, all the more so if the task to be performed has limited complexity.
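A minimal random Boolean network with noisy synchronous updates, of the kind such studies build on, can be sketched as follows; network size, in-degree, and the bit-flip noise model are illustrative assumptions, and no training loop is included.

```python
import numpy as np

# Random Boolean network (Kauffman-style) with K inputs per node and
# a small probability of flipping each node's state after updating (intrinsic noise).
rng = np.random.default_rng(3)
N, K, p_noise, steps = 100, 2, 0.01, 200

inputs = np.array([rng.choice(N, size=K, replace=False) for _ in range(N)])
tables = rng.integers(0, 2, size=(N, 2 ** K))        # random Boolean functions
state = rng.integers(0, 2, size=N)

powers = 2 ** np.arange(K)
for _ in range(steps):
    idx = (state[inputs] * powers).sum(axis=1)        # encode each node's input pattern
    state = tables[np.arange(N), idx]                 # synchronous update
    flip = rng.random(N) < p_noise                    # intrinsic state noise
    state = np.where(flip, 1 - state, state)

print("fraction of ON nodes:", state.mean())
```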
The renormalization group is the cornerstone of the modern theory of universality and phase transitions and a powerful tool to scrutinize symmetries and organizational scales in dynamical systems. However, its application to complex networks has proven particularly challenging, owing to correlations between intertwined scales. To date, existing approaches have been based on hidden-geometry hypotheses, which rely on the embedding of complex networks into underlying hidden metric spaces. Here we propose a Laplacian renormalization group diffusion-based picture for complex networks, which is able to identify proper spatiotemporal scales in heterogeneous networks. In analogy with real-space renormalization group procedures, we first introduce the concept of Kadanoff supernodes as block nodes across multiple scales, which helps to overcome the detrimental small-world effects that are responsible for cross-scale correlations. We then rigorously define the momentum-space procedure to progressively integrate out fast diffusion modes and generate coarse-grained graphs. We validate the method through application to several real-world networks, demonstrating its ability to perform network reduction while keeping crucial properties of the systems intact.
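A minimal sketch of the diffusion-based idea is given below: build the graph Laplacian, form the heat kernel exp(-tau L), and group nodes that the kernel already connects more strongly than the fully mixed value at diffusion time tau. The grouping rule and the example graph are simplified stand-ins, not the authors' exact supernode construction.

```python
import numpy as np
import networkx as nx
from scipy.linalg import expm

# Diffusion-based coarse-graining sketch: nodes i, j are grouped when the
# heat kernel K = exp(-tau * L) connects them more strongly than the
# uniform (fully mixed) value 1/N at diffusion time tau.
G = nx.karate_club_graph()                        # small example graph (assumption)
L = nx.laplacian_matrix(G).toarray().astype(float)
N = L.shape[0]

tau = 1.0                                         # diffusion scale (illustrative)
K = expm(-tau * L)                                # heat kernel

# Build supernodes: link i-j whenever K[i, j] exceeds the mixed value,
# then take connected components of that auxiliary graph as blocks.
aux = nx.Graph()
aux.add_nodes_from(range(N))
for i in range(N):
    for j in range(i + 1, N):
        if K[i, j] > 1.0 / N:
            aux.add_edge(i, j)

supernodes = list(nx.connected_components(aux))
print(len(supernodes), "supernodes at tau =", tau)
```

Repeating the construction over a range of tau values gives the progression from fine to coarse descriptions that the momentum-space procedure formalizes.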