Neuronal networks are interesting physical systems in several respects: they operate outside thermodynamic equilibrium [1], a consequence of directed synaptic connections that prohibit detailed balance [2]; they show relaxational dynamics and hence do not conserve but constantly dissipate energy; and they show collective behavior that self-organizes as a result of exposure to structured, correlated inputs and of the interaction among their constituents. Their analysis, however, is complicated by three fundamental properties: neuronal activity is stochastic, the input-output transfer function of single neurons is non-linear, and networks show massive recurrence [3] that gives rise to strong interaction effects. Neuronal networks hence bear similarity to the systems investigated in the field of (quantum) many-particle physics. There, too, (quantum) fluctuations need to be taken into account, and the challenge is to understand collective phenomena that arise from the non-linear interaction of the constituents. Not surprisingly, similar methods can in principle be used to study these two a priori distinct classes of systems [4][5][6][7][8]. So far, however, the techniques employed within theoretical neuroscience are only beginning to harvest this potential. Here we take essential steps towards this goal: concretely, we adapt methods from statistical field theory and functional renormalization group techniques to the study of neuronal dynamics. A central motivation for this work is a coherent presentation of the technical machinery, which is well developed in other fields of physics [9], to study the statistics and in particular the phase transitions of stochastic neuronal systems, and to provide a bridge between the stochastic dynamics and effective descriptions of reduced complexity. The large number of synaptic inputs to each neuron in a network allows the application of mean-field theory [10-12] to explain many dynamical phenomena, among them first-order phase transitions.
The transition from a quiescent to a highly active state in a bistable neuronal network is a prime example of a first-order phase transition in neuronal networks [12]. The activation of attractors embedded in the connectivity of a Hopfield network is a second [13]. Combined with linear response theory, network fluctuations can be described quantitatively in binary [14][15][16] and in spiking networks [17][18][19][20][21][22]. Transitions into oscillatory states via Andronov-Hopf bifurcations also lie within the realm of linear response theory around a mean-field solution [23][24][25][26]. Second-order phase transitions in neuronal networks are more challenging, because the behavior of the system is then dominated by fluctuations on all length scales, so that mean-field theory and its systematic correction by the loopwise expansion break down [4]. But understanding these transitions is highly interesting from a neuroscientific point of view, because networks near such transitions show a large susceptibility to signals. Moreover, signatures of critical states are found ubiquitously in experiments: Par...
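The bistability behind such a first-order transition can be illustrated with a minimal sketch (not taken from the text): a single-population rate model with the self-consistency condition ν = φ(wν + h) for a sigmoidal gain φ. The coupling w, drive h, and grid resolution below are illustrative assumptions chosen to place the system in the bistable regime.

```python
import numpy as np

def phi(x):
    """Sigmoidal single-population gain function."""
    return 1.0 / (1.0 + np.exp(-x))

def fixed_points(w=8.0, h=-4.0, n_grid=10000):
    """Find self-consistent solutions of nu = phi(w*nu + h).

    w: recurrent coupling, h: external drive (illustrative values).
    Scans a grid for sign changes of the residual, then refines each
    bracketed root by bisection.
    """
    nu = np.linspace(0.0, 1.0, n_grid)
    residual = phi(w * nu + h) - nu
    brackets = np.where(np.diff(np.sign(residual)) != 0)[0]
    roots = []
    for i in brackets:
        lo, hi = nu[i], nu[i + 1]
        for _ in range(60):
            mid = 0.5 * (lo + hi)
            if (phi(w * mid + h) - mid) * (phi(w * lo + h) - lo) <= 0:
                hi = mid
            else:
                lo = mid
        roots.append(0.5 * (lo + hi))
    return roots

roots = fixed_points()
print(roots)  # low-activity, unstable, and high-activity fixed points
```

Iterating ν ← φ(wν + h) from different initial conditions settles into either the quiescent or the highly active state; as h is varied, the activity jumps discontinuously between the two branches, the hallmark of a first-order transition.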
The hypervolume subset selection problem consists of finding a subset, with a given cardinality k, of a set of nondominated points that maximizes the hypervolume indicator. This problem arises in selection procedures of evolutionary algorithms for multiobjective optimization, for which practically efficient algorithms are required. In this article, two new formulations are provided for the two-dimensional variant of this problem. The first is a (linear) integer programming formulation that can be solved by solving its linear programming relaxation. The second formulation is a k-link shortest path formulation on a special digraph with the Monge property that can be solved by dynamic programming in [Formula: see text] time. This improves upon the result of [Formula: see text] in Bader (2009), and slightly improves upon the result of [Formula: see text] in Bringmann et al. (2014b), which was developed independently from this work using different techniques. Numerical results are shown for several values of n and k.
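The k-link shortest path view can be made concrete with a plain dynamic program over the points sorted by the first objective: with selected indices i_1 < … < i_k, the dominated area decomposes into strips (x_{i_j} − x_{i_{j−1}})·y_{i_j}. The O(n²k) sketch below omits the article's Monge-property speedup, and the sample point set and reference point (0, 0) are illustrative assumptions.

```python
def hssp_2d(points, k):
    """Max hypervolume of a k-subset of mutually nondominated 2-D points.

    Hypervolume is the area dominated w.r.t. the reference point (0, 0)
    for a maximization problem. Plain O(n^2 k) dynamic program; the
    Monge-based speedup from the article is not used in this sketch.
    """
    # sort by x ascending; nondominance then implies y descending
    pts = sorted(points)
    n = len(pts)
    NEG = float("-inf")
    # f[j][i]: best hypervolume using j points, the rightmost being pts[i]
    f = [[NEG] * n for _ in range(k + 1)]
    for i, (x, y) in enumerate(pts):
        f[1][i] = x * y
    for j in range(2, k + 1):
        for i in range(n):
            xi, yi = pts[i]
            for h in range(i):
                xh, _ = pts[h]
                if f[j - 1][h] > NEG:
                    cand = f[j - 1][h] + (xi - xh) * yi
                    if cand > f[j][i]:
                        f[j][i] = cand
    return max(f[k])

print(hssp_2d([(1, 3), (2, 2), (3, 1)], 2))  # -> 5
```

The inner maximization over h is exactly a k-link shortest (here: longest) path step; the Monge property of the strip weights is what allows the article to replace this inner loop with a faster search.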
This paper derives the Feynman rules for the diagrammatic perturbation expansion of the effective action around an arbitrary solvable problem. The perturbation expansion around a Gaussian theory is well known and composed of one-line-irreducible diagrams only. For expansions around an arbitrary, non-Gaussian problem, we show that a more general class of irreducible diagrams remains, in addition to a second set of diagrams that has no analogue in the Gaussian case. The effective action is central to field theory, in particular to the study of phase transitions, symmetry breaking, effective equations of motion, and renormalization. We exemplify the method on the Ising model, where the effective action amounts to the Gibbs free energy, recovering the Thouless-Anderson-Palmer mean-field theory in a fully diagrammatic derivation. Higher-order corrections follow with only minimal effort compared to existing techniques. Our results further show that the Plefka expansion and the high-temperature expansion are special cases of the general formalism presented here.
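As a concrete touchpoint (a standard textbook result, not code from the paper), the Thouless-Anderson-Palmer equations for an Ising model with couplings J, fields h, and inverse temperature β read m_i = tanh(β h_i + β Σ_j J_ij m_j − β² m_i Σ_j J_ij²(1 − m_j²)), where the last term is the Onsager reaction correction to naive mean field. A damped fixed-point iteration sketch, with an illustrative random symmetric coupling matrix in the high-temperature regime:

```python
import numpy as np

def tap_magnetizations(J, h, beta, n_iter=2000, damping=0.5):
    """Solve the TAP equations by damped fixed-point iteration.

    m_i = tanh(beta*h_i + beta*sum_j J_ij*m_j
               - beta**2 * m_i * sum_j J_ij**2 * (1 - m_j**2))
    The last term is the Onsager reaction correction to naive mean field.
    """
    m = np.zeros(len(h))
    for _ in range(n_iter):
        onsager = beta**2 * m * (J**2 @ (1.0 - m**2))
        m_new = np.tanh(beta * h + beta * (J @ m) - onsager)
        m = damping * m_new + (1.0 - damping) * m
    return m

# illustrative SK-scaled couplings, high temperature (beta = 0.5)
rng = np.random.default_rng(0)
N = 20
J = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))
J = 0.5 * (J + J.T)          # symmetric couplings
np.fill_diagonal(J, 0.0)     # no self-coupling
h = 0.1 * np.ones(N)
m = tap_magnetizations(J, h, beta=0.5)
```

In the diagrammatic language of the paper, the Onsager term is the first correction beyond naive mean field; higher orders of the loopwise/Plefka-type expansion add further terms to the argument of the hyperbolic tangent.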