Learning a task induces connectivity changes in neural circuits, thereby changing their dynamics. To elucidate task-related neural dynamics, we study trained recurrent neural networks. We develop a mean field theory for reservoir computing networks trained to have multiple fixed point attractors. Our main result is that the dynamics of the network's output in the vicinity of attractors is governed by a low-order linear ordinary differential equation. Stability of the resulting ODE can be assessed, predicting training success or failure. As a consequence, networks of rectified linear units (ReLU) and of sigmoidal nonlinearities are shown to have diametrically different properties when it comes to learning attractors. Furthermore, a characteristic time constant, which remains finite at the edge of chaos, offers an explanation of the network's output robustness in the presence of variability of the internal neural dynamics. Finally, the proposed theory predicts state-dependent frequency selectivity in the network response.

Task learning is considered the raison d'être of recurrent neural networks (RNNs), studied in the context of neuroscience and machine learning [1,2]. Yet, theoretical understanding of trained RNN dynamics is lacking, with most of the existing physics literature addressing either random networks, designed networks [3-5], or designed control settings [6-8].

In this Letter, we advance a theory of trained RNN dynamics. We consider an initially random, chaotic network whose output is trained to produce several target values and is then fed back to the network, yielding multiple fixed point attractors. This setting underlies complex tasks that were analyzed phenomenologically using rate models [1,9,10], and it is the subject of attempts [11] to extend the theory to more realistic task-performing networks [12]. Using mean field analysis, we derive the effect of training on the output dynamics in the vicinity of the training targets.
Stability is then assessed, showing that training success depends on the network's nonlinearity. Next, we show that multiple training targets can lead to state-specific frequency selectivity, as observed in task-adapted biological neuronal circuits [13,14]. Finally, the settling time of the output of a perturbed RNN is shown to remain finite at the edge of chaos, in contrast to the internal state dynamics [15,16], for which the settling time is known to diverge [17].

Model and training protocol. Reservoir computing [18,19] is a popular and simple paradigm for training RNNs. A network of neurons with random recurrent connectivity (referred to as the reservoir) is equipped with readout weights trained to produce a desired output, while the rest of the connectivity is kept fixed. Such a restricted training rule implies that training affects reservoir dynamics only via feedback connections from the output [19,20]. The dynamics [17,20-22] are given by

ẋ = −x + W r + w_FB z + w_in u,

with state x ∈ R^N representing the synaptic input, and the firing rate given by r(t) = φ(x(t)), whe...
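The reservoir dynamics above can be simulated directly. The sketch below is a minimal illustration, not the Letter's exact setup: the network size N, gain g, and the choice of tanh as the nonlinearity φ are assumptions, and the readout weights w_out are left random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the Letter).
N, g, dt, T = 300, 1.5, 0.05, 2000

W = g * rng.normal(0.0, 1.0 / np.sqrt(N), (N, N))  # random reservoir connectivity
w_fb = rng.uniform(-1.0, 1.0, N)                   # feedback weights from output z
w_in = rng.uniform(-1.0, 1.0, N)                   # input weights
w_out = rng.normal(0.0, 1.0 / np.sqrt(N), N)       # readout (would be trained)

x = rng.normal(0.0, 0.5, N)                        # synaptic input state
u = 0.0                                            # no external input here
for _ in range(T):
    r = np.tanh(x)                                 # firing rates, phi = tanh assumed
    z = w_out @ r                                  # scalar output
    # Euler step of  x' = -x + W r + w_FB z + w_in u
    x = x + dt * (-x + W @ r + w_fb * z + w_in * u)

print(z)  # output after transient
```

In the trained setting, w_out would instead be fit (e.g., by least squares) so that z settles at the target values, and the feedback term w_fb z is what reshapes the reservoir's attractor landscape.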
Biological networks are often heterogeneous in their connectivity pattern, with degree distributions featuring a heavy tail of highly connected hubs. The implications of this heterogeneity for dynamical properties are a topic of much interest. Here we show that interpreting topology as a feedback circuit can provide novel insights into dynamics. Based on the observation that in finite networks a small number of hubs have a disproportionate effect on the entire system, we construct an approximation by lumping these nodes into a single effective hub, which acts as a feedback loop with the rest of the nodes. We use this approximation to study the dynamics of networks with scale-free degree distributions, focusing on their probability of convergence to fixed points. We find that the approximation preserves convergence statistics over a wide range of settings. Our mapping provides a parametrization of scale-free topology that is predictive at the ensemble level and also retains properties of individual realizations. Specifically, outgoing hubs have an organizing role that can drive the network to convergence, in analogy to suppression of chaos by an external drive. In contrast, incoming hubs have no such property, resulting in a marked difference between the behavior of networks with outgoing vs. incoming scale-free degree distributions. Combining feedback analysis with mean field theory predicts a transition between convergent and divergent dynamics, which is corroborated by numerical simulations. Furthermore, these results highlight the effect of a handful of outlying hubs, rather than of the connectivity distribution law as a whole, on network dynamics.
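The convergence test described above can be sketched numerically. The following is an illustrative setup, not the paper's exact protocol: the power-law exponent, weight scaling, and tanh rate dynamics are all assumptions, chosen only to show how one samples a scale-free out-degree network and checks whether its dynamics settle to a fixed point.

```python
import numpy as np

rng = np.random.default_rng(1)
N, gamma = 500, 2.5                     # assumed size and power-law exponent

# Sample out-degrees from a truncated power law: a few nodes become hubs.
deg = np.clip(rng.zipf(gamma, N), 1, N - 1)

# Build a random graph with these out-degrees and Gaussian weights.
J = np.zeros((N, N))
for i, k in enumerate(deg):
    targets = rng.choice(np.delete(np.arange(N), i), size=k, replace=False)
    J[targets, i] = rng.normal(0.0, 1.0 / np.sqrt(deg.mean()), k)

# Simulate rate dynamics and test for convergence to a fixed point.
x = rng.normal(0.0, 1.0, N)
converged = False
for _ in range(5000):
    x_new = x + 0.05 * (-x + J @ np.tanh(x))
    if np.max(np.abs(x_new - x)) < 1e-9:
        converged = True
        break
    x = x_new

print(converged, np.max(np.abs(J @ np.tanh(x) - x)))  # residual is small if converged
```

Repeating this over many realizations, and comparing networks whose heavy tail sits on the outgoing vs. incoming degrees, gives the convergence statistics the lumped-hub approximation is meant to reproduce.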
Abstract. We show that the spacing between eigenvalues of the discrete 1D Hamiltonian with arbitrary bounded potentials, and with Dirichlet or Neumann boundary conditions, is bounded away from zero. We prove an explicit lower bound, given by Ce^(−bN), where N is the lattice size and C and b are finite constants. In particular, the spectra of such Hamiltonians have no degenerate eigenvalues. As an application, we show that, to leading order in the coupling, the solution of a nonlinearly perturbed Anderson model in one dimension (on the lattice) remains exponentially localized, in probability and in an average sense, for initial conditions given by a unique eigenfunction of the linear problem. We also bound the derivative of the eigenfunctions of the linear Anderson model with respect to a potential change. Eigenvalue repulsion estimates...
Manifold attractors are a key framework for understanding how continuous variables, such as position or head direction, are encoded in the brain. In this framework, the variable is represented along a continuum of persistent neuronal states which forms a manifold attractor. Neural networks with symmetric synaptic connectivity that can implement manifold attractors have become the dominant model in this framework. In addition to a symmetric connectome, these networks imply homogeneity of individual-neuron tuning curves and symmetry of the representational space; these features are largely inconsistent with neurobiological data. Here, we develop a theory for computations based on manifold attractors in trained neural networks and show how these manifolds can cope with diverse neuronal responses, imperfections in the geometry of the manifold, and a high level of synaptic heterogeneity. In such heterogeneous trained networks, a continuous representational space emerges from a small set of stimuli used for training. Furthermore, we find that the network response to external inputs depends on the geometry of the representation and on the level of synaptic heterogeneity in an analytically tractable and interpretable way. Finally, we show that an overly complex geometry of the neuronal representation impairs the attractiveness of the manifold and may lead to its destabilization. Our framework reveals that continuous features can be represented in the recurrent dynamics of heterogeneous networks without assuming unrealistic symmetry. It suggests that the representational space of putative manifold attractors in the brain dictates the dynamics in their vicinity.