The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity, and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity. Crucially, our analysis is not limited to the stability of fixed geometric objects in state space (e.g., points, lines, planes); it addresses the stability of state trajectories, which may be complex and time-varying.
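To make the contraction idea concrete, here is a minimal numerical sketch, not the paper's exact model: a standard firing-rate RNN of the form dx/dt = -x + W tanh(x) + u(t). Because |tanh'| <= 1, constraining the largest singular value of W below 1 is one sufficient condition for contraction, so trajectories started from different initial states converge to the same input-driven trajectory. All parameter values below are illustrative assumptions.

    # Minimal sketch (illustrative, not the paper's exact model): a
    # firing-rate RNN dx/dt = -x + W*tanh(x) + u(t) is contracting when
    # sigma_max(W) < 1, so any two trajectories converge to each other.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 50                                      # number of units (arbitrary)
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= 0.9 / np.linalg.norm(W, 2)             # enforce sigma_max(W) = 0.9 < 1

    def u(t):                                   # shared time-varying input
        return np.sin(t) * np.ones(n)

    def step(x, t, dt=0.01):                    # forward-Euler integration step
        return x + dt * (-x + W @ np.tanh(x) + u(t))

    x1, x2 = rng.standard_normal(n), rng.standard_normal(n)
    for k in range(5001):
        if k % 1000 == 0:
            print(f"t={k*0.01:5.1f}  ||x1 - x2|| = {np.linalg.norm(x1 - x2):.3e}")
        x1, x2 = step(x1, k * 0.01), step(x2, k * 0.01)

The printed distance shrinks roughly as exp(-0.1 t), the contraction rate implied by 1 - sigma_max(W) in this sketch.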
Working memory has long been thought to arise from sustained spiking/attractor dynamics. However, recent work has suggested that short-term synaptic plasticity (STSP) may help maintain attractor states over gaps in time with little or no spiking. To determine whether STSP confers additional functional advantages, we trained artificial recurrent neural networks (RNNs) with and without STSP to perform an object working memory task. We found that RNNs both with and without STSP were able to maintain memories despite distractors presented in the middle of the memory delay. However, RNNs with STSP showed activity similar to that seen in the cortex of a non-human primate (NHP) performing the same task, whereas RNNs without STSP showed activity that was less brain-like. Further, RNNs with STSP were more robust to network degradation than RNNs without STSP. These results show that STSP can not only help maintain working memories but also make neural networks more robust and brain-like.
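The trained networks themselves are not reproduced here; as a hedged illustration of the kind of STSP often added to RNN units, the sketch below uses classic Mongillo-style facilitation/depression equations, where the effective synaptic efficacy is the product u*x of a facilitation variable and a depression variable. All constants are illustrative assumptions.

    # Hedged sketch of short-term synaptic plasticity (STSP): facilitation
    # variable u and depression variable x per synapse; the effective
    # efficacy u*x carries a 'hidden' memory trace after spiking stops.
    U, tau_f, tau_d, dt = 0.2, 1.5, 0.3, 0.01   # illustrative constants (s)

    def stsp_step(u, x, rate):
        """One Euler step of facilitation (u) and depression (x)."""
        du = (U - u) / tau_f + U * (1.0 - u) * rate
        dx = (1.0 - x) / tau_d - u * x * rate
        return u + dt * du, x + dt * dx

    u, x = U, 1.0
    for k in range(301):                        # 3 s of simulated time
        if k % 50 == 0:
            print(f"t={k*dt:4.2f}s  u={u:.2f}  x={x:.2f}  efficacy={u*x:.2f}")
        rate = 30.0 if k < 100 else 0.0         # 1 s presynaptic burst, then silence
        u, x = stsp_step(u, x, rate)

Because u decays slowly relative to x's recovery (tau_f > tau_d), the efficacy stays elevated for a while after the burst ends, which is how STSP can bridge delay-period gaps with little or no spiking.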
Neural responses often vary between identical trials 1,2. This can be due to a variety of factors, including variability in membrane potentials, inputs, plastic changes due to recent experience, and so on. Yet, in spite of these fluctuations, brain networks must achieve computational stability: despite being "knocked around" by plasticity and noise, the behavioral output of the brain on two experimentally identical trials needs to be similar. How is this stability achieved?

Stability has played a central role in computational neuroscience since the 1980s, with the advent of models of associative memory that stored neural activation patterns as stable point attractors 3-7, although researchers had been thinking about the brain's stability since as early as the 1950s 8. The vast majority of this work is concerned with the stability of activity around points, lines, or planes in neural state space 9,10. However, recent neurophysiological studies have revealed that in many cases, single-trial neural activity is highly dynamic, and therefore potentially inconsistent with a static attractor viewpoint 1,11. Consequently, there have been a number of recent studies, both computational and experimental, which focus more broadly on the stability of neural trajectories 12,13.

While these studies provide important empirical results and intuitions, they do not offer analytical insight into mechanisms for achieving stable trajectories in recurrent neural networks. Nor do they offer insight into achieving such stability in plastic (or multi-modal) networks. Here we focus on finding conditions that guarantee stable trajectories in recurrent neural networks and thus shed light on how stable trajectories might be achieved in vivo.

Regardless of initial conditions, the population activity of a contracting network will converge towards the same trajectory, thus achieving stable dynamics (Figure 1). One way to understand contraction is to represent the state of a network at a given time as a point in the network's 'state-space', for instance the space spanned by the possible firing rates of all the network's neurons. This state-space has the same number of dimensions as the network has neurons.
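Formally, a system dx/dt = f(x, t) is contracting in the identity metric with rate lambda if the symmetric part of its Jacobian J = df/dx satisfies max eig((J + J^T)/2) <= -lambda for all states and times; more general Riemannian metrics replace J with a transformed Jacobian. A minimal numerical check of this condition for the same family of sketch RNNs used above (again with illustrative parameters):

    # Minimal numerical check (identity metric, illustrative parameters):
    # contraction holds if mu2(J) = max eig((J + J^T)/2) is uniformly
    # negative for the Jacobian J(x) = -I + W diag(1 - tanh(x)^2).
    import numpy as np

    rng = np.random.default_rng(1)
    n = 50
    W = rng.standard_normal((n, n)) / np.sqrt(n)
    W *= 0.9 / np.linalg.norm(W, 2)             # sigma_max(W) = 0.9

    def jacobian(x):
        return -np.eye(n) + W @ np.diag(1.0 - np.tanh(x) ** 2)

    def mu2(J):
        """Logarithmic 2-norm: largest eigenvalue of the symmetric part."""
        return np.linalg.eigvalsh(0.5 * (J + J.T)).max()

    # Sample random states; mu2 < 0 at every sampled state is evidence of
    # contraction (here it is guaranteed: mu2 <= -1 + sigma_max(W) = -0.1).
    worst = max(mu2(jacobian(3.0 * rng.standard_normal(n))) for _ in range(100))
    print(f"max mu2 over sampled states: {worst:.3f} (contracting if < 0)")

Here the bound mu2 <= -1 + sigma_max(W) certifies contraction at every state without simulating any trajectories.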
We prove that Riemannian contraction in a supervised learning setting implies generalization. Specifically, we show that if an optimizer is contracting in some Riemannian metric with rate λ > 0, it is uniformly algorithmically stable with rate O(1/(λn)), where n is the number of labelled examples in the training set. The results hold for stochastic and deterministic optimization, in both continuous and discrete time, and for convex and non-convex loss surfaces. The associated generalization bounds reduce to well-known results in the particular case of gradient descent over convex or strongly convex loss surfaces. They can be shown to be optimal in certain linear settings, such as kernel ridge regression under gradient flow.
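With generic constants (the exact problem-dependent factors are in the paper and not reproduced here), the chain of implications can be sketched as follows: contraction of the optimizer dynamics in a metric M implies that training on datasets differing in one of n examples yields nearby parameter trajectories, which is uniform algorithmic stability and hence bounds the generalization gap. In LaTeX, with L standing in for an assumed Lipschitz constant of the loss:

    % Optimizer dynamics \dot{\theta} = g(\theta, S) contracting with rate
    % \lambda > 0 in a Riemannian metric M(\theta, t):
    \Big(\frac{\partial g}{\partial \theta}\Big)^{\top} M
      + M \,\frac{\partial g}{\partial \theta} + \dot{M} \preceq -2\lambda M
    % For datasets S, S' of size n differing in a single example, the two
    % trajectories then stay O(1/(\lambda n)) apart, giving uniform
    % stability and, via standard stability-to-generalization arguments,
    \mathbb{E}\big[\, R(\theta_S) - \widehat{R}_S(\theta_S) \,\big]
      \le \epsilon_{\mathrm{stab}} = O\Big(\frac{L}{\lambda n}\Big)

For gradient flow on a μ-strongly convex loss, taking M = I and λ = μ recovers the familiar O(1/(μn)) stability rate for that setting.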