The brain consists of many interconnected networks with time-varying, partially autonomous activity. There are multiple sources of noise and variation, yet activity has to eventually converge to a stable, reproducible state (or sequence of states) for its computations to make sense. We approached this problem from a control-theory perspective by applying contraction analysis to recurrent neural networks. This allowed us to find mechanisms for achieving stability in multiple connected networks with biologically realistic dynamics, including synaptic plasticity and time-varying inputs. These mechanisms included inhibitory Hebbian plasticity, excitatory anti-Hebbian plasticity, synaptic sparsity and excitatory-inhibitory balance. Our findings shed light on how stable computations might be achieved despite biological complexity.

Neural responses often vary between identical trials 1,2. This can be due to a variety of factors, including variability in membrane potentials, inputs, plastic changes due to recent experience, and so on. Yet, in spite of these fluctuations, brain networks must achieve computational stability: despite being "knocked around" by plasticity and noise, the behavioral output of the brain on two experimentally identical trials needs to be similar. How is this stability achieved?
Stability has played a central role in computational neuroscience since the 1980s, with the advent of models of associative memory that stored neural activation patterns as stable point attractors 3-7, although researchers had been thinking about the brain's stability since as early as the 1950s 8. The vast majority of this work is concerned with the stability of activity around points, lines, or planes in neural state space 9,10. However, recent neurophysiological studies have revealed that in many cases, single-trial neural activity is highly dynamic, and therefore potentially inconsistent with a static attractor viewpoint 1,11. Consequently, there have been a number of recent studies, both computational and experimental, which focus more broadly on the stability of neural trajectories 12,13.

While these studies provide important empirical results and intuitions, they do not offer analytical insight into mechanisms for achieving stable trajectories in recurrent neural networks. Nor do they offer insight into achieving such stability in plastic (or multi-modal) networks. Here we focus on finding conditions that guarantee stable trajectories in recurrent neural networks, and thus shed light on how stable trajectories might be achieved in vivo.

Regardless of initial conditions, the population activity of a contracting network will converge towards the same trajectory, thus achieving stable dynamics (Figure 1). One way to understand contraction is to represent the state of a network at a given time as a point in the network's 'state-space', for instance the space spanned by the possible firing rates of all the network's neurons. This state-space has the same num...
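To make this picture concrete, here is a minimal statement of the standard contraction condition from nonlinear control theory, given in the simplest identity-metric form (the analysis in the paper may use a more general metric). If the symmetric part of the system's Jacobian is uniformly negative definite, the distance between any two trajectories shrinks exponentially, regardless of where they start:

$$
\dot{x} = f(x,t), \qquad
\tfrac{1}{2}\!\left(\frac{\partial f}{\partial x} + \frac{\partial f}{\partial x}^{\!\top}\right) \preceq -\lambda I \;\;\text{for all } x,\, t
\;\;\Longrightarrow\;\;
\lVert \delta x(t) \rVert \le \lVert \delta x(0) \rVert \, e^{-\lambda t},
$$

where $\delta x$ denotes the virtual displacement between two nearby trajectories and $\lambda > 0$ is the contraction rate. This exponential convergence of trajectories, independent of initial conditions, is precisely the behavior sketched in Figure 1.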