Neural activity during cognitive tasks exhibits complex dynamics that flexibly encode task-relevant variables. Chaotic recurrent networks, which spontaneously generate rich dynamics, have been proposed as a model of cortical computation during cognitive tasks. However, existing methods for training these networks are either biologically implausible or require a continuous, real-time error signal to guide learning. Here we show that a biologically plausible learning rule can train such recurrent networks, guided solely by delayed, phasic rewards at the end of each trial. Networks endowed with this learning rule can successfully learn nontrivial tasks requiring flexible (context-dependent) associations, memory maintenance, nonlinear mixed selectivities, and coordination among multiple outputs. The resulting networks replicate complex dynamics previously observed in animal cortex, such as dynamic encoding of task features and selective integration of sensory inputs. We conclude that recurrent neural networks offer a plausible model of cortical dynamics during both learning and performance of flexible behavior.
DOI: http://dx.doi.org/10.7554/eLife.20899.001
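The reward-driven scheme this abstract describes can be illustrated with a toy sketch. The code below is not the paper's actual rule: it applies a simple weight-perturbation update to a linear readout of a fixed chaotic network, with a scalar reward delivered only at the end of each trial. All names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed, randomly connected recurrent network in the chaotic regime (g > 1).
N, T, g = 50, 50, 1.5
W = rng.normal(0.0, g / np.sqrt(N), (N, N))  # recurrent weights (not trained here)
x0 = rng.normal(0.0, 1.0, N)                 # fixed initial state

def final_rates(W, x0, T):
    """Run the network for T steps and return the final firing rates."""
    x = x0.copy()
    for _ in range(T):
        x = W @ np.tanh(x)
    return np.tanh(x)

r_end = final_rates(W, x0, T)   # trial dynamics are deterministic here

target = 0.5                    # desired readout value at the end of a trial
w_out = np.zeros(N)             # trainable readout weights
lr, sigma = 0.5, 0.1            # learning rate, exploration-noise scale
R_bar = None                    # running-average reward (baseline)

for trial in range(300):
    xi = sigma * rng.normal(size=N)   # exploratory perturbation of the readout
    y = (w_out + xi) @ r_end          # output produced under the perturbation
    R = -(y - target) ** 2            # delayed, phasic reward at trial end only
    if R_bar is None:
        R_bar = R
    # Reward above baseline reinforces the perturbation; below, reverses it.
    w_out += lr * (R - R_bar) * xi
    R_bar = 0.95 * R_bar + 0.05 * R

err_before = (0.0 - target) ** 2             # readout starts at zero
err_after = (w_out @ r_end - target) ** 2    # error shrinks over training
```

The key property shared with the abstract's setting is that no real-time error signal is available: the network explores via perturbations during the trial and learns only from a single scalar reward delivered afterwards.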
The open-endedness of a system is often defined as a continual production of novelty. Here we pin down this concept more fully by defining several types of novelty that a system may exhibit, classified as variation, innovation, and emergence. We then provide a meta-model for including levels of structure in a system's model. From there, we define an architecture suitable for building simulations of open-ended novelty-generating systems and discuss how previously proposed systems fit into this framework. We discuss the design principles applicable to those systems and close with some challenges for the community.
Recent evidence suggests that neurons in primary sensory cortex arrange into competitive groups, representing stimuli by their joint activity rather than as independent feature analysers. A possible explanation for these results is that sensory cortex implements attractor dynamics, although this proposal remains controversial. Here we report that fast attractor dynamics emerge naturally in a computational model of a patch of primary visual cortex endowed with realistic plasticity (at both feedforward and lateral synapses) and mutual inhibition. When exposed to natural images (but not random pixels), the model spontaneously arranges into competitive groups of reciprocally connected, similarly tuned neurons, while developing realistic, orientation-selective receptive fields. Importantly, the same groups are observed in both stimulus-evoked and spontaneous (stimulus-absent) activity. The resulting network is inhibition-stabilized and exhibits fast, non-persistent attractor dynamics. Our results suggest that realistic plasticity, mutual inhibition and natural stimuli are jointly necessary and sufficient to generate attractor dynamics in primary sensory cortex.
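The formation of competitive, similarly tuned groups described above can be caricatured in a few lines. In this sketch (not the paper's model) a hard winner-take-all stands in for mutual inhibition, a simple Hebbian-style rule stands in for realistic plasticity, and two noisy input clusters stand in for recurring natural-image features; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two stimulus "clusters" standing in for recurring natural-image features.
prototypes = np.array([[1.0, 0.0],
                       [0.0, 1.0]])

n_units, n_inputs = 4, 2
W = rng.uniform(0.2, 0.8, (n_units, n_inputs))   # feedforward weights
W /= np.linalg.norm(W, axis=1, keepdims=True)    # keep weight vectors normalized

for _ in range(2000):
    # Noisy sample from a randomly chosen cluster.
    x = prototypes[rng.integers(2)] + 0.05 * rng.normal(size=n_inputs)
    winner = np.argmax(W @ x)                    # mutual inhibition as hard WTA
    # Hebbian-style update: the winning unit's weights move toward the stimulus.
    W[winner] += 0.05 * (x - W[winner])
    W[winner] /= np.linalg.norm(W[winner])

# Cosine similarity between each unit's weights and each cluster prototype:
# each cluster ends up claimed by a distinct, sharply tuned unit.
similarity = W @ prototypes.T / np.linalg.norm(prototypes, axis=1)
```

Competition is what drives the specialization: because only the winning unit updates, different units come to represent different stimuli, echoing (in a much-reduced form) the competitive groups the abstract reports.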
We attempt to provide a comprehensive answer to the question of whether, and when, an arrow of complexity emerges in Darwinian evolution. We note that this expression can be interpreted in different ways, including a passive, incidental growth, or a pervasive bias towards complexification. We argue at length that an arrow of complexity does indeed occur in evolution, which can be most reasonably interpreted as the result of a passive trend rather than a driven one. What, then, is the role of evolution in the creation of this trend, and under which conditions will it emerge? In the later sections of this article we point out that when certain proper conditions (which we attempt to formulate in a concise form) are met, Darwinian evolution predictably creates a sustained trend of increase in maximum complexity (that is, an arrow of complexity) that would not be possible without it; but if they are not, evolution will not only fail to produce an arrow of complexity, but may actually prevent any increase in complexity altogether. We conclude that, with regard to the growth of complexity, evolution is very much a double-edged sword.