Survival in the natural environment often relies on an animal's ability to quickly and accurately predict the trajectories of moving objects. Motion prediction is primarily understood in the context of translational motion, but the environment contains other types of behaviorally salient motion, such as that produced by approaching or receding objects. However, the neural mechanisms that detect and predictively encode these motion types remain unclear. Here, we address these questions in the macaque monkey retina. We report that four of the parallel output pathways in the primate retina encode predictive information about the future trajectory of moving objects. Predictive encoding occurs both for translational motion and for higher-order motion patterns found in natural vision. Further, predictive encoding of these motion types is nearly optimal, with transmitted information approaching the theoretical limit imposed by the stimulus itself. These findings argue that natural selection has emphasized encoding of information that is relevant for anticipating future properties of the environment.
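The "theoretical limit imposed by the stimulus itself" is the predictive information the stimulus past carries about its own future, which has a closed form in the Gaussian case and so is easy to illustrate. The sketch below is not the paper's analysis; it simulates a hypothetical Ornstein–Uhlenbeck stimulus trajectory with an assumed correlation time and compares an empirical estimate of the past–future mutual information against the analytic bound −½ ln(1 − ρ²) for jointly Gaussian variables:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck process as a stand-in stimulus trajectory
# (tau is an assumed correlation time, in units of the time step).
tau, dt, n = 50.0, 1.0, 100_000
x = np.zeros(n)
for t in range(1, n):
    x[t] = x[t - 1] - (dt / tau) * x[t - 1] \
           + np.sqrt(2 * dt / tau) * rng.standard_normal()

# For jointly Gaussian scalars with correlation rho,
# the mutual information is I = -0.5 * ln(1 - rho^2) (in nats).
lag = 20  # prediction horizon, in time steps
rho = np.corrcoef(x[:-lag], x[lag:])[0, 1]
info_empirical = -0.5 * np.log(1 - rho**2)

# Analytic bound: an OU process has rho(Delta) = exp(-Delta / tau).
info_analytic = -0.5 * np.log(1 - np.exp(-2 * lag * dt / tau))
```

For a Gaussian stimulus no encoder can transmit more predictive information than this bound, which is what makes the reported near-optimality a strong statement.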
One of the most intriguing properties of recurrent neural circuits is their flexibility. This flexibility appears to extend far beyond the ability to learn: it includes the ability to apply learned procedures to novel situations. Here, we report that this flexibility arises from the synergistic interplay between recurrent mutual excitation and recurrent mutual inhibition. Specifically, we show that mutual inhibition is critical for expanding the functionality of the circuit far beyond what feedback inhibition alone can accomplish. Using dynamical systems theory and bifurcation analysis, we show that mutual inhibition doubles the number of cusp bifurcations in small neural circuits. As a concrete example, we build a simulation model of a class of functional motifs we call Coupled Recurrent inhibitory and Recurrent excitatory Loops (CRIRELs). These CRIRELs have the advantage of being multifunctional, performing a plethora of functions, including decision making, switching, toggling, and central pattern generation, depending solely on the input type. We then use bifurcation theory to show how mutual inhibition gives rise to this broad repertoire of possible functions. Finally, we demonstrate that this trend also holds for larger networks, and that mutual inhibition greatly expands the amount of information a recurrent network can hold.
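To make the connection between mutual inhibition and multifunctionality concrete, here is a generic two-unit rate model with self-excitation and mutual inhibition; it is a toy illustration with assumed parameters, not the CRIREL model from the paper. The same circuit acts as a decision maker (winner-take-all amplification of a small input bias) and as a memory/switch (the chosen state persists through hysteresis even when the bias reverses):

```python
import numpy as np

def simulate(I1, I2, x0=(0.0, 0.0), w_exc=2.0, w_inh=3.0,
             steps=2000, dt=0.01):
    """Two rate units with self-excitation (w_exc) and mutual
    inhibition (w_inh); all parameter values are illustrative."""
    f = lambda u: 1.0 / (1.0 + np.exp(-4 * (u - 0.5)))  # sigmoidal gain
    x1, x2 = x0
    for _ in range(steps):
        dx1 = -x1 + f(w_exc * x1 - w_inh * x2 + I1)
        dx2 = -x2 + f(w_exc * x2 - w_inh * x1 + I2)
        x1, x2 = x1 + dt * dx1, x2 + dt * dx2
    return x1, x2

# Decision: a small input bias (0.3 vs 0.2) is amplified into a
# categorical difference between the two units (winner-take-all).
a1, a2 = simulate(I1=0.3, I2=0.2)

# Switch/memory: starting from the decided state, a modestly
# reversed bias does NOT flip the outcome (hysteresis).
b1, b2 = simulate(I1=0.2, I2=0.3, x0=(a1, a2))
```

The bistability here comes from the mutual-inhibition loop destabilizing the symmetric state; which function the circuit expresses depends only on the input pattern, in the spirit of the abstract's claim.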
The ability to decide swiftly and accurately in an urgent scenario is crucial for an organism's survival. The neural mechanisms underlying perceptual decisions and the trade-off between speed and accuracy have been studied extensively over the past few decades. Among several theoretical models, the attractor neural network model has successfully captured both behavioral and neuronal data observed in many decision experiments. However, a recent experimental study revealed additional details that were not considered in the original attractor model. In particular, it showed that inhibitory neurons in the posterior parietal cortex of mice are as selective for decision outcomes as excitatory neurons, whereas the original attractor model assumes inhibitory neurons to be unselective. Here, we investigate a more general attractor model with selective inhibition and analyze in detail how the computational ability of the network changes with selectivity. We propose a reduced version of the selective model and show that selectivity adds a time-varying component to the energy landscape. This time dependence allows the selective model to integrate information carefully in the initial stages and then converge quickly to an attractor once the choice is clear. As a result, the selective model achieves a more efficient speed-accuracy trade-off than is attainable by unselective models.
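The idea of a time-varying energy landscape can be sketched with a minimal one-dimensional decision variable in a double-well potential whose barrier deepens over time; this is a caricature with assumed parameters, not the paper's reduced model. Early on the landscape is nearly flat, so the variable integrates noisy evidence; as the wells deepen, it commits rapidly to one attractor:

```python
import numpy as np

rng = np.random.default_rng(1)

def decide(drift, c_final=4.0, ramp=0.5, dt=0.01, T=20.0, noise=0.2):
    """Decision variable x in E(x, t) = -c(t)/2 x^2 + x^4/4 - drift*x.
    c(t) ramps from 0 (flat landscape: careful evidence integration)
    to c_final (deep double well: rapid commitment to an attractor).
    All parameters are illustrative."""
    x, t = 0.0, 0.0
    while t < T:
        c = c_final * min(1.0, ramp * t)
        x += dt * (c * x - x**3 + drift) \
             + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return x

# With a weak positive drift (the "correct" answer is x > 0),
# most trials commit to the correct attractor.
choices = np.array([decide(drift=0.3) for _ in range(50)])
accuracy = np.mean(choices > 0)
```

Delaying the bifurcation lets noise average out before commitment, which is one intuition for how a time-dependent landscape can improve the speed-accuracy trade-off.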
Successful behavior relies on the ability to use information obtained from past experience to predict what is likely to occur in the future. A salient example of predictive encoding comes from the vertebrate retina, where neural circuits encode information that can be used to estimate the trajectory of a moving object. Predictive computations should be a general property of sensory systems, but the features needed to identify these computations across neural systems are not well understood. Here, we identify several properties of predictive computations in the primate retina that likely generalize across sensory systems. These properties include computing the derivative of incoming signals, sparse signal integration, and delayed response suppression. These findings provide a deeper understanding of how the brain carries out predictive computations and identify features that can be used to recognize these computations throughout the brain.
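Why a derivative stage supports prediction follows from a first-order Taylor expansion: x(t + Δ) ≈ x(t) + Δ·x′(t), so a circuit that adds a scaled derivative to its input linearly extrapolates the trajectory. The sketch below illustrates this on a smooth synthetic signal (an idealized derivative via `np.gradient`, not a model of the retinal circuit):

```python
import numpy as np

# Smooth synthetic "trajectory" sampled at 10 ms resolution.
dt = 0.01
t = np.arange(0, 2, dt)
x = np.sin(2 * np.pi * t)

# Idealized derivative stage (a biphasic temporal filter plays
# this role in neural circuits).
deriv = np.gradient(x, dt)

# Linear extrapolation: x(t) + horizon * x'(t) approximates x(t + horizon).
horizon = 0.05  # predict 50 ms ahead (assumed value)
prediction = x + horizon * deriv

actual_future = np.sin(2 * np.pi * (t + horizon))
err = np.max(np.abs(prediction - actual_future))          # extrapolation error
naive_err = np.max(np.abs(x - actual_future))             # "no prediction" error
```

The derivative-based estimate tracks the future signal far more closely than simply reporting the current value, which is the computational benefit the abstract attributes to this feature.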