Understanding how the nervous system achieves reliable performance using unreliable components is important for many disciplines of science and engineering, in part because it can suggest ways to lower the energetic cost of computing. In vision, retinal ganglion cells partition visual space into approximately circular regions termed receptive fields (RFs). Average RF shapes are such that they would provide maximal spatial resolution if they were centered on a perfect lattice. However, individual shapes have fine-scale irregularities. Here, we find that irregular RF shapes increase the spatial resolution in the presence of lattice irregularities from ≈60% to ≈92% of that possible for a perfect lattice. Optimization of RF boundaries around their fixed center positions reproduced experimental observations neuron-by-neuron. Our results suggest that lattice irregularities determine the shapes of retinal RFs and that similar algorithms can improve the performance of retinal prosthetics, where substantial irregularities arise at their interface with neural tissue.

Keywords: information theory | neural coding | optimal design | retina
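The abstract's resolution measure is information-theoretic and its boundary-optimization procedure is not reproduced here. As a loose illustration of the qualitative premise only, the sketch below (all parameter values are assumptions) shows that fixed circular RFs cover visual space less uniformly once their centers are jittered off a perfect lattice, whereas boundaries adapted to the actual center positions (here a simple Voronoi-style nearest-center rule, not the paper's method) tile space without gaps or overlaps regardless of jitter.

```python
import numpy as np

rng = np.random.default_rng(2)

n, r = 8, 1 / np.sqrt(np.pi)                  # 8x8 mosaic; circle area = 1 lattice cell
px = np.linspace(0.0, n, 40 * n)
gx, gy = map(np.ravel, np.meshgrid(px, px))   # fine grid of image pixels
lattice = np.stack(np.meshgrid(np.arange(n) + 0.5,
                               np.arange(n) + 0.5), -1).reshape(-1, 2)

for jitter in (0.0, 0.25):                    # center jitter, in lattice-spacing units
    centers = lattice + rng.normal(0.0, jitter, lattice.shape)
    d = np.hypot(gx[:, None] - centers[:, 0], gy[:, None] - centers[:, 1])
    once = ((d < r).sum(axis=1) == 1).mean()  # fraction covered by exactly one circle
    print(f"jitter={jitter:.2f}: single-coverage fraction, circular RFs = {once:.2f}")

# By contrast, nearest-center ("adapted boundary") RFs assign every pixel to
# exactly one cell, so their single-coverage fraction is 1.0 at any jitter.
```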
We analyze a neural network model of the Eriksen task: a two-alternative forced-choice task in which subjects must correctly identify a central stimulus and disregard flankers that may or may not be compatible with it. We linearize and decouple the model, deriving a reduced drift-diffusion process with variable drift rate that describes the accumulation of net evidence in favor of either alternative, and we use this to analytically describe how accuracy and response time data depend on model parameters. Such analyses both assist parameter tuning in network models and suggest explanations of changing drift rates in terms of attention. We compare our results with numerical simulations of the full nonlinear model and with empirical data, showing good fits to both with fewer parameters than the full network model requires.
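A minimal sketch of the reduced model described above: a drift-diffusion process whose drift rate varies over time. The simulation function, drift schedules, and all numerical values below are illustrative assumptions, not the paper's fitted parameters. On incompatible trials the drift starts negative (flankers dominate) and ramps positive as attention narrows to the target, which qualitatively produces the lower accuracy and longer response times characteristic of flanker interference.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ddm(drift_fn, threshold=0.5, noise=0.33, dt=0.002, t_max=2.0,
                 n_trials=2000):
    """Euler-Maruyama simulation of dx = A(t) dt + c dW with absorbing
    boundaries at +/- threshold; returns accuracy and mean response time.
    The rare trials that never reach a boundary are discarded."""
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while t < t_max:
            x += drift_fn(t) * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
            if abs(x) >= threshold:
                rts.append(t)
                correct.append(x > 0)   # upper boundary = correct response
                break
    return np.mean(correct), np.mean(rts)

# Hypothetical drift schedules: on compatible trials all stimuli support the
# target; on incompatible trials flankers initially push evidence toward the
# wrong response, and attention ramps the drift back toward the target.
compatible   = lambda t: 1.0
incompatible = lambda t: -1.0 + 2.0 * min(t / 0.25, 1.0)

for name, drift in [("compatible", compatible), ("incompatible", incompatible)]:
    acc, rt = simulate_ddm(drift)
    print(f"{name:12s}: accuracy = {acc:.2f}, mean RT = {1000 * rt:.0f} ms")
```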
The Eriksen task is a classical paradigm that explores the effects of competing sensory inputs on response tendencies, and the nature of selective attention in controlling these processes. In this task, conflicting flanker stimuli interfere with the processing of a central target, especially on short reaction-time trials. This task has been modeled by neural networks and more recently by a normative Bayesian account. Here, we analyze the dynamics of the Bayesian models, which are nonlinear, coupled discrete-time dynamical systems, by considering simplified, approximate systems that are linear and decoupled. Analytical solutions of these allow us to describe how posterior probabilities and psychometric functions depend upon model parameters. We compare our results with numerical simulations of the original models and derive fits to experimental data, showing good agreement with both. We also investigate continuum limits of these simplified dynamical systems, and demonstrate that Bayesian updating is closely related to a drift-diffusion process, whose implementation in neural network models has been extensively studied. This provides insight into how neural substrates can implement Bayesian computations.
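The link between Bayesian updating and drift-diffusion can be made concrete in a few lines. For Gaussian observations under two hypotheses, the posterior log-odds accumulates the log-likelihood ratio of each sample and therefore performs a biased random walk, the discrete-time counterpart of a drift-diffusion process. The values of mu, sigma, and n_steps below are illustrative assumptions; the two-hypothesis Gaussian setup is a standard textbook case, not the paper's specific model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypotheses about the stimulus: H+ (mean +mu) and H- (mean -mu).
mu, sigma, n_steps = 0.5, 1.0, 200
y = rng.normal(mu, sigma, size=n_steps)   # noisy samples generated under H+

# Sequential Bayesian updating of the posterior log-odds. Each Gaussian
# log-likelihood ratio contributes 2*mu*y_t / sigma^2, so the log-odds is a
# random walk with mean drift 2*mu^2 / sigma^2 per step -- the discrete-time
# analog of a drift-diffusion process.
log_odds = np.cumsum(2.0 * mu * y / sigma**2)
posterior = 1.0 / (1.0 + np.exp(-log_odds))   # P(H+ | y_1..y_t)

print(f"log-odds after {n_steps} steps: {log_odds[-1]:.1f}")
print(f"posterior P(H+): {posterior[-1]:.4f}")
print(f"empirical drift per step: {np.mean(np.diff(log_odds)):.3f} "
      f"(theory: {2 * mu**2 / sigma**2:.3f})")
```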