This paper introduces a self-tuning mechanism for capturing rapid adaptation to changing visual stimuli by a population of neurons. Building upon the principles of efficient sensory encoding, we show how neural tuning curve parameters can be continually updated to optimally encode a time-varying distribution of recently detected stimulus values. We implemented this mechanism in a neural model that produces human-like estimates of self-motion direction (i.e., heading) based on optic flow. The parameters of speed-sensitive units were dynamically tuned in accordance with efficient sensory encoding such that the network remained sensitive as the distribution of optic flow speeds varied. In two simulation experiments, we found that the model with dynamic tuning yielded more accurate, shorter-latency heading estimates than the model with static tuning. We conclude that dynamic efficient sensory encoding offers a plausible approach for capturing adaptation to varying visual environments in biological visual systems and neural models alike.
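The core idea, continually re-allocating tuning curves so that unit density tracks the distribution of recently observed stimulus values, can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: the function names, the sliding-window quantile-placement rule, and the Gaussian tuning shape are all stand-ins for whatever the authors actually used.

```python
import numpy as np

def retune_centers(recent_stimuli, n_units):
    """Place tuning-curve centers at quantiles of the recent stimulus
    distribution, so more units cover more probable stimulus values
    (the efficient-coding heuristic)."""
    qs = (np.arange(n_units) + 0.5) / n_units
    return np.quantile(recent_stimuli, qs)

def population_response(stimulus, centers, width=1.0):
    """Gaussian tuning curves evaluated at a single stimulus value."""
    return np.exp(-0.5 * ((stimulus - centers) / width) ** 2)

# Example: the optic flow speed distribution shifts upward over time,
# and the centers of the speed-sensitive units shift with it.
rng = np.random.default_rng(0)
slow = rng.gamma(2.0, 1.0, 500)   # recently observed speeds: low
fast = rng.gamma(2.0, 4.0, 500)   # distribution shifts to higher speeds
c_slow = retune_centers(slow, 8)
c_fast = retune_centers(fast, 8)
r = population_response(3.0, c_slow)
```

With static tuning, `c_slow` would be used for both regimes and most units would fall outside the range of the faster speeds; re-estimating the centers from a window of recent stimuli keeps the population informative as the environment changes.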
Self-motion produces characteristic patterns of optic flow on the eye of the mobile observer. Movement along straight, linear paths without eye movements yields motion that radiates from the direction of travel (heading). The observer experiences more complex motion patterns while moving along more general curvilinear (e.g., circular) paths, the appearance of which depends on the radius of the curved path (path curvature) and the direction of gaze. Neurons in brain area MSTd of primate visual cortex exhibit tuning to radial motion patterns and have been linked with linear heading perception. MSTd also contains neurons that exhibit tuning to spirals, but their function is not well understood. Using a computational model, we investigated whether MSTd, through its diverse pattern tuning, could support estimation of a broader range of self-motion parameters from optic flow than has been previously demonstrated. We used deep learning to decode these parameters from signals produced by neurons tuned to radial expansion, spiral, ground flow, and other patterns in a mechanistic neural model of MSTd. Specifically, we found that we could accurately decode the clockwise/counterclockwise sign of the curvilinear path and the gaze direction relative to the path tangent from spiral cells; heading from radial cells; and the curvature (radius) of the curvilinear path from activation produced by both radial and spiral populations. We demonstrate accurate decoding of these linear and curvilinear self-motion parameters in both synthetic and naturalistic videos of simulated self-motion. Estimates remained stable over time, while also rapidly adapting to dynamic changes in the observer's curvilinear self-motion. Our findings suggest that specific populations of neurons in MSTd could effectively signal important aspects of the observer's linear and curvilinear self-motion.
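The decoding step described above can be illustrated with a toy example: a population of heading-tuned "radial" units whose responses are read out by a trained decoder. The paper uses deep learning on a mechanistic MSTd model; here we substitute a synthetic von Mises-style population and a simple least-squares readout purely to show the shape of the computation. All names, tuning parameters, and the linear readout are our assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def radial_cell_responses(heading_deg, preferred_deg, kappa=2.0):
    """Circular (von Mises-like) tuning of heading-sensitive units."""
    delta = np.deg2rad(heading_deg - preferred_deg)
    return np.exp(kappa * (np.cos(delta) - 1.0))

preferred = np.linspace(-90, 90, 16)   # preferred headings (deg)

# Training set: known headings paired with noisy population responses.
train_headings = rng.uniform(-45, 45, 200)
X = np.stack([radial_cell_responses(h, preferred) for h in train_headings])
X += 0.05 * rng.standard_normal(X.shape)

# Linear least-squares readout (a stand-in for the paper's deep decoder).
w, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], train_headings, rcond=None)

def decode(heading):
    """Decode heading from the population response it evokes."""
    r = radial_cell_responses(heading, preferred)
    return np.r_[r, 1.0] @ w

est = decode(10.0)
```

The same pattern, a population code feeding a trained readout, extends to the other quantities decoded in the paper (path sign, gaze-relative direction, path curvature), with spiral-cell or combined radial-plus-spiral activations as the input features.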