Perception relies on the response of populations of neurons in sensory cortex. How the response profile of a neuronal population gives rise to perception and perceptual discrimination has been conceptualized in various ways. Here we suggest that neuronal population responses represent information about our environment explicitly as Fisher information (FI), a local measure of how precisely the sensory input can be estimated. We show how this sensory information can be read out and combined to infer, from the available information profile, which stimulus value is perceived in a fine discrimination task. In particular, we propose that the perceived stimulus corresponds to the value for which the information is equal for the two alternative directions, and compare this prediction to standard decoding models from the literature (population vector, maximum likelihood, and maximum-a-posteriori Bayesian inference). The models are applied to human performance in a motion discrimination task in which task-irrelevant motion in the spatial surround of the target stimulus induces perceptual misjudgements of the target direction of motion (motion repulsion). Incorporating the neurophysiological finding that surround motion suppresses neuronal responses to the central target motion, all models predicted the pattern of perceptual misjudgements. The variation of discrimination thresholds (the error on the perceived value) was likewise explained by changes in the total FI content across surround motion directions. The proposed FI decoding scheme incorporates recent neurophysiological evidence from macaque visual cortex showing that perceptual decisions do not rely on the most active neurons, but rather on the most informative neuronal responses. We statistically compared the predictive power of the FI decoding approach and the standard decoding models. Notably, all models reproduced the variation of the perceived stimulus values for different surrounds, but implied different neuronal tuning characteristics underlying perception. Compared with the FI approach, the predictive power of the standard models rested on neurons with far wider tuning and stronger surround suppression. Our study demonstrates that perceptual misjudgements can be based on neuronal populations that explicitly encode the available sensory information, and it provides testable neurophysiological predictions about the neuronal tuning characteristics underlying human perceptual decisions.
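The decoding schemes named above can be made concrete with a short simulation. The sketch below is illustrative only and is not the study's published model: it assumes von Mises tuning curves, independent Poisson spike counts, and a multiplicative surround suppression that is strongest for neurons preferring the surround direction. All parameter values (number of neurons, tuning width, gains, suppression strength, pooling window) are arbitrary choices, and the "equal-information" readout is only one plausible reading of the proposal, implemented here as the balance point at which the pooled response profile carries equal information for the two alternative directions.

```python
import numpy as np

# Hedged sketch of the readout schemes compared in the abstract. Tuning model
# (von Mises curves, independent Poisson noise) and all parameters are
# illustrative assumptions, not values from the study.

N = 64
prefs = np.linspace(0.0, 2 * np.pi, N, endpoint=False)  # preferred directions
kappa = 2.0                                             # tuning width (assumed)
surround = np.deg2rad(60.0)                             # surround motion direction
s = 0.5                                                 # suppression strength (assumed)
# Surround motion suppresses neurons preferring nearby directions most strongly.
gains = 20.0 * (1.0 - s * np.exp(kappa * (np.cos(prefs - surround) - 1.0)))

def rates(theta):
    """Mean population response to a center motion in direction theta."""
    return gains * np.exp(kappa * (np.cos(theta - prefs) - 1.0))

def fisher_info(theta, eps=1e-4):
    """FI(theta) = sum_i f_i'(theta)^2 / f_i(theta) for independent Poisson
    neurons; 1/sqrt(FI) bounds the error of an unbiased estimate."""
    df = (rates(theta + eps) - rates(theta - eps)) / (2.0 * eps)
    return np.sum(df ** 2 / rates(theta))

rng = np.random.default_rng(0)
stim = np.deg2rad(90.0)                                 # true center direction
r = rng.poisson(rates(stim))                            # one noisy trial

# 1) Population vector: response-weighted circular mean of preferred directions.
pv = np.angle(np.sum(r * np.exp(1j * prefs))) % (2 * np.pi)

# 2) Maximum likelihood: grid search over the Poisson log-likelihood
#    (with a flat prior, the MAP estimate coincides with this).
grid = np.linspace(0.0, 2 * np.pi, 1440, endpoint=False)
ml = grid[np.argmax([np.sum(r * np.log(rates(t)) - rates(t)) for t in grid])]

# 3) Equal-information readout (one possible reading of the proposal): treat
#    each response as the information its neuron contributes near its preferred
#    direction, and report the point at which the summed information for the
#    two alternatives (clockwise vs. counter-clockwise) is balanced.
offs = np.angle(np.exp(1j * (prefs - stim)))            # signed offset to target
near = np.abs(offs) < np.pi / 2                         # local pool (assumed)
order = np.argsort(offs[near])
cum = np.cumsum(r[near][order])
eq = (stim + offs[near][order][np.searchsorted(cum, cum[-1] / 2)]) % (2 * np.pi)

print("perceived direction (deg):", np.rad2deg([pv, ml, eq]))
print("threshold proxy (deg):", np.rad2deg(1.0 / np.sqrt(fisher_info(stim))))
```

Because the assumed suppression weakens the responses of neurons tuned near the surround direction, both the population vector and the equal-information balance point shift away from the surround, reproducing the qualitative signature of motion repulsion; 1/sqrt(FI) serves as a Cramér-Rao proxy for the discrimination threshold.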
Visual perception is strongly shaped by the spatial context in which stimuli are presented. Using center-surround configurations with oriented stimuli, recent studies suggest that voluntary attention critically determines which stimuli in the surround affect the percept of the central stimulus. However, evidence for attentional influences on center-surround interactions has been restricted to the spatial selection of a few among several surround stimuli of different orientations. Here, we extend these insights on center-surround interactions to the motion domain and show that the influence of surround information is critically shaped by feature-based attention. We used motion repulsion as an experimental probe. When a central target motion was surrounded by a ring of motion, subjects misperceived the direction of the foveal target for particular center-surround direction differences (repulsion condition). Adding an appropriate second motion in the surround counterbalanced the effect, eliminating the repulsion. Directing feature-based attention to one of the two superimposed directions of motion in the surround reinstated the strong contextual effects: the task relevance of the attended surround motion component induced a strong motion repulsion on the foveally presented stimulus. In addition, the task relevance of the foveal stimulus also induced motion repulsion on the attended surround direction of motion. Our results show that feature-based attention to the surround strongly modulates the veridical perception of a foveally presented motion. The observed attentional effects reflect a feature-based mechanism that shapes human perception by modulating spatial interactions among sensory signals and enhancing the attended direction of motion.
Brincat and Westheimer [Journal of Neurophysiology 83 (2000) 1900] reported facilitating interactions in the discrimination of spatially separated target orientations and co-linear inducing orientations by human observers. With smaller gaps between stimuli (short-range effects), facilitating interactions were found to depend on the contrast polarity of the stimuli. With larger gaps (long-range effects), only co-linearity of the stimuli seemed necessary to produce facilitation. In our study, we investigate how facilitating interactions depend on the intensity (luminance) of line stimuli by measuring detection thresholds for a target line separated from the end of an inducing line by co-axial gaps ranging from 5 to 200 minutes of visual arc. We find facilitating interactions between target and inducing orientations, producing short-range and long-range effects similar to those reported by Brincat and Westheimer. In addition, detection thresholds as a function of the co-axial separation between target and inducing line reveal an interaction between the spatial regime of facilitating effects and the luminance of the stimuli. Short-range effects are found to be sensitive to changes in local intensity, while long-range effects remain unaffected.