Due to the inhomogeneous visual representation across the visual field, humans use peripheral vision to select objects of interest and foveate them by saccadic eye movements for further scrutiny. Thus, there is usually peripheral information available before and foveal information available after a saccade. In this study we investigated the integration of information across saccades. We measured reliabilities (i.e., the inverse of variance) separately in a presaccadic peripheral and a postsaccadic foveal orientation-discrimination task. From this, we predicted trans-saccadic performance and compared it to observed values. We show that the integration of incongruent peripheral and foveal information is biased according to their relative reliabilities and that the reliability of the trans-saccadic information equals the sum of the peripheral and foveal reliabilities. Both results are consistent with and indistinguishable from statistically optimal integration according to the maximum-likelihood principle. Additionally, we tracked the gathering of information around the time of the saccade with high temporal precision by using a reverse-correlation method. Information gathering starts to decline between 100 and 50 ms before saccade onset and recovers immediately after saccade offset. Altogether, these findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation.
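The maximum-likelihood prediction described in this abstract follows a standard reliability-weighting rule for independent cues: the combined estimate is a weighted average with weights proportional to each cue's reliability, and the combined reliability is the sum of the individual reliabilities. A minimal numerical sketch (function name and example values are illustrative, not taken from the study):

```python
import math

def integrate_mle(mu_p, sigma_p, mu_f, sigma_f):
    """Maximum-likelihood integration of a peripheral and a foveal
    estimate. Reliability is defined as the inverse of variance."""
    r_p = 1.0 / sigma_p ** 2   # peripheral reliability
    r_f = 1.0 / sigma_f ** 2   # foveal reliability
    w_p = r_p / (r_p + r_f)    # weight of the peripheral cue
    mu_combined = w_p * mu_p + (1.0 - w_p) * mu_f
    r_combined = r_p + r_f     # predicted trans-saccadic reliability
    return mu_combined, math.sqrt(1.0 / r_combined)

# Example: peripheral estimate 10 deg (sd 4) vs. foveal estimate 12 deg (sd 2).
# The combined estimate is pulled toward the more reliable foveal cue, and
# its sd is smaller than that of either single cue.
mu, sigma = integrate_mle(10.0, 4.0, 12.0, 2.0)
```

Note that the combined standard deviation is always below the smaller of the two single-cue standard deviations, which is the signature of optimal integration the study tests for.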
When interacting with our environment we generally make use of egocentric and allocentric object information by coding object positions relative to the observer or relative to the environment, respectively. Bayesian theories suggest that the brain integrates both sources of information optimally for perception and action. However, experimental evidence for egocentric and allocentric integration is sparse and has only been studied using abstract stimuli lacking ecological relevance. Here, we investigated the use of egocentric and allocentric information during memory-guided reaching to images of naturalistic scenes. Participants encoded a breakfast scene containing six objects on a table (local objects) and three objects in the environment (global objects). After a 2 s delay, a visual test scene reappeared for 1 s in which one local object was missing (the target) and, of the remaining objects, one, three, or five local objects or one of the global objects were shifted to the left or to the right. The offset of the test scene prompted participants to reach to the target as precisely as possible. Only local objects served as potential reach targets and thus were task-relevant. When shifting objects, we predicted accurate reaching if participants used only egocentric coding of object position, and systematic shifts of reach endpoints if allocentric information was used for movement planning. We found that reaching movements were strongly affected by allocentric shifts, showing an increase in endpoint errors in the direction of object shifts that grew with the number of local objects shifted. No effect occurred when one local or one global object was shifted. Our findings suggest that allocentric cues are indeed used by the brain for memory-guided reaching towards targets in naturalistic visual scenes. Moreover, the integration of egocentric and allocentric object information seems to depend on the extent of changes in the scene.
When judging the heaviness of two objects with equal mass, people perceive the smaller and denser of the two as being heavier. Despite the large number of theories, covering bottom-up and top-down approaches, none of them can fully account for all aspects of this size-weight illusion and thus for human heaviness perception. Here we propose a new maximum-likelihood estimation model which describes the illusion as the weighted average of two heaviness estimates with correlated noise: one estimate derived from the object's mass, and the other from the object's density, with the estimates' weights based on their relative reliabilities. While information about mass can be perceived directly, information about density will in some cases first have to be derived from mass and volume. However, according to our model, at the crucial perceptual level heaviness judgments will be biased by an object's density, not by its size. In two magnitude estimation experiments, we tested model predictions for the visual and the haptic size-weight illusion. Participants lifted objects which varied in mass and density. We additionally varied the reliability of the density estimate by varying the quality of either visual (Experiment 1) or haptic (Experiment 2) volume information. As predicted, with increasing quality of volume information, heaviness judgments were increasingly biased towards the object's density: objects of the same density were perceived as more similar, and big objects were perceived as increasingly lighter than small (denser) objects of the same mass. This perceived difference increased with an increasing difference in density. In an additional two-alternative forced-choice heaviness experiment, we replicated the finding that illusion strength increased with the quality of volume information (Experiment 3). Overall, the results strongly corroborate our model, which seems promising as a starting point for a unifying framework for the size-weight illusion and human heaviness perception.
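The weighted average with correlated noise that this abstract describes can be sketched with the standard minimum-variance weights for two correlated estimates; the function name and the example values below are illustrative assumptions, not parameters from the study:

```python
def heaviness_estimate(h_mass, h_density, sigma_m, sigma_d, rho):
    """Weighted average of a mass-based and a density-based heaviness
    estimate whose noises are correlated with coefficient rho.
    The weights are the standard minimum-variance (maximum-likelihood)
    solution for two correlated cues."""
    cov = rho * sigma_m * sigma_d
    denom = sigma_m ** 2 + sigma_d ** 2 - 2.0 * cov
    w_mass = (sigma_d ** 2 - cov) / denom   # weight of the mass estimate
    return w_mass * h_mass + (1.0 - w_mass) * h_density

# Improving volume information (smaller sigma_d) shifts weight toward the
# density-based estimate, biasing perceived heaviness toward density, as
# observed in Experiments 1-3:
coarse = heaviness_estimate(5.0, 7.0, 1.0, 2.0, 0.3)  # poor volume info
fine = heaviness_estimate(5.0, 7.0, 1.0, 1.0, 0.3)    # good volume info
```

With better volume information the output moves closer to the density-based estimate, reproducing the model's central prediction that the illusion strengthens as the density estimate becomes more reliable.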
When humans have to choose between different options, they can maximize their payoff by choosing the option that yields the highest reward. Information about reward is not only used to optimize decisions but also for movement preparation, to minimize reaction times to rewarded targets. Here, we show that this is especially true in contexts in which participants additionally have to choose between different options. We probed eye movement preparation by measuring saccade latencies to differently rewarded single targets (single-trials) appearing left or right from fixation. In choice-trials, both targets were displayed and participants were free to decide on one target to receive the corresponding reward. In blocks without choice-trials, single-trial latencies were not or only weakly affected by reward. With choice-trials present, the influence of reward increased with the proportion and difficulty of choices and decreased when a cue indicated that no choice would be necessary. Choices caused a delay in subsequent single-trial responses to the non-chosen option. Taken together, our results suggest that reward affects saccade preparation mainly when the outcome is uncertain and depends on the participants' behavior, for instance when they have to choose between targets differing in reward.
Humans scan their visual environment using saccadic eye movements. Where we look is influenced by bottom-up salience and by top-down factors, like value. For reactive saccades in response to suddenly appearing stimuli, it has been shown that short-latency saccades are biased towards salience, and that top-down control increases with increasing latency. Here, we show, in a series of six experiments, that this transition towards top-down control is not determined by the time it takes to integrate value information into the saccade plan, but by the time it takes to inhibit suddenly appearing salient stimuli. Participants made consecutive saccades to three fixation crosses and a vertical bar consisting of a high-salient and a rewarded low-salient region. Endpoints on the bar were biased towards salience whenever it appeared or reappeared shortly before the last saccade was initiated. This was also true when the eye movement was already planned. When the location of the suddenly appearing salient region was predictable, saccades were aimed in the opposite direction to nullify this sudden-onset effect. Successfully inhibiting salience, however, could only be achieved by previewing the target. These findings highlight the importance of inhibition for top-down eye-movement control.
Positive outcomes of actions can be maximized by choosing the option with the highest reward. For saccades, it has recently been suggested that the necessity to choose is, in fact, an important factor mediating reward effects: latencies to single low-reward targets increased with an increasing proportion of interleaved choice-trials, in which participants were free to choose between two targets to obtain either a high or a low reward. Here, we replicate this finding for manual responses, demonstrating that this effect of choice is a more general, effector-independent phenomenon. Oscillatory activity in the alpha and beta bands in the preparatory period preceding target onset was analysed for a parieto-occipital and a centrolateral region of interest to identify an anticipatory neural biasing mechanism related to visuospatial attention or motor preparation. When the proportion of interleaved choices was high, an increase in lateralized posterior alpha power indicated that the hemifield associated with a low reward was suppressed in preparation for reward-maximizing target selection. The larger the individual increase in lateralized alpha power, the slower the reaction times to low-reward targets. At a broader level, these findings support the notion that reward only affects responses when behaviour can be optimized to maximize positive outcome.
What we see is influenced by where we look. When confronted with multiple relevant targets, inaccurate saccade target selection can impair perceptual performance. Here we ask whether endpoint selection can be optimized by the mechanism maintaining saccade accuracy: saccade adaptation. To this end, we introduced a double-target adaptation task, in which a presaccadic peripheral stimulus (a plaid) splits vertically into its two components (Gabor patches) during horizontal saccades. While both targets were task-relevant, one of them provided more information for the perceptual task, because it could only be identified after the saccade with near-foveal vision. The other target was highly salient and could also be identified in the presaccadic plaid using peripheral vision. This double-target paradigm induced saccade adaptation: without a perceptual task, participants adapted to the salient target. When both targets were judged sequentially, participants mostly adapted to the target they had to judge first. When targets were judged simultaneously, endpoints were biased toward the informative target but showed no gradual learning and fell short of optimality. We observed gradual adaptation when targets shifted randomly, such that a strategic adjustment of endpoints was not possible. Overall, these findings show that when multiple targets compete, our oculomotor system can learn to adjust endpoints in order to maximize information for perception. Yet individual variability and other factors affecting target priority play a crucial role.
Humans and other primates are equipped with a foveated visual system. As a consequence, we reorient our fovea to objects and targets in the visual field that are conspicuous or that we consider relevant or worth looking at. These reorientations are achieved by means of saccadic eye movements. Where we saccade to depends on various low-level factors such as a target's luminance, but also crucially on high-level factors like the expected reward or a target's relevance for perception and subsequent behavior. Here, we review recent findings on how the control of saccadic eye movements is influenced by higher-level cognitive processes. We first describe the pathways by which cognitive contributions can influence the neural oculomotor circuit. Second, we summarize what saccade parameters reveal about cognitive mechanisms, particularly saccade latencies, saccade kinematics, and changes in saccade gain. Finally, we review findings on what renders a saccade target valuable, as reflected in oculomotor behavior. We emphasize that foveal vision of the target after the saccade can constitute an internal reward for the visual system, and that this is reflected in oculomotor dynamics that serve to quickly and accurately provide detailed foveal vision of relevant targets in the visual field.