Human vision is an active process in which information is sampled during brief periods of stable fixation between gaze shifts. Foveal analysis serves to identify the currently fixated object and has to be coordinated with a peripheral selection process that determines the next fixation location. Models of visual search and scene perception typically focus on the latter, without considering foveal processing requirements. We developed a dual-task noise classification technique that enables identification of the information uptake for foveal analysis and peripheral selection within a single fixation. Human observers had to use foveal vision to extract visual feature information (orientation) from different locations for a psychophysical comparison. The selection of to-be-fixated locations was guided by a different feature (luminance contrast). We inserted noise in both visual features and identified the uptake of information by examining correlations between the noise at different points in time and behavior. Our data show that foveal analysis and peripheral selection proceeded completely in parallel. Peripheral processing stopped some time before the onset of an eye movement, but foveal analysis continued during this period. Variations in the difficulty of foveal processing did not influence the uptake of peripheral information or the efficacy of peripheral selection, suggesting that foveal analysis and peripheral selection operated independently. These results provide important theoretical constraints on how to model target selection in conjunction with foveal object identification: in parallel and independently.

Almost all human visually guided behavior relies on the selective uptake of information, due to sensory and cognitive limitations. On the sensory side, the sampling of visual input by the retinal mosaic of photoreceptors becomes increasingly sparse and irregular away from central vision (1).
In addition, fewer cortical neurons are devoted to the analysis of peripheral visual information (cortical magnification) (2, 3). Humans and other animals with so-called foveated visual systems have evolved gaze-shifting mechanisms to overcome these limitations. Saccadic eye movements serve to rapidly and efficiently deploy gaze to objects and regions of interest in the visual field. Sampling the environment appropriately with gaze is the starting point of adaptive visual-motor behavior (4, 5).

Studies have shown that saccadic eye movements are guided by analysis of information in the visual periphery up to 80-100 ms before saccade execution (6-8). However, active vision typically requires humans not only to analyze information in the visual periphery to decide where to fixate next (peripheral selection), but also to analyze the information at the current fixation location (foveal analysis). Not much is known about how foveal analysis and peripheral selection are coordinated and interact. In this regard, we need to know (i) whether and to what extent foveal analysis and peripheral selection are constrained by a common bo...
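The noise classification logic described in the abstract can be illustrated with simulated data: noise is injected at several points in time within a fixation, and the correlation between the noise at each time point and the observer's binary choice reveals when information was taken up. The sketch below is a minimal simulation of this temporal reverse-correlation idea; the trial counts, window boundaries, and decision rule are illustrative assumptions, not the authors' actual analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated experiment: on each trial, feature noise is injected at
# ten time points within a fixation. The simulated observer's binary
# choice is driven only by noise inside an early "uptake" window.
n_trials, n_timepoints = 5000, 10
noise = rng.normal(size=(n_trials, n_timepoints))
uptake_window = slice(0, 4)                    # assumed uptake period
decision_var = noise[:, uptake_window].sum(axis=1)
choice = (decision_var > 0).astype(float)      # simulated responses

# Temporal classification: correlate the noise at each time point with
# the choice. Time points that actually drove behavior show a clearly
# nonzero correlation; the rest hover around zero.
kernel = np.array([np.corrcoef(noise[:, t], choice)[0, 1]
                   for t in range(n_timepoints)])
```

With these settings, the correlation kernel is elevated (around 0.4 per time point) inside the uptake window and near zero outside it, which is how the temporal extent of information uptake can be read off from noise-behavior correlations.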
Simultaneously adapting to retinal motion and non-collinear pursuit eye movement produces a motion aftereffect (MAE) that moves in a different direction from either of the individual adapting motions. Mack, Hill and Kahn (1989, Perception, 18, 649-655) suggested that the MAE was determined by the perceived motion experienced during adaptation. We tested the perceived-motion hypothesis by having observers report perceived direction during simultaneous adaptation. For both central and peripheral retinal motion adaptation, perceived direction did not predict the direction of the subsequent MAE. To explain the findings we propose that the MAE is based on the vector sum of two components, one corresponding to a retinal MAE opposite to the adapting retinal motion and the other corresponding to an extra-retinal MAE opposite to the eye movement. A vector model of this component hypothesis showed that the MAE directions reported in our experiments were the result of an extra-retinal component that was substantially larger in magnitude than the retinal component when the adapting retinal motion was positioned centrally. However, when retinal adaptation was peripheral, the model suggested the magnitudes of the components should be about the same. These predictions were tested in a final experiment that used a magnitude estimation technique. Contrary to the predictions, the results showed no interaction between type of adaptation (retinal or pursuit) and the location of adapting retinal motion. Possible reasons for the failure of the component hypothesis to fully explain the data are discussed.
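The two-component vector model described above, in which the MAE is the vector sum of a retinal component opposite the adapting retinal motion and an extra-retinal component opposite the pursuit direction, can be sketched as follows. The function name, parameterization, and gain values are illustrative assumptions; the abstract does not specify the model's implementation.

```python
import math

def mae_direction(retinal_dir_deg, pursuit_dir_deg,
                  retinal_gain=1.0, extraretinal_gain=1.0):
    """Predict MAE direction and magnitude from the two-component
    vector model: a retinal component opposite the adapting retinal
    motion plus an extra-retinal component opposite the pursuit
    direction. The gains are free parameters (illustrative values)."""
    # Each component points 180 degrees away from its adapting direction.
    rx = retinal_gain * math.cos(math.radians(retinal_dir_deg + 180))
    ry = retinal_gain * math.sin(math.radians(retinal_dir_deg + 180))
    ex = extraretinal_gain * math.cos(math.radians(pursuit_dir_deg + 180))
    ey = extraretinal_gain * math.sin(math.radians(pursuit_dir_deg + 180))
    # The vector sum gives the predicted aftereffect.
    x, y = rx + ex, ry + ey
    return math.degrees(math.atan2(y, x)) % 360, math.hypot(x, y)

# Rightward retinal adaptation (0 deg) with upward pursuit (90 deg) and
# equal gains predicts an MAE at roughly 225 deg (down and to the left),
# i.e. a direction different from either component alone.
direction, magnitude = mae_direction(0, 90)
```

The model's central-versus-peripheral prediction corresponds to varying the relative gains: a larger `extraretinal_gain` pulls the predicted MAE toward the direction opposite the pursuit, while equal gains predict the bisecting direction.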
In typical natural environments, the visual system receives different inputs in quick succession as gaze moves around. We examined whether local trans-saccadic differences in luminance, contrast, and orientation influenced perception and target selection in the eye movement system. Observers initially fixated a peripheral position in a preview display that consisted of four patterns. They subsequently made a saccade to the center of the configuration. During the movement, two of the preview patterns were eliminated, and a small change in the luminance contrast of the remaining patterns was introduced. Observers had to make a second saccade to the test patch with the greater luminance contrast relative to the background. During the second fixation, test patterns could be in the same retinotopic location as one of the preview patterns during the initial fixation (a retinotopic match) or at a retinotopic location that was empty during the preview epoch (a retinotopic onset). We consistently found a preference to fixate retinotopic onsets over retinotopically matched patterns, but only when the patterns were defined by a luminance difference. Direct measurement of perceived luminance showed that the visual response to retinotopically matched inputs was attenuated, possibly because of retinotopic adaptation. As a consequence, the visual system responds more strongly to trans-saccadic differences in local luminance. We argue that a trans-saccadic comparison of the local luminance at the same retinotopic location is a simple way of finding high spatial frequency edge information in the visual scene. This information is important for image segmentation and interpretation.