Eye movements are an integral and essential part of our human foveated vision system. Here, we review recent work on voluntary eye movements, with an emphasis on the last decade. More selectively, we address two of the most important questions about saccadic and smooth pursuit eye movements in natural vision. First, why do we saccade to where we do? We argue that, as for many other aspects of vision, several different circuits related to salience, object recognition, actions, and value ultimately interact to determine gaze behavior. Second, how are pursuit eye movements and the perceptual experience of visual motion related? We show that motion perception and pursuit have a lot in common, but they also have separate noise sources that can lead to dissociations between them. We emphasize the point that pursuit actively modulates visual perception and that it can provide valuable information for motion perception.
People can direct their gaze at a visual target for extended periods of time. Yet even during fixation the eyes make small, involuntary movements (e.g., tremor, drift, and microsaccades). This can be a problem in experiments that require stable fixation. The shape of a fixation target can be easily manipulated in many experimental paradigms. Thus, from a purely methodological point of view, it would be useful to know whether a particular fixation-target shape minimizes involuntary eye movements during fixation, because that shape could then be used in experiments that require stable fixation. Motivated by this methodological question, the current experiments tested whether the shape of a fixation target can be used to reduce eye movements during fixation. In two separate experiments, subjects directed their gaze at a fixation target for 17 s on each trial. The shape of the fixation target varied from trial to trial and was drawn from a set of seven shapes whose use has been frequently reported in the literature. To quantify fixation stability, we computed spatial dispersion and microsaccade rate. We found that only a target shape that looks like a combination of bull's eye and crosshair resulted in both low dispersion and a low microsaccade rate. We recommend the combination of bull's eye and crosshair as the fixation-target shape for experiments that require stable fixation.
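The two stability measures mentioned above can be computed from raw gaze samples. The following is a minimal sketch, not the authors' actual analysis pipeline: `fixation_stability`, the sampling rate, and the velocity criterion are all hypothetical choices for illustration (published microsaccade detection typically uses more elaborate velocity-based algorithms).

```python
import numpy as np

def fixation_stability(x, y, fs=500.0, vel_thresh=15.0):
    """Summarize fixation stability from gaze samples.

    x, y       : gaze position traces in degrees of visual angle
    fs         : sampling rate in Hz (assumed value)
    vel_thresh : speed criterion in deg/s for saccade-like events (assumed)
    """
    # Spatial dispersion: root-mean-squared distance from the mean position.
    dx, dy = x - x.mean(), y - y.mean()
    dispersion = np.sqrt(np.mean(dx**2 + dy**2))

    # Crude microsaccade count: onsets of runs whose speed exceeds threshold.
    vx = np.gradient(x) * fs
    vy = np.gradient(y) * fs
    speed = np.hypot(vx, vy)
    fast = speed > vel_thresh
    n_events = int(np.sum(fast[1:] & ~fast[:-1]))  # slow-to-fast transitions
    rate = n_events / (len(x) / fs)                # events per second
    return dispersion, rate
```

On a perfectly stable trace both measures are zero; noisier fixation raises the dispersion, and fast jerk-like excursions raise the event rate.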
Due to the inhomogeneous visual representation across the visual field, humans use peripheral vision to select objects of interest and foveate them with saccadic eye movements for further scrutiny. Thus, peripheral information is usually available before and foveal information after a saccade. In this study, we investigated the integration of information across saccades. We measured reliabilities (i.e., the inverse of variance) separately in a presaccadic peripheral and a postsaccadic foveal orientation-discrimination task. From this, we predicted trans-saccadic performance and compared it to observed values. We show that the integration of incongruent peripheral and foveal information is biased according to their relative reliabilities and that the reliability of the trans-saccadic information equals the sum of the peripheral and foveal reliabilities. Both results are consistent with, and indistinguishable from, statistically optimal integration according to the maximum-likelihood principle. Additionally, we tracked the gathering of information around the time of the saccade with high temporal precision using a reverse-correlation method. Information gathering starts to decline between 100 and 50 ms before saccade onset and recovers immediately after saccade offset. Altogether, these findings show that the human visual system can effectively use peripheral and foveal information about object features and that visual perception does not simply correspond to disconnected snapshots during each fixation.
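The maximum-likelihood prediction described above has a simple closed form: each cue is weighted by its relative reliability, and the reliabilities add. A minimal sketch (the function name and the example numbers are illustrative, not taken from the study):

```python
def ml_integrate(mu_p, var_p, mu_f, var_f):
    """Reliability-weighted (maximum-likelihood) combination of two cues.

    mu_p, var_p : peripheral estimate and its variance
    mu_f, var_f : foveal estimate and its variance
    Returns the combined estimate and its variance.
    """
    r_p, r_f = 1.0 / var_p, 1.0 / var_f   # reliability = inverse variance
    w_p = r_p / (r_p + r_f)               # relative weight of the peripheral cue
    mu = w_p * mu_p + (1.0 - w_p) * mu_f  # reliability-weighted average
    r = r_p + r_f                         # reliabilities sum under independence
    return mu, 1.0 / r

# Illustrative numbers: a noisy peripheral estimate (10 deg, variance 4)
# and a more reliable foveal estimate (12 deg, variance 1) combine to
# 11.6 deg with variance 0.8, i.e., better than either cue alone.
mu, var = ml_integrate(10.0, 4.0, 12.0, 1.0)
```

Note that the combined variance (0.8) is smaller than the better single-cue variance (1.0), which is the signature of optimal integration reported in the abstract.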
Humans shift their gaze to a new location several times per second. It is still unclear what determines where they look next. Fixation behavior is influenced by the low-level salience of the visual stimulus, such as luminance, contrast, and color, but also by high-level task demands and prior knowledge. Under natural conditions, different sources of information might conflict with each other and have to be combined. In our paradigm, we trade off visual salience against expected value. We show that both salience and value information influence the saccadic end point within an object, but with different time courses. The relative weights of salience and value are not constant but vary from eye movement to eye movement, depending critically on the availability of the value information at the time when the saccade is programmed. Short-latency saccades are determined mainly by salience, but value information is taken into account for long-latency saccades. We present a model that describes these data by dynamically weighting and integrating detailed topographic maps of visual salience and value. These results support the notion of independent neural pathways for the processing of visual information and value.

neuroeconomics | decision-making | cue combination | visual perception

Because of foveal specialization for high acuity and color vision, humans frequently move their eyes to project different parts of the visual scene onto the fovea. Although the basic networks for the programming and execution of saccades have been studied for decades (1, 2), surprisingly little is known about the neural processes that underlie selection of the fixation point of the next saccade. To some degree, the weighted combination of basic visual-stimulus features can predict saccadic eye movements in natural scenes (3-5).
These basic stimulus features are, among others, local differences in luminance, color, or orientation and are combined by the visual system in a bottom-up, image-based salience map. However, the salience difference between fixated and nonfixated image locations is typically rather small (6, 7), indicating that the influence of salience may be modulated by other factors. Visual salience, by definition, is determined by features of the visual scene alone and therefore is determined exclusively by visual bottom-up processing. Other factors reflect the influence of top-down processing. Task demands, for example, impose constraints on gaze patterns in different activities such as visual search (8), manipulating an object (9), playing ball sports, preparing a cup of tea (10), and navigating between obstacles (11). In all these examples, gaze is concentrated on objects that are relevant for the task. Along different lines, recent research in neuroeconomics has used saccadic eye movements as a tool to uncover the neural bases of primate choice behavior. The results of these experiments indicate that value can be an important determinant of the neural activity underlying the selection of a saccadic target when one object bears...
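The dynamic weighting of salience and value maps described above can be sketched very simply. This is an illustrative toy, not the authors' model: the function name, the exponential time course, and the time constant `tau` are all assumptions made here to show the idea that the weight of value grows with saccade latency.

```python
import numpy as np

def combined_priority(salience, value, latency_ms, tau=100.0):
    """Pick a saccade target from a weighted mix of two topographic maps.

    salience, value : 2-D arrays of equal shape (normalized maps)
    latency_ms      : saccade latency; longer latencies weight value more
    tau             : hypothetical time constant of the weight shift, in ms
    """
    # Value weight rises from 0 toward 1 with latency (assumed time course).
    w_value = 1.0 - np.exp(-latency_ms / tau)
    priority = (1.0 - w_value) * salience + w_value * value
    # The saccade targets the peak of the combined priority map.
    return np.unravel_index(np.argmax(priority), priority.shape)
```

With this toy, a zero-latency saccade goes to the salience peak, while a long-latency saccade goes to the value peak, mirroring the short- versus long-latency dissociation reported in the abstract.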
Spering M, Schütz AC, Braun DI, Gegenfurtner KR. Keep your eyes on the ball: smooth pursuit eye movements enhance prediction of visual motion.
Abstract. Domain ontologies very rarely model verbs as relations holding between concepts. However, the role of the verb as a central connecting element between concepts is undeniable. Verbs specify the interaction between the participants of some action or event by expressing relations between them. In parallel, it can be argued from an ontology-engineering point of view that verbs express a relation between two classes that specify domain and range. The work described here is concerned with relation extraction for ontology extension along these lines. We describe a system (RelExt) that is capable of automatically identifying highly relevant triples (pairs of concepts connected by a relation) over concepts from an existing ontology. RelExt works by extracting relevant verbs and their grammatical arguments (i.e., terms) from a domain-specific text collection and computing corresponding relations through a combination of linguistic and statistical processing. The paper includes a detailed description of the system architecture and evaluation results on a constructed benchmark. RelExt has been developed in the context of the SmartWeb project, which aims at providing intelligent information services via mobile broadband devices on the FIFA World Cup that will be hosted in Germany in 2006. Such services include location-based navigational information as well as question answering in the football domain.
Visual processing varies dramatically across the visual field. These differences start in the retina and continue all the way to the visual cortex. Despite these differences in processing, the perceptual experience of humans is remarkably stable and continuous across the visual field. Research in the last decade has shown that processing in peripheral and foveal vision is not independent, but is more directly connected than previously thought. We address three core questions on how peripheral and foveal vision interact, and review recent findings on potentially related phenomena that could provide answers to these questions. First, how is the processing of peripheral and foveal signals related during fixation? Peripheral signals seem to be processed in foveal retinotopic areas to facilitate peripheral object recognition, and foveal information seems to be extrapolated toward the periphery to generate a homogeneous representation of the environment. Second, how are peripheral and foveal signals re-calibrated? Transsaccadic changes in object features lead to a reduction in the discrepancy between peripheral and foveal appearance. Third, how is peripheral and foveal information stitched together across saccades? Peripheral and foveal signals are integrated across saccadic eye movements to average percepts and to reduce uncertainty. Together, these findings illustrate that peripheral and foveal processing are closely connected, mastering the compromise between a large peripheral visual field and high resolution at the fovea.

Brief overview of differences between peripheral and foveal vision

Although the human eye is often compared to a photographic camera, processing across the visual field is not homogeneous like in a camera film or a digital sensor.
First, there are gaps in sensory information due to several anatomical properties of the eye: (a) there are no photoreceptors in the optic disc, where the axons of the retinal ganglion cells exit the eyeball; this leads to a blind spot (Mariotte, 1740, cited after Ferree & Rand, 1912; Grzybowski & Aydin, 2007). (b) The center of the retina contains only cone, but no rod, photoreceptors (Schultze, 1866; Oesterberg, 1935; Curcio, Sloan, Kalina, & Hendrickson, 1990), leading to a central scotoma under dark illumination conditions. (c) Because photoreceptors are located on the back side of the retina, away from the light, blood vessels cast shadows on them (Purkinje, 1819; von Helmholtz, 1867; Evans, 1927; Adams & Horton, 2002). The second striking difference to a photographic camera is that the processing of visual signals varies quite dramatically across the visual field. Here, an important distinction arises between the center of the visual field, called the fovea, and the rest, called the periphery. We only briefly highlight some of the key differences in processing and perception between the fovea and the periphery because these have been reviewed in detail elsewhere.
The human motor system and muscles are subject to fluctuations in the short and long term. Motor adaptation is classically thought of as a low-level process that compensates for the error between predicted and executed movements in order to maintain movement accuracy. Contrary to a low-level account, accurate movements might be only a means to support high-level behavioral and perceptual goals. To isolate the influence of high-level goals in adaptation of saccadic eye movements, we manipulated perceptual task requirements in the absence of low-level errors. Observers had to discriminate one character within a peripheral array of characters. Between trials, the location of this character within the array was changed. This manipulation led to an immediate strategic change and a slower, gradual adaptation of saccade amplitude and direction. These changes had a similar magnitude to classical saccade adaptation and transferred at least partially to reactive saccades without a perceptual task. These results suggest that a perceptual task can modify oculomotor commands by generating a top-down error signal in saccade maps just like a bottom-up visual position error. Hence saccade adaptation not only maintains saccadic targeting accuracy, but also optimizes gaze behavior for the behavioral goal, showing that perception shapes even low-level oculomotor mechanisms.