The vast majority of research on optic flow (retinal motion arising from observer movement) has focused on its use in heading recovery and the guidance of locomotion. Here we demonstrate that optic flow processing also plays an important role in the detection and estimation of scene-relative object movement during self-movement. To do this, the brain identifies and globally discounts (i.e., subtracts) optic flow patterns across the visual scene, a process called flow parsing. Remaining motion can then be attributed to other objects in the scene. In two experiments, stationary observers viewed radial expansion flow fields and a moving probe at various onscreen locations. Consistent with global discounting, perceived probe motion had a significant component toward the center of the display, and the magnitude of this component increased with probe eccentricity. The contribution of local motion processing to this effect was small compared with that of global processing (Experiment 1). Furthermore, global discounting was clearly implicated because these effects persisted even when all the flow in the hemifield containing the probe was removed (Experiment 2). Global processing of optic flow information is shown to play a fundamental role in the recovery of object movement during self-movement.
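The global-discounting step described above can be illustrated with a minimal sketch (this is an illustration, not the authors' implementation; the function names and the expansion-rate parameter are assumptions): the expansion field predicted by self-movement is subtracted from the measured retinal motion at each location, and whatever remains is attributed to object movement.

```python
import numpy as np

def radial_flow(points, foe, rate):
    """Radial expansion flow: each point moves away from the focus of
    expansion (FoE) at a speed proportional to its eccentricity."""
    return rate * (points - foe)

def flow_parse(retinal_motion, points, foe, rate):
    """Globally discount the self-movement component: subtract the
    expansion predicted at each point, leaving motion that can be
    attributed to objects moving in the scene."""
    return retinal_motion - radial_flow(points, foe, rate)

# A probe 10 deg to the right of the FoE that is physically moving
# straight upward in the scene (units are arbitrary deg and deg/s).
foe = np.array([0.0, 0.0])
probe_pos = np.array([[10.0, 0.0]])
object_motion = np.array([[0.0, 2.0]])
retinal = radial_flow(probe_pos, foe, 0.5) + object_motion

recovered = flow_parse(retinal, probe_pos, foe, 0.5)
print(recovered)   # the rightward expansion component has been removed
```

Note that the subtraction uses the flow pattern estimated across the whole field, which is why (as Experiment 2 shows) it can operate even where no local flow is present.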
What visual information do we use to guide movement through our environment? Self-movement produces a pattern of motion on the retina, called optic flow. During translation, the direction of movement (locomotor direction) is specified by the point in the flow field from which the motion vectors radiate: the focus of expansion (FoE) [1-3]. If an eye movement is made, however, the FoE no longer specifies locomotor direction [4], but the 'heading' direction can still be judged accurately [5]. Models have been proposed that remove the confounding rotational motion due to eye movements by decomposing the retinal flow into its separable translational and rotational components ([6-7] are early examples). An alternative theory is based upon the use of invariants in the retinal flow field [8]. The assumption underpinning all these models (see also [9-11]), and the associated psychophysical [5,12,13] and neurophysiological studies [14-16], is that locomotion is guided by optic flow. In this paper we challenge that assumption for the control of the direction of locomotion on foot. We explored the role of perceived location by recording the walking trajectories of people wearing displacing prism glasses. The results suggest that perceived location, rather than optic or retinal flow, is the predominant cue that guides locomotion on foot.
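The claim that the FoE specifies locomotor direction during pure translation can be made concrete with a small sketch (a hypothetical illustration, not any of the cited models): every flow vector points directly away from the FoE, so the FoE can be recovered as the least-squares intersection of the lines drawn through each sample point along its flow direction.

```python
import numpy as np

def estimate_foe(points, flows):
    """Least-squares focus-of-expansion estimate for pure translation.

    Each flow vector points away from the FoE, so the FoE is the point
    closest (in the least-squares sense) to every line through a sample
    point along its flow direction.
    """
    # Unit normals perpendicular to each flow vector.
    n = np.stack([-flows[:, 1], flows[:, 0]], axis=1)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    b = np.sum(n * points, axis=1)          # n . p for each line
    foe, *_ = np.linalg.lstsq(n, b, rcond=None)
    return foe

# Synthetic radial field expanding from (3, -1).
rng = np.random.default_rng(0)
pts = rng.uniform(-10, 10, size=(50, 2))
true_foe = np.array([3.0, -1.0])
flows = 0.4 * (pts - true_foe)

print(estimate_foe(pts, flows))   # recovers approximately [3.0, -1.0]
```

The same least-squares machinery fails once a rotational component is added to the field, which is exactly the problem the decomposition models cited above [6-7] were designed to solve.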
The use of virtual reality (VR) display systems has escalated over the last 5 yr and may have consequences for those working within vision research. This paper provides a brief review of the literature pertaining to the representation of depth in stereoscopic VR displays. Specific attention is paid to the response of the accommodation system with its cross-links to vergence eye movements, and to the spatial errors that arise when portraying three-dimensional space on a two-dimensional window. It is suggested that these factors prevent large depth intervals of three-dimensional visual space being rendered with integrity through dual two-dimensional arrays.
How do we time hand closure to catch a ball? Binocular disparity and optical looming provide two sources of information about an object's motion in depth, but the relative effectiveness of the two cues depends on ball size. Based on results from a virtual reality ball-catching task, we derive a simple model that uses both cues. The model is sensitive to the relative effectiveness of size and disparity and implicitly switches its response to the cue that specifies the earliest arrival and away from a cue that is lost or below threshold. We demonstrate the model's robustness by predicting the response of participants to some very unusual ball trajectories in a virtual reality task.
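A minimal sketch of this kind of cue-switching scheme (an illustrative simplification, not the fitted model from the paper; the thresholds and variable names are assumptions): each cue yields a first-order time-to-contact estimate, tau ≈ θ/θ̇ for looming and the analogous ratio for relative disparity, and the response follows whichever cue signals the earlier arrival. A cue that is lost or below threshold contributes an infinite estimate and is effectively ignored.

```python
import math

def tau_from_looming(theta, theta_dot):
    """Time to contact from optical looming: tau ~ theta / theta_dot
    (first-order estimate for constant approach speed)."""
    if theta_dot <= 0:
        return math.inf          # not looming: cue unavailable
    return theta / theta_dot

def tau_from_disparity(delta, delta_dot):
    """Analogous estimate from relative disparity and its rate."""
    if delta_dot <= 0:
        return math.inf          # no disparity change: cue unavailable
    return delta / delta_dot

def combined_tau(theta, theta_dot, delta, delta_dot):
    """Respond to whichever cue specifies the earlier arrival; a lost
    or sub-threshold cue returns inf and is effectively ignored."""
    return min(tau_from_looming(theta, theta_dot),
               tau_from_disparity(delta, delta_dot))

# A small ball: looming is weak (tau = 5 s) while the disparity cue
# signals an earlier arrival (tau = 2 s), so disparity drives timing.
print(combined_tau(theta=0.02, theta_dot=0.004,
                   delta=0.001, delta_dot=0.0005))
```

Taking the minimum is what produces the "implicit switch" described above: no explicit cue-selection stage is needed, because the earlier-arrival cue always dominates the response.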
The short-term effects on binocular stability of wearing a conventional head-mounted display (HMD) to explore a virtual reality environment were examined. Twenty adult subjects (aged 19-29 years) wore a commercially available HMD for 10 min while cycling around a computer-generated 3-D world. The twin-screen presentations were set to suit the average interpupillary distance of our subject population, to mimic the conditions of public-access virtual reality systems. Subjects were examined before and after exposure to the HMD and there were clear signs of induced binocular stress for a number of the subjects. The implications of introducing such HMDs into the workplace and entertainment environments are discussed.

A recent development in graphical user interfaces for human-computer interaction has been the emergence of virtual reality (VR) systems. The central principle of such systems is that the user should be able to interact with a computer by using gaze and manipulative gestures within a 3-D computer environment. At the simplest level this can be achieved through the use of a peripheral input device (e.g. a computer mouse) to "move" through a 3-D environment that is pictorially represented on a conventional computer screen. A more innovative and popular approach is to embed the user within the 3-D VR by using a head-mounted display (HMD). In addition to the pictorial depth cues that are presented on a screen representation, the HMD attempts to simulate binocularly overlapped images so that the fusion of disparate images can create the illusion of a three-dimensional world. The advantages of the HMD, however, go beyond the provision of stereoscopic depth cues. A six-degree-of-freedom tracking device is normally mounted on the HMD so that, as the user moves his or her head, new visual perspectives can be displayed and the user can scan through 360° of, or walk through, this new computer world.
The HMD can provide the user with an impressive sense of presence and the ability to interact in a more natural way within the VR environment.

Specifications of a typical head-mounted display. One of the most commonly used HMD systems is the VPL Eyephone LX (Redwood City, CA, USA). This uses an adjustable headband to mount a 3-inch LCD screen in front of each eye. These are viewed through a +36 D compound lens, created by the use of two +18 D Fresnel lenses, and the LCD screens are placed close to the focal length of this lens. Because of the low resolution of such LCD screens (360 × 240 primary-colour pixels, equivalent to 208 × 139 RGB triads) a semi-opaque lens is placed between the lens and the screen to spatially filter the image. The Eyephone LX does not allow the user to adjust the lateral distance between the eyepieces, which is fixed at 65 mm between the optical centres of the two lenses. The viewer can adjust the distance from the eyes to the lenses but the distance is invariably close, usually in the order of 2 cm. Precise mounting of ...

Correspondence to: VR project, Department of Psychology, University of Edinburgh, Edinburgh, UK.
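The optical arrangement described above, with the screens placed close to the focal length of a +36 D lens, determines the accommodation demand placed on the viewer. A small thin-lens sketch shows why the placement matters (the millimetre offset below is an illustrative assumption, not a measured value for the Eyephone LX): with the screen exactly at the focal length, the virtual image sits at optical infinity, while a small mounting error pushes it to a finite distance and introduces an accommodation demand that the vergence system, driven by the fixed 65 mm eyepiece separation, need not match.

```python
def image_vergence(screen_dist_m, lens_power_d):
    """Thin-lens vergence equation L' = L + F: the object vergence of a
    screen at screen_dist_m in front of the lens, plus the lens power
    in dioptres. The result is the vergence of the virtual image."""
    return -1.0 / screen_dist_m + lens_power_d

# Screens at exactly the focal length of a +36 D lens (1/36 m, about
# 27.8 mm): image vergence ~0 D, i.e. the image is at optical infinity.
print(image_vergence(1 / 36, 36.0))

# A hypothetical 1 mm mounting error brings the virtual image to a
# finite distance, creating a nonzero accommodation demand.
print(image_vergence(1 / 36 - 0.001, 36.0))
```

This is one way to see the accommodation-vergence cross-link problem reviewed above: the stereoscopic content drives vergence to various simulated distances while the accommodation demand stays pinned near a single value set by the optics.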
A moving observer needs to be able to estimate the trajectory of other objects moving in the scene. Without the ability to do so, it would be difficult to avoid obstacles or catch a ball. We hypothesized that neural mechanisms sensitive to the patterns of motion generated on the retina during self-movement (optic flow) play a key role in this process, "parsing" motion due to self-movement from that due to object movement. We investigated this "flow parsing" hypothesis by measuring the perceived trajectory of a moving probe placed within a flow field that was consistent with movement of the observer. In the first experiment, the flow field was consistent with an eye rotation; in the second experiment, it was consistent with a lateral translation of the eyes. We manipulated the distance of the probe in both experiments and assessed the consequences. As predicted by the flow parsing hypothesis, manipulating the distance of the probe had differing effects on the perceived trajectory of the probe in the two experiments. The results were consistent with the scene geometry and the type of simulated self-movement. In a third experiment, we explored the contribution of local and global motion processing to the results of the first two experiments. The data suggest that the parsing process involves global motion processing, not just local motion contrast. The findings of this study support a role for optic flow processing in the perception of object movement during self-movement.
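The depth manipulation in the first two experiments works because translational and rotational flow depend differently on distance. A minimal sketch (first-order approximations for illustration, not the experimental stimuli): flow from a lateral translation scales as T/Z, so it halves when the probe's distance doubles, whereas flow from an eye rotation is, to first order, independent of depth.

```python
def lateral_translation_flow(T, Z):
    """Retinal flow speed from a lateral eye translation at speed T:
    v = T / Z, inversely proportional to the depth Z of the point."""
    return T / Z

def rotation_flow(omega):
    """To first approximation, flow from an eye rotation at angular
    velocity omega (rad/s) is the same at all depths: v = omega."""
    return omega

# Doubling the probe's distance halves the component that should be
# discounted under translation, but leaves it unchanged under rotation.
print(lateral_translation_flow(0.1, 1.0), lateral_translation_flow(0.1, 2.0))
print(rotation_flow(0.05), rotation_flow(0.05))
```

A parsing mechanism that respects scene geometry should therefore subtract a depth-scaled component in the translation condition but a depth-invariant one in the rotation condition, which is the signature the perceived-trajectory data show.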