In the real world, people are quite accurate in judging distances to locations in the environment, at least for targets resting on the ground plane at distances out to about 20 m. Distance judgments in visually immersive environments are much less accurate: several studies have now shown that in such environments the world appears significantly smaller than intended. This study investigates whether the compression of apparent distances results from the low-quality computer graphics used in previous investigations. Visually directed triangulated walking was used to assess distance judgments in the real world and in three virtual environments with graphical renderings of varying quality.
We carried out three experiments to examine the influence of field-of-view and binocular viewing restrictions on absolute distance perception in real-world indoor environments. Few of the classical visual cues provide direct information for accurate absolute distance judgments to points in the environment beyond about 2 m from the viewer. Nevertheless, previous work has found that visually directed walking tasks reveal accurate distance estimations in full-cue, real-world environments at distances up to 20 m. In contrast, the same tasks in virtual environments presented with head-mounted displays (HMDs) show large compression of distance. Field of view and binocular viewing are common limitations in research with HMDs, yet they have rarely been studied under full pictorial-cue conditions in the context of distance perception in the real world. Experiment 1 showed that a view of one's body and feet on the floor was not necessary for accurate distance perception. In Experiment 2 we manipulated the horizontal and vertical field of view along with head rotation and found that a restricted field of view did not affect the accuracy of distance estimations when head movement was allowed. Experiment 3 showed that performance with monocular viewing was equal to that with binocular viewing. These results have implications for the information needed to scale egocentric distance in the real world, and they reduce support for the hypothesis that a limited field of view or imperfections in binocular image presentation cause the underestimation seen with HMDs.
Research has shown that people are able to judge distances accurately in full-cue, real-world environments using visually directed actions. However, in virtual environments viewed with head-mounted display (HMD) systems, there is evidence that people act as though the virtual space is smaller than intended. This is a surprising result given how well people act in real environments. The behavior in the virtual setting may be linked to distortions in the available visual cues or to a person's ability to locomote without vision. Either could result from the added mass, moments of inertia, and restricted field of view of HMDs. This article describes an experiment in which distance judgments based on normal real-world and HMD viewing are compared with judgments based on real-world viewing while wearing two specialized devices. One is a mock HMD, which replicated the mass, moments of inertia, and field of view of the HMD; the other is an inertial headband designed to replicate the mass and moments of inertia of the HMD but constructed not to restrict the observer's field of view or otherwise feel like wearing a helmet. Distance judgments using the mock HMD showed a statistically significant underestimation relative to the no-restriction condition, but not of a magnitude sufficient to account for all the distance compression seen in the HMD. Indicated distances with the inertial headband were not significantly smaller than those made with no restrictions.
Several studies from different research groups investigating perception of absolute, egocentric distances in virtual environments have reported a compression of the intended size of the virtual space. One potential explanation for the compression is that inaccuracies and cue conflicts involving stereo viewing conditions in head-mounted displays result in inaccurate absolute scaling of the virtual world. We manipulate stereo viewing conditions in a head-mounted display and show the effects of using both measured and fixed interpupillary distances, as well as bi-ocular and monocular viewing of graphics, on absolute distance judgments. Our results indicate that the amount of compression of distance judgments is unaffected by these manipulations. The equivalent performance with stereo, bi-ocular, and monocular viewing suggests that the limitations on the presentation of stereo imagery that are inherent in head-mounted displays are likely not the source of the distance compression reported in previous virtual environment studies.
For humans to interact effectively with their environment, the visual system must determine the absolute size and distance of objects. Previous experiments performed in full-cue, real-world environments have demonstrated that blind walking to targets serves as an accurate indication of distance perception, up to about 25 m. In contrast, the same task performed in virtual environments (VEs) using head-mounted displays shows significant underestimation in walked distance. To date, blind walking is the only visually directed action task that has been used to evaluate distance perception in VEs beyond reaching distances. The possible influence of the response measure itself on absolute distance perception in virtual environments is currently an open question. Blind walking involves locomotion and the egocentric updating of the environment with one's own movement. We compared this measure to blind throwing, a task that involves the initiation of a movement directed by vision but no further interaction with the environment. Both throwing and walking were compressed in the VE but accurate in the real world. We suggest that the distance compression found in VEs may have a general perceptual origin rather than being specific to the response measure.