The sense of presence, defined as the subjective feeling of being situated in an environment and occupying a location therein, is a defining feature of virtual environments. In two experiments, we investigated the relative contributions of motion parallax and stereopsis to the sense of presence, using two versions of the classic pit-room paradigm in virtual reality. In Experiment 1, participants were asked to cross a deep abyss between two platforms on a narrow plank. Participants completed the task under three experimental conditions: (1) with the lateral component of motion parallax disabled, (2) with stereopsis disabled, and (3) with both stereopsis and motion parallax available. As a subjective measure of presence, participants completed a presence questionnaire after each condition. Additionally, electrodermal activity (EDA) was recorded as a measure of anxiety. In Experiment 1, EDA responses were significantly higher with restricted motion parallax than in the other two conditions; however, subjective presence scores did not differ across the three conditions. To test whether these results were due to the nature of the environment, participants in Experiment 2 experienced a slightly less stressful environment, in which they were asked to stand on a ledge and drop virtual balls onto specified targets in the abyss. The same experimental manipulations were used as in Experiment 1. Again, EDA responses were significantly higher when motion parallax was impaired than when stereopsis was disabled. The presence questionnaire revealed a reduced sense of presence with impaired motion parallax compared to the normal viewing condition. Across the two experiments, our results unexpectedly demonstrate that presence in virtual environments is not necessarily linked to the EDA responses elicited by affective situations, as earlier studies have implied.
An essential difference between pictorial space, displayed in paintings, photographs, or on computer screens, and the visual space experienced in the real world is that only in the latter does the observer have a defined location, and thus valid information about the distance and direction of objects. Egocentric information should therefore be more reliable in visual space, whereas allocentric information should be more reliable in pictorial space. The majority of studies of allocentric coding have relied on pictorial representations (images on a computer screen), leaving it unclear whether the same coding mechanisms apply in visual space. Using a memory-guided reaching task in virtual reality, we investigated allocentric coding in both visual space (objects on a table in virtual reality) and pictorial space (objects on a monitor standing on that table in virtual reality). Our results suggest that the brain uses allocentric information to represent objects in both pictorial and visual space. Contrary to our hypothesis, the influence of allocentric cues was stronger in visual space than in pictorial space, even after controlling for retinal stimulus size, confounding allocentric cues, and differences in presentation depth. We discuss possible reasons for stronger allocentric coding in visual than in pictorial space.