Virtual reality systems provide a convenient means of studying human cognition and performance on a wide range of tasks for which real-world testing is cost-prohibitive or difficult to control. For the results of such studies to be valid, it is important to ensure that aspects of the virtual experience do not alter a participant's behavior or performance on the experimental tasks. This is particularly difficult when using novel locomotion interfaces that require training. Training should not be considered complete until movement tasks can be performed at a high level of ability and no longer interfere with concurrent cognitive tasks. A study is described in which subjects were trained to locomote in the Virtusphere, an interface resembling a "human-sized hamster ball." The effectiveness of training is discussed in terms of both movement ability and performance on a concurrent cognitive task. Movement performance was tracked as subjects learned to travel through a virtual environment while simultaneously completing cognitive tasks in a dual-task selective-interference paradigm. Results showed very rapid improvement on movement measures, including distance traveled and the ratio of collisions to distance traveled, with gains becoming gradual within a few minutes. However, the results also revealed persistent interference with concurrent spatial memory tasks, indicating that training is not complete when performance on the movement metrics levels off.
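For concreteness, here is a minimal sketch of how the reported movement measure (collisions per unit distance) might be computed per trial and how a plateau in the learning curve might be flagged. The function names, the sliding window, and the 5% tolerance are hypothetical illustrations, not the study's actual criterion.

```python
def collisions_per_meter(collisions, distance_traveled):
    """Movement metric from the study: ratio of collisions to distance traveled."""
    return collisions / distance_traveled if distance_traveled > 0 else float("inf")


def plateau_trial(metric_by_trial, window=3, tolerance=0.05):
    """Return the index of the first trial whose relative improvement over the
    trial `window` steps earlier falls below `tolerance`, i.e., the point where
    improvement has become gradual (a hypothetical plateau criterion)."""
    for i in range(window, len(metric_by_trial)):
        prev, cur = metric_by_trial[i - window], metric_by_trial[i]
        if prev > 0 and (prev - cur) / prev < tolerance:
            return i
    return None
```

For example, `plateau_trial([0.8, 0.4, 0.3, 0.3, 0.29, 0.29, 0.29])` returns 5, marking the trial at which the collision rate has essentially stopped improving. The study's central finding is that such a plateau on movement metrics would not, by itself, indicate that training is complete.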
We conducted behavioral experiments on visual, auditory, and motor contributions to the human representation of space in virtual reality environments using an ‘impossible-worlds paradigm’. The experiments were run with an omnidirectional locomotion input device, the ‘Virtusphere’, a rotatable 10-foot hollow sphere that allows a subject inside to walk in any direction for any distance while immersed in a virtual environment. Both the rotation of the sphere and the movement of the subject’s head were tracked to compute the subject’s view within the virtual environment presented on a head-mounted display. Auditory features were processed dynamically so that sound sources remained exactly aligned with visual objects. Using this setup, subjects were presented with ‘impossible worlds’, i.e., virtual environments with geometric and topological properties that are physically impossible. In previous experiments we showed that subjects are able to navigate these impossible worlds (Zetzsche et al., 2009), even though the different interpretations of their spatial structure are in conflict: no single physically plausible interpretation accounts for all of the subjects’ sensory perceptions. In the present study we manipulated these physically ‘impossible’ properties in either the visual or the auditory domain (so that each modality supported one of the possible interpretations) and assessed how these manipulations affected the subjects’ internal representations of space. We discuss our results with respect to auditory, visual, and motor contributions to the internal spatial representation, the interaction of modalities, and the implications for the notion of motor action as a linking element between the senses.
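As a rough illustration of the tracking pipeline described above, the sketch below shows one plausible way tracked sphere rotation and head orientation could be combined into a camera pose for the head-mounted display. The radius value, axis conventions, and function names are assumptions for illustration, not details from the paper.

```python
import numpy as np

SPHERE_RADIUS = 1.524  # assumed: 10-foot-diameter sphere -> radius of about 1.52 m


def integrate_walking(position, omega_x, omega_y, dt):
    """Translate tracked sphere rotation into virtual locomotion: a sphere
    rotating in place with angular speed w carries the walker inside at
    linear speed v = w * r. The axis signs are an assumed convention."""
    position = position.copy()
    position[0] += omega_y * SPHERE_RADIUS * dt  # rotation about y -> motion along x
    position[1] -= omega_x * SPHERE_RADIUS * dt  # rotation about x -> motion along -y
    return position


def camera_pose(position, head_yaw, head_pitch, eye_height=1.7):
    """Combine the sphere-derived position with tracked head orientation to
    produce the eye point and view direction rendered on the HMD."""
    forward = np.array([
        np.cos(head_pitch) * np.cos(head_yaw),
        np.cos(head_pitch) * np.sin(head_yaw),
        np.sin(head_pitch),
    ])
    eye = np.array([position[0], position[1], eye_height])
    return eye, forward
```

In a setup like the one described, the same pose would presumably also drive the dynamic audio processing, so that sound sources stay registered to their visual objects as the listener moves and turns.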