The intent of this paper is to provide an introduction to the burgeoning field of eye tracking in Virtual Reality (VR). VR itself is an emerging technology on the consumer market that will create many new opportunities in research. It offers a lab environment with high immersion and close alignment with reality. An experiment using VR takes place in a highly controlled environment and allows more detailed information to be gathered about a subject's actions. Eye tracking was introduced more than a century ago and is now an established method in psychological experiments, and recent developments have made it more versatile and affordable. In combination, these two techniques allow unprecedented monitoring and control of human behavior in semi-realistic conditions. This paper explores the methods and tools that can be applied when implementing experiments using eye tracking in VR, following the example of one case study. Alongside the technical descriptions, we present research that demonstrates the effectiveness of the technology and shows what kind of results can be obtained when using eye tracking in VR. It is meant to guide the reader through the process of bringing VR in combination with eye tracking into the lab and to inspire ideas for new experiments.
Highlights
- We investigate learning of sensorimotor contingencies by sensory augmentation.
- The sensory device maps information about magnetic north to vibrotactile stimulation.
- Active training with the device leads to marked changes in the perception of space.
- The device facilitates navigation and alters navigational strategies.
- The device gives subjects a strong feeling of security and of "never get lost".
Theories of embodied cognition propose that perception is shaped by sensory stimuli and by the actions of the organism. Following sensorimotor contingency theory, the mastery of lawful relations between one's own behavior and the resulting changes in sensory signals, called sensorimotor contingencies, is constitutive of conscious perception. Sensorimotor contingency theory predicts that, after training, knowledge relating to new sensorimotor contingencies develops, leading to changes in the activation of sensorimotor systems and concomitant changes in perception. In the present study, we spell out this hypothesis in detail and investigate whether it is possible to learn new sensorimotor contingencies by sensory augmentation. Specifically, we designed an fMRI-compatible sensory augmentation device, the feelSpace belt, which gives orientation information about the direction of magnetic north via vibrotactile stimulation on the waist of participants. In a longitudinal study, participants trained with this belt for seven weeks in their natural environment. Our EEG results indicate that training with the belt leads to changes in sleep architecture early in the training phase, compatible with the consolidation of procedural learning as well as increased sensorimotor processing and motor programming. The fMRI results suggest that training entails activity in sensory as well as higher motor centers and brain areas known to be involved in navigation. These neural changes are accompanied by changes in how space and the belt signal are perceived, as well as by increased trust in navigational ability. Thus, our data on physiological processes and subjective experiences are compatible with the hypothesis that new sensorimotor contingencies can be acquired using sensory augmentation.
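The belt described above is custom hardware, and its firmware is not detailed here. As a rough illustration of the underlying mapping only, the following Python sketch assumes a belt with a number of evenly spaced vibration motors (here 16, motor 0 at the wearer's front, numbered clockwise) and selects which motor should vibrate so that the stimulation always points toward magnetic north; it is not the feelSpace belt's actual implementation.

```python
# Hypothetical sketch: map a compass heading to one of N vibrotactile
# motors spaced evenly around the waist, so that the motor currently
# facing magnetic north vibrates. Illustration of the general idea only,
# not the feelSpace belt's firmware or API.

def motor_for_north(heading_deg: float, n_motors: int = 16) -> int:
    """Return the index of the motor that currently faces magnetic north.

    heading_deg: the wearer's heading in degrees (0 = facing north),
                 e.g. from a magnetometer/compass sensor.
    n_motors:    number of motors evenly spaced around the belt,
                 motor 0 at the front, numbered clockwise.
    """
    # Relative to the wearer's front, north lies at (-heading) degrees
    # (measured clockwise, like a compass bearing).
    north_relative = (-heading_deg) % 360.0
    # Quantize the angle to the nearest motor position.
    step = 360.0 / n_motors
    return int(round(north_relative / step)) % n_motors


if __name__ == "__main__":
    # Example: the wearer faces east (heading 90 degrees); with 16 motors
    # the motor a quarter turn to the wearer's left (index 12) vibrates.
    print(motor_for_north(90.0))  # -> 12
```

In such a design the update loop would simply re-run this mapping whenever the compass reading changes, so the vibrating point stays world-fixed while the wearer turns.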
Many eye-tracking studies investigate visual behavior with a focus on image features and the semantic content of a scene. A wealth of results on these aspects is available, and our understanding of the decision process of where to look has reached a mature stage. However, the temporal aspect, whether to stay and further scrutinize a region (exploitation) or to move on and explore image regions not yet in the focus of attention (exploration), is less well understood. Here, we investigate the trade-off between these two processes across stimuli with varying properties and sizes. In a free viewing task, we examined gaze parameters in humans, including the central tendency, entropy, saccadic amplitudes, number of fixations, and duration of fixations. The results revealed that the central tendency and entropy scaled with stimulus size. The mean saccadic amplitudes showed a linear increase that originated from an interaction between the distribution of saccades and the spatial bias. Further, larger images led to spatially more extensive sampling, as indicated by a higher number of fixations at the expense of reduced fixation durations. These results demonstrate a profound shift from exploitation to exploration as an adaptation of main gaze parameters with increasing image size.
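Two of the gaze parameters named above, the entropy of the spatial fixation distribution and the mean saccadic amplitude, are standard summary statistics that can be computed directly from fixation positions. The sketch below is an illustrative Python/NumPy implementation under assumed conventions (fixation positions in degrees of visual angle, a 16x16 histogram for the entropy estimate); it is not the authors' analysis code.

```python
# Illustrative sketch (not the authors' analysis pipeline): compute the
# Shannon entropy of the spatial fixation distribution and the mean
# saccadic amplitude from an array of fixation positions.
import numpy as np


def fixation_entropy(fixations: np.ndarray, bins: int = 16) -> float:
    """Shannon entropy (bits) of a 2D histogram of fixation positions.

    fixations: array of shape (n, 2) with (x, y) positions in degrees.
    bins:      histogram bins per axis (an assumed analysis choice).
    """
    hist, _, _ = np.histogram2d(fixations[:, 0], fixations[:, 1], bins=bins)
    p = hist.ravel() / hist.sum()
    p = p[p > 0]  # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())


def mean_saccadic_amplitude(fixations: np.ndarray) -> float:
    """Mean Euclidean distance between consecutive fixations (degrees)."""
    steps = np.diff(fixations, axis=0)
    return float(np.linalg.norm(steps, axis=1).mean())


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fix = rng.uniform(0, 30, size=(200, 2))  # toy data on a 30 x 30 degree image
    print(fixation_entropy(fix), mean_saccadic_amplitude(fix))
```

With such metrics, the scaling reported above would correspond to larger images yielding both higher entropy values and larger mean saccadic amplitudes.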
Theories of Enactivism propose an action-oriented approach to understanding human cognition. So far, however, empirical evidence supporting these theories has been sparse. Here, we investigate whether spatial navigation based on allocentric reference frames that are independent of the observer's physical body can be understood within an action-oriented approach. To this end, we performed three experiments testing, in a pointing task, knowledge of the absolute orientation of houses and streets towards north, the relative orientation of two houses and two streets, respectively, and the location of houses relative to each other. Our results demonstrate that under time pressure, the relative orientation of two houses can be retrieved more accurately than the absolute orientation of single houses. With unlimited time for cognitive reasoning, performance with house stimuli increased greatly for absolute orientation and surpassed the slightly improved performance in the relative orientation task. In contrast, with streets as stimuli, participants performed better in the absolute orientation task under time pressure. Overall, pointing from one house to another house yielded the best performance. This suggests, first, that orientation and location information about houses is primarily coded in house-to-house relations, whereas cardinal information is deduced via cognitive reasoning. Second, orientation information for streets is preferentially coded in absolute orientations. Thus, our results suggest that spatial information about house and street orientation is coded differently and that house orientation and location are primarily learned in an action-oriented way, which is in line with an enactive framework for human cognition.