The intent of this paper is to provide an introduction to the burgeoning field of eye tracking in Virtual Reality (VR). VR itself is an emerging technology on the consumer market that will create many new opportunities in research. It offers a lab environment with high immersion and close alignment with reality. An experiment using VR takes place in a highly controlled environment and allows richer information to be gathered about the actions of a subject. Eye tracking was introduced more than a century ago and is now an established technique in psychological experiments, and recent developments have made it versatile and affordable. In combination, these two technologies allow unprecedented monitoring and control of human behavior in semi-realistic conditions. This paper explores the methods and tools that can be applied when implementing experiments using eye tracking in VR, illustrated by a case study. Accompanying the technical descriptions, we present research that demonstrates the effectiveness of the technology and shows what kind of results can be obtained when using eye tracking in VR. The paper is meant to guide the reader through the process of bringing the combination of VR and eye tracking into the lab and to inspire ideas for new experiments.
To become acquainted with large-scale environments such as cities, people combine direct experience and indirect sources such as maps. Which type of spatial knowledge is acquired from which source is difficult to ascertain. Virtual reality makes it possible to investigate how knowledge learned by direct experience differs from knowledge learned from a map. Therefore, we designed a large virtual city comprising over 200 houses and evaluated spatial knowledge acquisition after exploring the city with an interactive map, following one and three 30-min exploration sessions. We tested subjects' knowledge of house orientations relative to cardinal north, of house orientations relative to each other, and of pointing directions from one house to another. Our results revealed that increased familiarity after extended exploration with the map improved task accuracy. Further, they revealed task differences, caused mainly by higher accuracy in the relative-orientation task than in the pointing task. Time for cognitive reasoning improved overall task accuracy. Learning with our VR city map revealed no distance effect, an alignment effect of tested house orientations toward map north, and an angular-difference effect between tested stimuli. Self-reported knowledge of cardinal directions learned in the real environment was positively correlated with accuracy in the task testing house orientations toward cardinal north. Overall, our results suggest that participants learned spatial information that is directly available in the interactive map, while a spatial task that required integration of learned knowledge remained at lower accuracy levels.
Investigating spatial knowledge acquisition in virtual environments allows different sources of information to be studied under controlled conditions. Therefore, we built a virtual environment in the style of a European village and investigated spatial knowledge acquisition by direct experience in the immersive virtual environment, comparing it to using an interactive map of the same environment. The environment was well explored, with both exploration sources covering the whole village area. We tested knowledge of cardinal directions, building-to-building orientation, and judgment of direction between buildings in a pointing task. Judgments of direction were more accurate after exploration of the virtual environment than after map exploration. The opposite result was observed for knowledge of cardinal directions and relative orientation between buildings. Time for cognitive reasoning improved task accuracy after both exploration sources. Further, an alignment effect toward the north was visible only after map exploration. Taken together, our results suggest that the source of spatial exploration differentially influenced spatial knowledge acquisition.
Abstract. Snow-layer segmentation and classification is an essential diagnostic task for a wide variety of cryospheric applications. The SnowMicroPen (SMP) measures the snowpack's penetration force as a function of snow depth at submillimetre resolution. The resulting depth-force profile can be parameterized for density and specific surface area. However, no information on traditional snow types is currently extracted automatically. Labeling snow types is a time-intensive task that requires practice and becomes infeasible for large datasets. Previous work showed that automated segmentation and classification is possible in theory, but either cannot be applied to data straight from the field or requires additional, time-costly information, such as from classified snow pits. To address this gap, we evaluate how well machine learning models can automatically segment and classify SMP profiles. We trained fourteen different models, among them semi-supervised models and artificial neural networks (ANNs), on the MOSAiC SMP dataset, a large collection of snow profiles on Arctic sea ice. We found that SMP profiles can be successfully segmented and classified into snow classes based solely on the SMP's signal. The model comparison provided in this study enables practitioners to choose a model that is suitable for their task and dataset. The findings presented will facilitate and accelerate snow type identification from SMP profiles. Overall, the resulting tool, snowdragon, links traditional snow classification to high-resolution force-depth profiles. With such a tool, traditional snow profile observations can be compared to SMP profiles.
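The segment-then-classify idea behind this pipeline can be illustrated with a deliberately minimal sketch: split a depth-force profile wherever the force jumps sharply between consecutive samples, then assign each segment the label of the nearest class centroid by mean force. The threshold, centroid values, and snow-class labels below are hypothetical illustrations, not the snowdragon models or the MOSAiC data.

```python
# Minimal sketch of layer segmentation and nearest-centroid classification
# on a force-depth profile. All numbers and labels are illustrative only.

def segment_profile(forces, threshold=2.0):
    """Return segment boundaries: a new layer starts wherever the force
    changes by more than `threshold` between consecutive samples."""
    boundaries = [0]
    for i in range(1, len(forces)):
        if abs(forces[i] - forces[i - 1]) > threshold:
            boundaries.append(i)
    boundaries.append(len(forces))
    return boundaries

def classify_segment(segment, centroids):
    """Label a segment with the class whose mean-force centroid is closest."""
    mean_force = sum(segment) / len(segment)
    return min(centroids, key=lambda label: abs(centroids[label] - mean_force))

# Toy profile: three layers of increasing hardness (force in arbitrary units).
profile = [0.5, 0.6, 0.5, 5.0, 5.2, 5.1, 12.0, 11.8]
centroids = {"new snow": 0.5, "rounded grains": 5.0, "melt crust": 12.0}

bounds = segment_profile(profile)
segments = [profile[bounds[i]:bounds[i + 1]] for i in range(len(bounds) - 1)]
labels = [classify_segment(seg, centroids) for seg in segments]
print(bounds)   # [0, 3, 6, 8]
print(labels)   # ['new snow', 'rounded grains', 'melt crust']
```

A real system would replace the jump threshold with a learned segmentation and the centroid rule with one of the trained classifiers compared in the study, operating on richer features than mean force alone.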