An increasing number of head-mounted displays (HMDs) for virtual and augmented reality (VR/AR) are equipped with integrated eye trackers. Use cases of these integrated eye trackers include rendering optimization and gaze-based user interaction. In addition, visual attention in VR and AR is of interest for applied eye-tracking research, for example in the cognitive or educational sciences. While some eye-tracking research toolkits for VR already exist, only a few target AR scenarios. In this work, we present an open-source eye-tracking toolkit for reliable gaze data acquisition in AR based on Unity 3D and the Microsoft HoloLens 2, as well as an R package for seamless data analysis. Furthermore, we evaluate the spatial accuracy and precision of the integrated eye tracker for fixation targets at different distances and angles to the user (n=21). On average, we found that gaze estimates are reported with an angular accuracy of 0.83 degrees and a precision of 0.27 degrees while the user is at rest, which is on par with state-of-the-art mobile eye trackers.
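The angular accuracy and precision metrics reported above can be computed from recorded gaze direction vectors. The following is a minimal sketch, not the toolkit's actual API: function names are hypothetical, and precision is computed here as the RMS of sample-to-sample angular deviations, one of several common definitions.

```python
import numpy as np

def angular_error_deg(gaze_dirs, target_dir):
    """Angle in degrees between each gaze direction and a target direction."""
    gaze = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    target = target_dir / np.linalg.norm(target_dir)
    cos = np.clip(gaze @ target, -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_and_precision(gaze_dirs, target_dir):
    """Accuracy: mean angular offset from the fixation target.
    Precision (illustrative definition): RMS of the angular deviation
    between consecutive gaze samples."""
    errors = angular_error_deg(gaze_dirs, target_dir)
    accuracy = errors.mean()
    diffs = np.array([angular_error_deg(gaze_dirs[i + 1:i + 2], gaze_dirs[i])[0]
                      for i in range(len(gaze_dirs) - 1)])
    precision = np.sqrt((diffs ** 2).mean())
    return accuracy, precision
```

In practice, such metrics are computed per fixation target and then averaged over targets and participants.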
Remote eye tracking has become an important tool for the online analysis of learning processes. Mobile eye trackers can even extend the range of opportunities (in comparison to stationary eye trackers) to real settings, such as classrooms or experimental lab courses. However, the complex and sometimes manual analysis of mobile eye-tracking data often hinders the realization of extensive studies, as it is a very time-consuming process and usually not feasible for real-world situations in which participants move or manipulate objects. In this work, we explore the opportunities to use object recognition models to assign mobile eye-tracking data to real objects during an authentic student lab course. In a comparison of three different Convolutional Neural Networks (CNNs), a Faster Region-based CNN (Faster R-CNN), You Only Look Once (YOLO) v3, and YOLO v4, we found that YOLO v4, together with an optical flow estimation, provides the fastest results with the highest accuracy for object detection in this setting. The automatic assignment of the gaze data to real objects simplifies the time-consuming analysis of mobile eye-tracking data and offers an opportunity for real-time system responses to the user’s gaze. Additionally, we identify and discuss several problems in using object detection for mobile eye-tracking data that need to be considered.
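The core of such an automatic assignment is mapping each gaze point in the scene-camera frame to a detected object's bounding box. A minimal sketch of this step follows; the function name and detection format are illustrative assumptions, not the authors' implementation:

```python
def assign_gaze_to_objects(gaze_xy, detections):
    """Map a 2D gaze point (scene-camera pixel coordinates) to the detected
    object it falls inside.

    detections: list of dicts {'label': str, 'box': (x1, y1, x2, y2), 'score': float},
    e.g. the output of a YOLO-style detector after non-maximum suppression.
    Returns the label of the highest-scoring box containing the gaze point,
    or None if the gaze hits no detected object.
    """
    gx, gy = gaze_xy
    hits = [d for d in detections
            if d['box'][0] <= gx <= d['box'][2]
            and d['box'][1] <= gy <= d['box'][3]]
    if not hits:
        return None
    return max(hits, key=lambda d: d['score'])['label']
```

Optical flow can be used between detector runs to propagate boxes across frames, so the (comparatively expensive) detector does not have to run on every video frame.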
Experiments are of central importance for the natural sciences in general and for science education in particular, but the learning gains that teachers expect from them often fall short of expectations. This is especially true for student experiments, which are often conducted in dyads or small groups. In such a collaborative form of experimentation, the successful execution of the experiment, and thus the achievement of the goals of the learning activity, also depends on the cooperation of the students; a lack of learning success can therefore also be caused by insufficient collaboration. In this study, mobile eye trackers were used with N = 40 students to record gaze behavior during collaborative experimentation in the context of geometrical optics, in order to investigate the influence of Joint Visual Attention (JVA) on learning success during experimentation. A significant relationship between JVA and learning gains was found for the setup phase of the experiment. The results show that a successful collaboration between the experiment partners is particularly important during the setup of the experiment for its later successful execution. Support measures in this phase, such as the targeted directing of both partners' attention, could therefore lead to an increase in learning gains.
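JVA is commonly operationalized as the proportion of time (or fixations) during which both partners attend to the same area of interest within a short temporal lag. A minimal sketch of such a measure follows; the function name, fixation format, and lag value are illustrative assumptions, not the study's exact operationalization:

```python
def joint_visual_attention(fixations_a, fixations_b, max_lag=2.0):
    """Fraction of partner A's fixations that are 'joint': partner B fixates
    the same area of interest (AOI) within max_lag seconds.

    fixations_a, fixations_b: lists of (timestamp_seconds, aoi_label) tuples.
    """
    if not fixations_a:
        return 0.0
    joint = 0
    for t_a, aoi_a in fixations_a:
        if any(aoi_b == aoi_a and abs(t_b - t_a) <= max_lag
               for t_b, aoi_b in fixations_b):
            joint += 1
    return joint / len(fixations_a)
```

Such a score can then be computed separately per experiment phase (e.g., setup vs. execution) and correlated with learning gains.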
Learning through embodiment is a promising concept, potentially capable of removing many layers of abstraction that hinder the learning process. Walk the Graph, our HoloLens 2-based AR application, provides an inquiry-based learning setting for understanding graphs through the full-body movement of the user. In this paper, as part of our ongoing work to build an AI framework to quantify and predict the learning gain of the user, we examine the predictive potential of gaze data collected during app usage. To classify users into groups with different learning gains, we construct a map of areas of interest (AOIs) based on the gaze data itself. Subsequently, using a sliding-window approach, we extract engineered features from the collected in-app as well as gaze data. Our experimental results show that a Support Vector Machine with selected features achieved the highest F1 score (0.658; baseline: 0.251) compared to other approaches, including a k-Nearest Neighbors and a Random Forest classifier, although in each case the lion's share of the predictive power is provided by the gaze-based features.
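The sliding-window feature extraction described above can be sketched as follows; the window length, step size, and feature set here are illustrative assumptions, not the paper's engineered features:

```python
import numpy as np

def sliding_window_features(gaze_x, gaze_y, window=120, step=60):
    """Extract simple per-window gaze features from x/y gaze coordinate streams:
    mean and standard deviation of each coordinate, plus the mean
    sample-to-sample dispersion (a rough proxy for gaze movement).

    Returns an array of shape (n_windows, 5), one feature row per window.
    """
    feats = []
    for start in range(0, len(gaze_x) - window + 1, step):
        x = np.asarray(gaze_x[start:start + window], dtype=float)
        y = np.asarray(gaze_y[start:start + window], dtype=float)
        dispersion = np.hypot(np.diff(x), np.diff(y)).mean()
        feats.append([x.mean(), x.std(), y.mean(), y.std(), dispersion])
    return np.array(feats)
```

Per-window feature rows of this kind, labeled with the user's learning-gain group, can then be fed to a classifier such as scikit-learn's `SVC`.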