Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant's head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil Labs' Pupil in 3D mode, and (iv) Pupil Labs' Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial-expression tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
Keywords: Head-mounted eye tracking • Wearable eye tracking • Mobile eye tracking • Eye movements • Natural behavior • Data quality
Diederick C. Niehorster and Thiago Santini contributed equally and should be considered co-first authors.
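The reported "increase in gaze deviation over baseline" refers to the angular difference between the measured gaze direction and the direction of a known fixation target. As a minimal sketch of that computation only (not the exact procedure used in the study; the function and toy values are illustrative), the angle between two 3D direction vectors can be computed as follows:

```python
import numpy as np

def angular_deviation_deg(gaze_dirs, target_dirs):
    """Per-sample angle (degrees) between measured gaze directions and the
    directions of known fixation targets. Both inputs are (N, 3) vectors
    expressed in the same head-fixed coordinate frame."""
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    t = target_dirs / np.linalg.norm(target_dirs, axis=1, keepdims=True)
    cos_angle = np.clip(np.sum(g * t, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle))

# Toy example: a gaze sample 1 degree off a straight-ahead target.
target = np.array([[0.0, 0.0, 1.0]])
gaze = np.array([[np.sin(np.radians(1.0)), 0.0, np.cos(np.radians(1.0))]])
print(angular_deviation_deg(gaze, target))  # ~[1.0]
```

Comparing the mean of these deviations in a slippage condition against the mean in a baseline condition yields the kind of per-setup increase the abstract reports.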
Smooth pursuit eye movements provide meaningful insights into a subject's behavior and health and may, in particular situations, disturb the performance of typical fixation/saccade classification algorithms. Thus, an automatic and efficient algorithm to identify these eye movements is paramount for eye-tracking research involving dynamic stimuli. In this paper, we propose the Bayesian Decision Theory Identification (I-BDT) algorithm, a novel algorithm for ternary classification of eye movements that is able to reliably separate fixations, saccades, and smooth pursuits in an online fashion, even for low-resolution eye trackers. The proposed algorithm is evaluated on four datasets with distinct mixtures of eye movements, including fixations, saccades, as well as straight and circular smooth pursuits; data was collected at a sample rate of 30 Hz from six subjects, totaling 24 evaluation datasets. The algorithm exhibits high and consistent performance across all datasets and movements relative to a manual annotation by a domain expert (recall: µ = 91.42%, σ = 9.52%; precision: µ = 95.60%, σ = 5.29%; specificity: µ = 95.41%, σ = 7.02%) and displays a significant improvement when compared to I-VDT, a state-of-the-art algorithm (recall: µ = 87.67%, σ = 14.73%; precision: µ = 89.57%, σ = 8.05%; specificity: µ = 92.10%, σ = 11.21%). For the algorithm implementation and annotated datasets, please contact the first author.
The identification of eye movements based on the raw eye-position signal is critical for research and applications involving eye trackers, such as cognitive science and medical research, task assistance (e.g., driving), marketing applications, and human–computer interfaces (HCI).
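The abstract does not spell out the I-BDT formulation, but the general idea of a Bayesian decision rule over eye-movement classes can be illustrated with a minimal velocity-based sketch. The likelihood models, parameter values, and flat priors below are assumptions chosen purely for illustration; this is not the published I-BDT algorithm.

```python
from scipy.stats import norm

# Illustrative ternary Bayesian classification from instantaneous eye velocity.
# NOTE: velocity distributions and flat priors are assumed for this sketch only.
CLASSES = ("fixation", "pursuit", "saccade")
LIKELIHOODS = {
    "fixation": norm(loc=2.0, scale=2.0),     # deg/s, assumed
    "pursuit":  norm(loc=15.0, scale=10.0),   # deg/s, assumed
    "saccade":  norm(loc=150.0, scale=80.0),  # deg/s, assumed
}
PRIORS = {c: 1.0 / len(CLASSES) for c in CLASSES}  # flat priors, assumed

def classify(velocity_deg_s):
    """Return the class with the highest (unnormalized) posterior."""
    posteriors = {c: LIKELIHOODS[c].pdf(velocity_deg_s) * PRIORS[c] for c in CLASSES}
    return max(posteriors, key=posteriors.get)

velocities = [1.5, 20.0, 220.0]  # deg/s
print([classify(v) for v in velocities])  # ['fixation', 'pursuit', 'saccade']
```

In practice, an online classifier of this kind would update the priors and likelihood parameters from the recent signal rather than fixing them in advance, which is what allows it to cope with low sampling rates and noisy, low-resolution trackers.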
There is increasing awareness that the perception of art is affected by the way it is presented. In 2018, the Austrian Gallery Belvedere redisplayed its permanent collection. Our multi-disciplinary team seized this opportunity to investigate how visitors viewed specific artworks both before and after the museum's rearrangement. In contrast to previous mobile eye tracking (MET) studies in museums, this study benefits from the comparison of two realistic display conditions (without any research interference), an unconstrained study design (working with regular museum visitors), and a large data sample (comprising 259 participants). We employed a mixed-method approach that combined mobile eye tracking, subjective mapping (a drawing task in conjunction with an open interview), and a questionnaire in order to relate gaze patterns to processes of meaning-making. Our results show that the new display made a difference in that it 1) generally increased the viewing times of the artworks; 2) clearly extended the reading times of labels; and 3) deepened visitors' engagement with the artworks in their exhibition reflections. In contrast, interest in specific artworks and art-form preferences proved to be robust and independent of presentation modes.