2021
DOI: 10.16910/jemr.14.1.5

Object-gaze distance: Quantifying near-peripheral gaze behavior in real-world applications

Abstract: Eye tracking (ET) has been shown to reveal the wearer’s cognitive processes through the measurement of the central point of foveal vision. However, traditional ET evaluation methods have not been able to account for the wearer’s use of the peripheral field of vision. We propose an algorithmic enhancement to a state-of-the-art ET analysis method, the Object-Gaze Distance (OGD), which additionally allows the quantification of near-peripheral gaze behavior in complex real-world environments. The algorithm uses mac…
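The abstract truncates before the method details, but the core quantity is readable from the title: a continuous per-object distance between the gaze point and each object, rather than a binary gaze-on-object decision. Below is a minimal Python sketch of that idea, assuming the OGD for one video frame is the pixel distance from the gaze point to the nearest pixel of an object's segmentation mask; the function name, mask format, and units are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from scipy import ndimage

def object_gaze_distance(mask: np.ndarray, gaze_xy) -> float:
    """Pixel distance from a gaze point to the nearest pixel of a binary
    object mask; 0.0 means the gaze lies on the object (a classic 'hit').

    Illustrative sketch only: the published OGD algorithm is described in
    the paper itself and is not reproduced here.
    """
    # Euclidean distance transform of the background: every background
    # pixel gets its distance to the nearest object pixel; object
    # pixels get 0.
    dist = ndimage.distance_transform_edt(~mask.astype(bool))
    x, y = gaze_xy
    return float(dist[int(round(y)), int(round(x))])

# Example: a 20x20 object, gaze 11 px to its right.
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
print(object_gaze_distance(mask, (70.0, 50.0)))  # 11.0
```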

Cited by 4 publications (9 citation statements)
References 35 publications
“…Higher noise distortions affect the ability of the HMM to accurately detect the correct action and lead to overall lower performance, since it becomes more likely that an OOI Hit is falsely detected as either a "no Hit" or as an OOI Hit on a different OOI within proximity. Conversely, as the OGD feature does not rely on binary gaze-on-target logic, it shows markedly higher robustness to noise, confirming suggestions by Wang et al. (2021) that the quantification of peripheral gaze information can be used effectively for EAR.…”
Section: Discussion (supporting)
Confidence: 64%
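The robustness claim is easy to see in a toy simulation: under binary gaze-on-target logic, moderate gaze noise flips individual samples between "hit" and "no Hit", whereas the continuous distance degrades gradually. The snippet below is an illustrative sketch reusing the hypothetical object_gaze_distance() helper defined above; it is not the cited evaluation.

```python
import numpy as np

rng = np.random.default_rng(0)

# The wearer fixates 3 px to the right of a 20x20 object; gaze samples
# are perturbed with Gaussian noise of increasing magnitude.
mask = np.zeros((100, 100), dtype=bool)
mask[40:60, 40:60] = True
true_gaze = np.array([62.0, 50.0])

for sigma in (0.5, 2.0, 5.0):
    noisy = np.clip(true_gaze + rng.normal(0.0, sigma, size=(1000, 2)), 0, 99)
    xi = np.round(noisy[:, 0]).astype(int)
    yi = np.round(noisy[:, 1]).astype(int)
    hits = mask[yi, xi]  # binary gaze-on-target logic
    ogd = np.array([object_gaze_distance(mask, xy) for xy in noisy])
    print(f"sigma={sigma}: hit rate {hits.mean():.2f}, mean OGD {ogd.mean():.1f} px")

# The binary hit rate drifts from ~0 to roughly 0.3 as noise grows
# (spurious hits on the object), while the mean OGD shifts only
# gradually, from about 3 px to about 4 px.
```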
“…extracted and transformed for HMM input. In this paper, we investigated the action recognition performance using two different gaze features, which were extracted as follows: the trained model and the gaze coordinates of each recording are processed using the cGOM (see Wolf et al. 2018) and the OGD (see Wang et al. 2021) algorithms. The cGOM algorithm matches the gaze point with the detected OOIs to create a list of OOI Hits.…”
Section: Gaze-based Action Recognition (mentioning)
Confidence: 99%
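For context, the OOI-Hit list that cGOM produces can be sketched as a per-sample lookup of the gaze point in each frame's detected object masks. Everything below (the mask format, the object names, the ooi_hits() helper) is an illustrative assumption, not the cGOM implementation of Wolf et al. (2018).

```python
import numpy as np

def ooi_hits(gaze_samples, frame_masks):
    """Label each gaze sample with the OOI whose mask it falls on,
    or 'no Hit' for background, yielding the symbol sequence a
    downstream HMM can consume."""
    labels = []
    for (x, y), masks in zip(gaze_samples, frame_masks):
        xi, yi = int(round(x)), int(round(y))
        label = "no Hit"
        for name, mask in masks.items():  # e.g. {"screwdriver": bool array}
            h, w = mask.shape
            if 0 <= yi < h and 0 <= xi < w and mask[yi, xi]:
                label = name
                break
        labels.append(label)
    return labels

# Two frames, each with one hypothetical OOI.
m = np.zeros((100, 100), dtype=bool); m[40:60, 40:60] = True
print(ooi_hits([(50, 50), (5, 5)], [{"screwdriver": m}, {"screwdriver": m}]))
# ['screwdriver', 'no Hit']
```

The OGD feature would replace this binary lookup with the continuous per-object distance, which is what the citing authors credit for the noise robustness discussed above.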
“…Furthermore, annotators are commonly limited to gaze point-based object mapping, which reduces the human gaze from concentric fields with varying visual acuity to the center point of the fovea [35,42]. Consequently, conventional fixation- and saccade-based measures may only be mapped to a single object at a time [36,39]. In recent years, advances in image processing and machine learning (ML) have produced automated mapping algorithms, such as cGOM [41] and Deep-SAGA [12].…”
Section: Introduction (mentioning)
Confidence: 99%