2018
DOI: 10.16910/jemr.11.6.6
Automating areas of interest analysis in mobile eye tracking experiments based on machine learning

Abstract: For an in-depth, AOI-based analysis of mobile eye tracking data, a preceding gaze assignment step is inevitable. Current solutions such as manual gaze mapping or marker-based approaches are tedious and not suitable for applications manipulating tangible objects. This makes mobile eye tracking studies with several hours of recording difficult to analyse quantitatively. We introduce a new machine learning-based algorithm, the computational Gaze-Object Mapping (cGOM), that automatically maps gaze data onto respec…

Cited by 33 publications (28 citation statements)
References 12 publications
“…Therefore, increasing the AOI size around objects of interest can result in areas that exceed the size of the actual objects by several orders of magnitude. In our study, we assume that the most accurate depiction of an operator’s gaze behavior can only be achieved using a close contour AOI production method, such as Mask R-CNN and cGOM ( 35 ), that continuously adjusts AOI sizes to the actual object size within the dynamic scene. One advantage of our presented method is that it can be equally used for the analysis of AOI Hits, if traditional metrics are of interest to the researchers.…”
Section: Discussion
confidence: 99%
“…The semantic mapping of fixations for the AOI Hit method was implemented using the automated AOI mapping algorithm cGOM ( 35 ). The algorithm detects and segments pre-trained objects, here the screw and the screwdriver, using the Mask R-CNN network ( 14 ) and, subsequently, determines whether the gaze of each fixation lies within the constraints of the 2D pixel (px) coordinate matrix of the segmented object masks.…”
Section: Methods
confidence: 99%
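The per-fixation check described in the statement above — determining whether a gaze point lies within the 2D pixel coordinate matrix of a segmented object mask — can be sketched as follows. This is a minimal illustration under stated assumptions, not the actual cGOM implementation; the function name `gaze_hits_mask` and the toy mask are placeholders, and the mask stands in for one binary instance mask as produced by a Mask R-CNN-style segmenter.

```python
import numpy as np

def gaze_hits_mask(mask: np.ndarray, gaze_px: tuple) -> bool:
    """Return True if a gaze point falls inside a segmented object mask.

    mask    : boolean array of shape (H, W), True where the object was
              segmented (e.g. one instance mask from Mask R-CNN)
    gaze_px : (x, y) gaze position in scene-camera pixel coordinates
    """
    x, y = gaze_px
    h, w = mask.shape
    if not (0 <= x < w and 0 <= y < h):
        return False          # gaze landed outside the scene frame
    return bool(mask[y, x])   # NumPy indexing: row = y, column = x

# Toy example: a 6x6 scene frame with a 2x3 "object" region
mask = np.zeros((6, 6), dtype=bool)
mask[2:4, 1:4] = True
print(gaze_hits_mask(mask, (2, 3)))  # True: gaze lands on the object
print(gaze_hits_mask(mask, (5, 5)))  # False: background
```

In a full pipeline this test would run once per fixation against every detected object mask in the corresponding scene-video frame, turning raw gaze samples into AOI hits.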
“…Previous methods tackled the automatic analysis of head-mounted eye tracking data in uninstrumented environments [7–16]. Drawbacks of these methods include, e.g., missing support for real-time applications and the restriction to a limited number of classes (≤12).…”
Section: Introduction
confidence: 99%
“…Although some researchers have suggested guidelines in defining AOIs [7,16–18], there is no gold standard for defining AOIs. In addition, although methods that automatically generate AOIs have been put forward [10,19–23], the dominant approach in eye-tracking studies is to manually define AOIs. Therefore, researchers often make subjective decisions in defining AOIs, causing locations, shapes, and sizes of AOIs to vary even between studies that utilize similar stimuli [10,24].…”
Section: Introduction
confidence: 99%