2014 5th European Workshop on Visual Information Processing (EUVIP)
DOI: 10.1109/euvip.2014.7018371

Towards automated comparison of eye-tracking recordings in dynamic scenes

Cited by 7 publications (4 citation statements). References 15 publications.

“…There have been approaches to semi-automatically determine interesting objects and to track them Kübler et al (2014a), however they are not generically applicable. We already mentioned approaches to cluster fixation locations in order to determine ROIs from the data (Heminghous and Duchowski, 2006).…”
Section: String Conversion
confidence: 99%
“…We want to extend the general and AOI based statistics calculations, add new calculation or visualization algorithms and make the existing ones more interactive and transparent. A special focus will be given the analysis and processing of saccadic eye movements as well as to the automated annotation of AOIs for dynamic scenarios (Kübler et al, 2014) and non-elliptical AOIs.…”
Section: Discussion
confidence: 99%
“…We plan on including further automated scanpath comparison metrics, such as MultiMatch (Dewhurst et al, 2012) or SubsMatch (Kübler et al, 2014).…”
Section: Discussion
confidence: 99%
“…Thumbnails of size 100 x 100 pixels around each eye gaze point are extracted from multiple video recordings and clustered accordingly. Automatic annotation using scale-invariant feature transform were also used to detect distinct features in videos [64]. Numerous other methods of labelling eye gaze samples based on the underlying context are discussed in [70,71,92,98].…”
Section: Data Abstraction - Symbolic Approximation
confidence: 99%
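
The thumbnail-and-clustering idea summarized in the last citation statement can be illustrated with a short sketch. This is not the cited authors' implementation: the gaze-sample format, the 16x16 feature downsampling, and the use of k-means are assumptions made here for demonstration only.

# Illustrative sketch: extract 100x100 thumbnails around gaze points from a
# video and cluster them so each cluster id can serve as a symbol for a
# symbolic approximation of the gaze sequence. Assumptions: gaze samples are
# (frame_index, x, y) pixel tuples; clustering method and feature size are
# arbitrary choices, not taken from the cited work.
import cv2
import numpy as np
from sklearn.cluster import KMeans

PATCH = 100  # thumbnail edge length in pixels, as described in the citing paper

def extract_thumbnails(video_path, gaze_samples):
    """Return a list of PATCH x PATCH image patches centered on gaze points."""
    cap = cv2.VideoCapture(video_path)
    half = PATCH // 2
    patches = []
    for frame_idx, x, y in gaze_samples:
        cap.set(cv2.CAP_PROP_POS_FRAMES, frame_idx)
        ok, frame = cap.read()
        if not ok:
            continue
        h, w = frame.shape[:2]
        # Clamp the patch window so it stays inside the frame.
        x0, y0 = max(0, int(x) - half), max(0, int(y) - half)
        x1, y1 = min(w, x0 + PATCH), min(h, y0 + PATCH)
        if x0 >= x1 or y0 >= y1:
            continue  # gaze point outside the frame
        patches.append(cv2.resize(frame[y0:y1, x0:x1], (PATCH, PATCH)))
    cap.release()
    return patches

def cluster_thumbnails(patches, n_clusters=8):
    """Cluster thumbnails by appearance using downsampled pixels as features."""
    features = np.array([cv2.resize(p, (16, 16)).ravel() for p in patches],
                        dtype=np.float32)
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)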