2009
DOI: 10.1163/156856809789476065

Efficient visual coding and the predictability of eye movements on natural movies

Abstract: We deal with the analysis of eye movements made on natural movies in free-viewing conditions. Saccades are detected and used to label two classes of movie patches as attended and non-attended. Machine learning techniques are then used to determine how well the two classes can be separated, i.e. how predictable saccade targets are. Although very simple saliency measures are used and then averaged to obtain just one average value per scale, the two classes can be separated with an ROC score of around 0.7, which …
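As an illustration of the evaluation pipeline the abstract describes, the sketch below (hypothetical, not the authors' code) summarizes each attended or non-attended patch by a few averaged saliency values, trains a simple classifier, and reports separability as an ROC/AUC score; all feature values are synthetic placeholders.

```python
# Hypothetical sketch of the attended vs. non-attended patch classification
# described in the abstract; data and features are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_scales = 5  # one averaged saliency value per spatiotemporal scale

# Patches around saccade targets (attended) vs. control patches (non-attended).
attended = rng.normal(loc=0.8, scale=1.0, size=(500, n_scales))
non_attended = rng.normal(loc=0.0, scale=1.0, size=(500, n_scales))

X = np.vstack([attended, non_attended])
y = np.concatenate([np.ones(500), np.zeros(500)])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)

clf = LogisticRegression().fit(X_train, y_train)
scores = clf.predict_proba(X_test)[:, 1]
print(f"ROC/AUC: {roc_auc_score(y_test, scores):.2f}")
```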

Cited by 35 publications (41 citation statements)
References 26 publications
“…Thus, invariant K with an ROC score of 0.68 is best, followed by S (AUC of 0.66), whereas the worst performing is H with an AUC of 0.64. Similar results that showed this ranking were published in [10] on a substantially different problem: there we were predicting gaze behaviour of new viewers on videos that have already been "seen" (i.e. learned on) by the classifier, as opposed to predicting eye movements on new videos.…”
Section: Quantitative Analysis (supporting)
confidence: 67%
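The feature ranking quoted above (K, then S, then H) corresponds to using each invariant on its own as a saliency score for fixated versus control patches and comparing the resulting ROC/AUC values; a generic sketch with synthetic numbers (not the cited authors' code) is:

```python
# Generic sketch: rank single saliency features by ROC/AUC.
# Feature names K, S, H follow the quoted text; values are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
labels = np.concatenate([np.ones(1000), np.zeros(1000)])  # fixated vs. control patches

features = {
    "K": np.concatenate([rng.normal(0.65, 1, 1000), rng.normal(0, 1, 1000)]),
    "S": np.concatenate([rng.normal(0.58, 1, 1000), rng.normal(0, 1, 1000)]),
    "H": np.concatenate([rng.normal(0.50, 1, 1000), rng.normal(0, 1, 1000)]),
}

# Print features from best to worst separability.
for name, score in sorted(features.items(),
                          key=lambda kv: roc_auc_score(labels, kv[1]),
                          reverse=True):
    print(name, round(roc_auc_score(labels, score), 2))
```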
“…In this paper, we propose a rather simplistic model of bottom-up saliency for dynamic scenes with the aim to keep the number of assumptions (and, implicitly, the number of free parameters) to a minimum. This model is also related to the neurobiological principle of efficient coding [10]. To test our model, we evaluate how well it predicts human eye movements on naturalistic videos both in absolute terms and in comparison with more complex, state-of-the-art saliency models.…”
Section: Introduction (mentioning)
confidence: 99%
“…fixated movie patches with a set of randomly collected movie patches. This analysis was done analogously to the analysis carried out for scene-fixation studies, in which a gaze prediction for real-world scenes is sought (see, e.g., Carmi & Itti, 2006; Parkhurst & Niebur, 2003; Reinagel & Zador, 1999; Tatler et al., 2006; Vig, Dorr, & Barth, 2009). They typically used (static) images of real-world scenes while observers performed free viewing.…”
Section: Justus-Liebig-Universität Giessen, Germany (mentioning)
confidence: 99%
“…We were able to show that on novel test stimuli, subjects who had received such information performed better than subjects who had not seen the expert's eye movements during training, and that the gaze visualization technique presented here facilitated learning better than a simple gaze display (yellow gaze marker). In principle, any visualization technique that reduces the relative visibility of those regions not attended by the expert might have a similar effect; our choice for this particular technique was motivated by our work on eye movement prediction [Dorr et al. 2008; Vig et al. 2009], which shows that spectral energy is a good predictor for eye movements. Ultimately, we intend to use similar techniques in a gaze-contingent fashion in order to guide the gaze of an observer.…”
Section: Results (mentioning)
confidence: 99%