2013 IEEE Conference on Computer Vision and Pattern Recognition
DOI: 10.1109/cvpr.2013.101
Studying Relationships between Human Gaze, Description, and Computer Vision

Abstract: We posit that user behavior during natural viewing of images contains an abundance of information about the content of images, as well as information related to user intent and user-defined content importance. In this paper, we conduct experiments to better understand the relationship between images, the eye movements people make while viewing images, and how people construct natural language to describe images. We explore these relationships in the context of two commonly used computer vision datasets. We then…

Cited by 71 publications (65 citation statements) · References 30 publications
“…Yun et al [52] collect eye movement data for a 1,000-image subset of Pascal VOC 2008; three observers performed a three-second free-viewing task. This data is then used to re-rank the output of an object class detector [15] on test images.…”
Section: Related Work
confidence: 99%
“…More recently, several algorithmic approaches have been developed that leverage advances in computer vision techniques to support the annotation process (De Beugher, 2012; Toyama, 2012; Yun, 2013). The vision algorithms, such as SIFT and SURF, rely on object dictionaries that can be used to match images in the eye-tracking data (Bay, 2006; Lowe, 2004).…”
Section: Algorithmic
confidence: 99%
“…For example, egocentric videos have been used to reconstruct 3D social gaze to analyze human-human interaction [15], and eye gaze information has been incorporated into first-person activity recognition [6]. The use of gaze information is also becoming more and more important in computer vision tasks such as object and scene recognition [11,20].…”
Section: Introduction
confidence: 99%