2003
DOI: 10.1007/3-540-45113-7_27
Text or Pictures? An Eyetracking Study of How People View Digital Video Surrogates

Abstract: One important user-oriented facet of digital video retrieval research involves how to abstract and display digital video surrogates. This study reports on an investigation of digital video results pages that use textual and visual surrogates. Twelve subjects selected relevant video records from results lists containing titles, descriptions, and three keyframes for ten different search tasks. All subjects were eye-tracked to determine where, when, and how long they looked at text and image surrogates. Participa…

Cited by 52 publications (58 citation statements)
References 15 publications
“…In our 'binned' analysis of Study 1, we found that for surrogates where the text alone did not have perfect judgment accuracy (the 'low-bin'), adding good images increased accuracy by 24%. This result is consistent with results from Loumakis et al [16] that 'high-scent' images can help improve surrogates with 'low-scent' text, and observations from Hughes et al [11] that images are sometimes used to help confirm or refute the textual parts of a surrogate. In a study of social annotations embedded into SERP results, Muralidharan et al [19] noted that when summary snippets were shorter, the social annotations were noticed more.…”
Section: RQ1 - Do Image-Augmented Surrogates Help Relevance Decisions? (supporting)
confidence: 91%
“…Al Maqbali et al [18] found that users looked at the textual components of an image-augmented surrogate more than the image components, but that salient images did draw users' attention. Hughes et al [11] also found that searchers had more eye-gaze fixations on the textual elements of an image-augmented surrogate than on the image, and reported gaze patterns suggesting that participants looked at the text first and used the image to help confirm or refute the text. Muralidharan et al [19] looked at social annotations integrated into web search results and reported that participants often did not notice small images and annotations, but instead focused on URLs and titles in the results.…”
Section: Related Work (mentioning)
confidence: 99%
“…A prior eye-tracking empirical study looked at digital video surrogates composed of text and three thumbnail images to represent each document, and found that participants looked at and fixated on text far more than pictures [10]. Participants used the text as an anchor from which to make judgments about the list of results.…”
Section: Discussion (mentioning)
confidence: 99%
“…Eye-tracking studies have found that when surrogates include both thumbnails and text, users focus on both elements [14, 20].…”
Section: Vertical Query-sense (mentioning)
confidence: 99%