2016
DOI: 10.1002/asi.23656

Time‐based tags for fiction movies: comparing experts to novices using a video labeling game

Abstract: The cultural heritage sector has embraced social tagging as a way both to increase access to online content and to engage users with their digital collections. In this article, we build on two current lines of research. (a) We use Waisda?, an existing labeling game, to add time-based annotations to content. (b) In this context, we investigate the role of experts in human-based computation (nichesourcing). We report on a small-scale experiment in which we applied Waisda? to content from film archives. We study t…

Cited by 3 publications (8 citation statements). References 61 publications (80 reference statements).
“…Discussion. Our findings are consistent with previous studies using the same classification method [12,15,19,23]. To begin, the majority of crowdsourced descriptions are at the conceptual level.…”
Section: Characteristics of Annotations (supporting)
Confidence: 92%
“…Our findings are also consistent with Estrada et al. [12], in that the third most frequently used annotation is abstract/What (e.g., "neatness"), as people use abstract terms to describe events or actions in a scene, such as emotions and judgment. Abstract annotations are subjective in nature and are used to ascertain what other users think about a video [23].…”
Section: Characteristics of Annotations (supporting)
Confidence: 90%
“…Since the users were not divided based on domain expertise, the tags should be relatively similar in nature, as the data confirm. Previous studies recommended specific tag types such as cinematography, emotions, explanations, and facts in their instructions to users (Estrada et al., 2016). This study did not include recommendations; instead, the instructions stated, "A tag, if you are unfamiliar with tagging, should provide some description of the video that would help yourself and/or others find it through search or browsing online."…”
Section: Discussion (mentioning)
Confidence: 99%