2014
DOI: 10.1002/asi.23292

The impact of image descriptions on user tagging behavior: A study of the nature and functionality of crowdsourced tags

Abstract: Crowdsourcing has emerged as a way to harvest social wisdom from thousands of volunteers performing series of tasks online. However, little research has been devoted to exploring the impact of factors such as the content of a resource or the design of the crowdsourcing interface on user tagging behavior. While images' titles and descriptions are frequently available in image digital libraries, it is not clear whether they should be displayed to crowdworkers engaged in tagging. This paper focuses on offering an insig…

Cited by 22 publications (15 citation statements: 1 supporting, 14 mentioning, 0 contrasting)
References 36 publications

“…If participants used social information as a mere reference to the performance of others with respect to information quantity, there would be no increase in the diversity of information generated, as they would ignore positional information of the digital footprints. A similar finding is reported in a crowdsourcing study where participants increased both diversity and accuracy of the tags created when presented with a description of the image (Lin, Trattner, Brusilovsky, & He, 2015). Therefore, in the context of citizen science, simply displaying the tag locations created by previous participants provides an effective means for enhancing engagement in physical activity.…”
Section: Discussion (supporting)
confidence: 83%
“…When collaborating as a group in an online project, diversity in experience has been shown to increase productivity and member retention (Chen, Ren, & Riedl; Arazy et al.). In image tagging for digital libraries and other collections, providing a description along with the image can affect the diversity and specificity of tags (Lin, Trattner, Brusilovsky, & He, 2015). In social media participation, lack of attention has been correlated with a decrease in level of contribution (Wu, Wilkinson, & Huberman), whereas in the context of content creation, productivity has been linked to the level of attention received (Huberman, Romero, & Wu).…”
Section: Introduction (mentioning)
confidence: 99%
“…Amazon Mechanical Turk is a crowdsourcing marketplace for online tasks widely used for experimental research in various fields, including information science (Lin, Trattner, Brusilovsky, & He, 2015) and human-computer interaction (Komarov, Reinecke, & Gajos, 2013).…”
Section: Participants and Reward Mechanism (mentioning)
confidence: 99%
“…We recruited participants via Amazon Mechanical Turk and limited participation to U.S. users with a record of at least 100 tasks at an approval rate above 99%. Amazon Mechanical Turk is a crowdsourcing marketplace for online tasks widely used for experimental research in various fields, including information science (Lin, Trattner, Brusilovsky, & He, 2015) and human-computer interaction (Komarov, Reinecke, & Gajos, 2013). We relied on U.S. participants to increase the validity of the study and make it reflective of what participants may encounter throughout their saving careers.…”
Section: Participants and Reward Mechanism (mentioning)
confidence: 99%