Companion Proceedings of the 25th International Conference on Intelligent User Interfaces 2020
DOI: 10.1145/3379336.3381499
The Influence of Input Data Complexity on Crowdsourcing Quality


Cited by 4 publications (4 citation statements, all classified as "mentioning"; two published in 2021, two in 2023). References 7 publications.
“…It allows annotators to progress through the annotation study much like a game, by acquiring specific skills that are required to advance to the next level (Sweetser and Wyeth 2005). Although several works have shown the efficiency of progression in games with a purpose (Madge et al. 2019; Kicikoglu et al. 2020) and even in crowdsourcing (Tauchmann, Daxenberger, and Mieskes 2020), this does not necessarily benefit individual workers, as less-skilled workers are either filtered out or asked to "train" on additional instances. Moreover, implementing progression poses a substantial burden on researchers due to the inclusion of game-like elements (e.g., skills and levels) or, at minimum, the separation of the data according to difficulty and, furthermore, a repeated evaluation and reassignment of workers.…”
Section: Related Work (citation type: mentioning)
confidence: 99%
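The progression scheme this statement describes (data partitioned by difficulty, workers repeatedly evaluated and reassigned) can be made concrete with a short sketch. This is a minimal illustration, not the cited authors' method; the difficulty scores, accuracy thresholds, and function names below are all hypothetical.

```python
from statistics import mean

def assign_level(accuracy: float, thresholds=(0.6, 0.75, 0.9)) -> int:
    """Map a worker's running accuracy to a difficulty level (0 = easiest).

    The threshold values are illustrative, not taken from the cited work.
    """
    return sum(accuracy >= t for t in thresholds)

def partition_by_difficulty(instances, difficulty, n_levels=4):
    """Split instances into n_levels buckets, easiest bucket first."""
    ranked = sorted(instances, key=difficulty)
    size = max(1, len(ranked) // n_levels)
    return [ranked[i * size:(i + 1) * size] for i in range(n_levels)]

# Repeated evaluation and reassignment: after each batch, the worker's
# accuracy on gold-standard checks decides which bucket comes next.
buckets = partition_by_difficulty(range(100), difficulty=lambda x: x)
gold_checks = [1, 1, 0, 1, 1]            # 1 = correct on a check item
level = assign_level(mean(gold_checks))  # 0.8 -> level 2 of 0..3
next_batch = buckets[min(level, len(buckets) - 1)][:10]
```

The sketch also shows where the burden mentioned in the quote comes from: the study designer must maintain the difficulty partition and rerun the evaluate-and-reassign loop after every batch.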
“…Similar to educational approaches, we rely on estimating the "difficulty" of an instance to generate our curriculum (Taylor 1953; Beinborn, Zesch, and Gurevych 2014; Lee, Schwan, and Meyer 2019). In this work, we investigate an easy-instances-first strategy, shown to be reasonable in previous work (Tauchmann, Daxenberger, and Mieskes 2020), thereby sorting instances in ascending order according to their difficulty. Our C* is thus approximated by the ordered set S = {x_1, .…”
Section: Approach (citation type: mentioning)
confidence: 99%
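The curriculum construction quoted here reduces to a single sort: estimate a difficulty score per instance and order the set ascending, so S = {x_1, x_2, …} with difficulty(x_1) ≤ difficulty(x_2) ≤ …. A minimal sketch under that reading; the word-count proxy and function names are hypothetical, standing in for the difficulty estimators cited (Taylor 1953; Beinborn, Zesch, and Gurevych 2014).

```python
def build_curriculum(instances, difficulty):
    """Easy-instances-first: sort instances ascending by estimated difficulty.

    `difficulty` can be any instance -> float estimator; the word-count
    proxy used below is a placeholder, not the estimator from the cited work.
    """
    return sorted(instances, key=difficulty)

# Hypothetical proxy: longer sentences are treated as harder to annotate.
sentences = [
    "Cats sleep.",
    "The committee postponed the vote pending further review.",
    "Annotators label spans.",
]
curriculum = build_curriculum(sentences, difficulty=lambda s: len(s.split()))
# curriculum == ["Cats sleep.", "Annotators label spans.",
#                "The committee postponed the vote pending further review."]
```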
“…Moreover, as annotators are compensated not by the time they spend but rather by the number of annotated instances, they are compelled to work fast to maximize their monetary gain, which can negatively affect annotation quality (Drutsa et al., 2020) or even result in spamming (Hovy et al., 2013). It can also be difficult to find crowdworkers for the task at hand, for instance due to small worker pools for languages other than English (Pavlick et al., 2014; Frommherz and Zarcone, 2021) or because the task requires special qualifications (Tauchmann et al., 2020). Finally, the deployment of crowdsourcing remains ethically questionable due to undervalued payment (Fort et al., 2011; Cohen et al., 2016), privacy breaches, or even psychological harm to crowdworkers (Shmueli et al., 2021).…”
Section: Introduction (citation type: mentioning)
confidence: 99%