Proceedings of the 23rd ACM International Conference on Information and Knowledge Management (CIKM 2014)
DOI: 10.1145/2661829.2661946

Competitive Game Designs for Improving the Cost Effectiveness of Crowdsourcing

Abstract: Crowd-based online work is leveraged in a variety of applications, such as semantic annotation of images, translation of texts in foreign languages, and labeling of training data for machine learning models. However, annotating large amounts of data through crowdsourcing can be slow and costly. In order to improve both the cost and time efficiency of crowdsourcing, we examine alternative reward mechanisms compared to the "Pay-per-HIT" scheme commonly used in platforms such as Amazon Mechanical Turk. To this end, we …
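To make the contrast drawn in the abstract concrete, here is a minimal Python sketch of the two payout styles it compares: a flat per-task ("Pay-per-HIT") reward versus a rank-based contest prize. The function names, the count-based ranking, and the two-prize pool are illustrative assumptions, not the mechanism evaluated in the paper.

    # Hedged sketch: flat per-task pay vs. a contest-style prize pool.
    # Names and parameters are assumptions, not from the paper.

    def pay_per_hit(completed_tasks, reward_per_task):
        """Baseline 'Pay-per-HIT' scheme: every completed task earns a fixed reward."""
        return {w: n * reward_per_task for w, n in completed_tasks.items()}

    def contest_payout(completed_tasks, prizes):
        """Competitive scheme: rank workers by task count and award a fixed
        prize pool to the top finishers; everyone else earns nothing."""
        ranked = sorted(completed_tasks, key=completed_tasks.get, reverse=True)
        payout = {w: 0.0 for w in completed_tasks}
        for worker, prize in zip(ranked, prizes):
            payout[worker] = prize
        return payout

    workers = {"alice": 42, "bob": 35, "carol": 17}
    print(pay_per_hit(workers, 0.05))           # cost grows with every task completed
    print(contest_payout(workers, [3.0, 1.5]))  # {'alice': 3.0, 'bob': 1.5, 'carol': 0.0}

In this sketch the contest's total payout is capped by the prize pool no matter how many tasks the crowd completes, which illustrates why contest-style rewards can change the cost profile of a crowdsourcing campaign.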

Cited by 28 publications (19 citation statements)
References 21 publications
“…• Improving Effectiveness. Several optimization techniques have been introduced in prior work to increase the throughput of crowdworkers, maximize the cost-benefit ratio of deploying crowdsourced microtasks [45,46], and improve the overall effectiveness of the microtask crowdsourcing model. Gamification has been shown to improve worker retention and the throughput of tasks [12].…”
Section: Existing Platforms Demand Workarounds - Current Solutions (mentioning)
confidence: 99%
“…A second group of empirical work focused on bespoke experimental setups for crowdsourcing. For example, Reference [60] looked at the effect of varying monetary schemes and information policies in individual contests, while Reference [61] explored the same problem alongside team-formation strategies. Both papers bear similarities to our scenario, in which contestants strategically decide whether to continue taking on tasks or to leave Wordsmith.…”
Section: Contest-based (mentioning)
confidence: 99%
“…A worker would always seek to maximise their expected utility given the number and value of prizes, the number of contestants, and the value of their efforts. In our experiments, workers could view their ranked position in real time with respect to their closest contenders using a k-neighbours leaderboard, as presented in the medium information policy contest strategy by Reference [60]. A worker far outside the reward spread might therefore decide to exit the contest to avoid further loss of utility.…”
Section: Task Elements (mentioning)
confidence: 99%
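The k-neighbours leaderboard mentioned in this excerpt can be sketched in a few lines: show a worker only the contenders ranked just above and below them, rather than the full standings (the "medium information policy"). This is a minimal illustration assuming a simple score dictionary; the function name and window semantics are assumptions, not the cited paper's implementation.

    # Hedged sketch of a k-neighbours leaderboard view (assumed names, not from [60]).
    def k_neighbours_view(scores, worker, k=2):
        """Return (rank, worker, score) triples for `worker` and the k
        contenders ranked immediately above and below them."""
        ranked = sorted(scores.items(), key=lambda item: item[1], reverse=True)
        idx = next(i for i, (name, _) in enumerate(ranked) if name == worker)
        lo, hi = max(0, idx - k), min(len(ranked), idx + k + 1)
        return [(i + 1, name, score) for i, (name, score) in enumerate(ranked)][lo:hi]

    # Example: "dave" sees only his local rivals, not the whole leaderboard.
    scores = {"alice": 42, "bob": 35, "carol": 29, "dave": 17, "erin": 9, "frank": 4}
    print(k_neighbours_view(scores, "dave", k=1))
    # [(3, 'carol', 29), (4, 'dave', 17), (5, 'erin', 9)]

In the excerpt's terms, this local window lets a worker judge their position relative to their closest contenders without revealing the full standings.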
“…To make crowdsourcing solutions scale to large amounts of data, it is key to design solutions that retain crowd workers longer on the crowdsourcing platform [13] and prioritize the execution of work across the crowd.…”
Section: Crowdsourcing Efficiency (mentioning)
confidence: 99%