Proceedings of the 2009 Workshop on The People's Web Meets NLP: Collaboratively Constructed Semantic Resources (People's Web '09), 2009
DOI: 10.3115/1699765.1699773

Acquiring high quality non-expert knowledge from on-demand workforce

Abstract: Being expensive and time consuming, human knowledge acquisition has consistently been a major bottleneck for solving real problems. In this paper, we present a practical framework for acquiring high quality non-expert knowledge from on-demand workforce using Amazon Mechanical Turk (MTurk). We show how to apply this framework to collect large-scale human knowledge on AOL query classification in a fast and efficient fashion. Based on extensive experiments and analysis, we demonstrate how to detect low-quality la…



Cited by 24 publications (20 citation statements)
References 17 publications (12 reference statements)
“…All crowdsourcing systems attract spammers, which can be a very serious issue [22,50,38]. However, in a game context we can expect spamming to be much less of an issue because the work is not conducted on a pay-per-annotation basis.…”
Section: Malicious Behaviour (mentioning)
confidence: 99%
“…Obtaining reliable results from non-experts is also a challenge for other crowdsourcing approaches, and in this context strategies for dealing with the issue have been discussed extensively [39,2,3,22].…”
Section: Annotation Quality (mentioning)
confidence: 99%
“…Regarding the money to be paid, there was not a consensus in previous research works. Aker et al (2012a) showed that high payments lead to better results, however Mason and Watts (2010) and Feng et al (2009) argued that higher payments attracted more spammers, thus resulting in a decrease of quality in the job performed. This was confirmed by the experiments proposed in Lloret et al (2013), where the amount of money paid for the same task was increased through small intervals.…”
Section: Crowdsourcing Evaluation (mentioning)
confidence: 99%
“…One strategy is to have multiple annotators independently agree on the annotation as measured using standard agreement metrics, in the task itself or in a pilot task, or by asking the crowd to validate the acquired annotations in a separate task (a two-stage annotation process), or adjusting the system's notion of trust of particular workers online (Sheng et al., 2008; Feng et al., 2009). Different thresholds can be set to determine correctness of the output with an arbitrarily high probability (von Ahn and Dabbish, 2004; Vickrey et al., 2008; Snow et al., 2008).…”
Section: Annotation Quality (mentioning)
confidence: 99%
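
The agreement-based strategy quoted above (independent annotators plus a correctness threshold) can be illustrated with a minimal sketch; the function and parameter names below are hypothetical and not taken from the paper or the cited works:

```python
from collections import Counter

def aggregate_labels(labels, min_agreement=0.7):
    """Majority-vote aggregation over independent worker annotations.

    Returns (winning_label, agreement_rate); the label is None when the
    agreement rate falls below the threshold, signalling that the item
    should be re-annotated or validated in a second-stage task.
    (Hypothetical sketch, not the framework described in the paper.)
    """
    if not labels:
        return None, 0.0
    winner, votes = Counter(labels).most_common(1)[0]
    agreement = votes / len(labels)
    return (winner if agreement >= min_agreement else None), agreement

# Example: five workers classify one AOL query
print(aggregate_labels(["shopping", "shopping", "news", "shopping", "shopping"]))
# -> ('shopping', 0.8)
```

Raising min_agreement (or collecting more independent labels per item) trades coverage for precision, which is the thresholding idea the quoted statement refers to: with enough annotators and a strict enough threshold, the probability that an accepted label is correct can be made arbitrarily high.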