Proceedings of the ACM SIGKDD Workshop on Human Computation 2010
DOI: 10.1145/1837885.1837890
The anatomy of a large-scale human computation engine

Abstract: In this paper we describe Rabj, an engine designed to simplify collecting human input. We have used Rabj to collect over 2.3 million human judgments to augment data mining, data entry, and curation tasks at Freebase over the course of a year. We illustrate several successful applications that have used Rabj to collect human judgment. We describe how the architecture and design decisions of Rabj are affected by the constraints of content agnosticity, data freshness, latency, and visibility. We present work ai…
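The judgment-collection workflow the abstract describes can be pictured as a queue of tasks, each answered by several workers and resolved by agreement. The sketch below is a hypothetical illustration of that idea, not Rabj's actual API; the class and method names are invented.

```python
from collections import Counter

# Hypothetical sketch of a human-judgment queue: each task is shown to
# several workers and resolved by majority vote once enough judgments
# arrive. Names (JudgmentQueue, submit, judge) are illustrative only.
class JudgmentQueue:
    def __init__(self, judgments_needed=3):
        self.judgments_needed = judgments_needed
        self.votes = {}      # task_id -> list of worker answers
        self.resolved = {}   # task_id -> winning answer

    def submit(self, task_id):
        self.votes[task_id] = []

    def judge(self, task_id, answer):
        """Record one worker's answer; resolve on reaching the quorum."""
        self.votes[task_id].append(answer)
        if len(self.votes[task_id]) >= self.judgments_needed:
            winner, _ = Counter(self.votes[task_id]).most_common(1)[0]
            self.resolved[task_id] = winner

q = JudgmentQueue(judgments_needed=3)
q.submit("is_duplicate:topic_42")
for answer in ["yes", "yes", "no"]:
    q.judge("is_duplicate:topic_42", answer)
print(q.resolved["is_duplicate:topic_42"])  # -> yes
```

Redundant judgments per task are one simple way to trade cost for the data-quality constraints (freshness, visibility) the abstract mentions.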

Cited by 55 publications (33 citation statements)
References 9 publications (10 reference statements)
“…For example, a crowd of workers whose work is high-quality can be selected by qualifying tests or previous performance. These good workers are then kept for future tasks by paying an explicit bonus [76] or by forming a worker community around the system [37].…”
Section: Crowd Availability (mentioning; confidence: 99%)
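The worker-selection idea in the statement above (qualifying tests or previous performance) can be sketched as a filter over per-worker accuracy on gold-standard questions. The threshold, minimum-answer count, and record format here are assumptions for illustration.

```python
# Hedged sketch: keep workers whose accuracy on gold-standard (qualifying)
# questions clears a threshold. The 0.9 cutoff and the history format are
# invented for illustration, not taken from the cited systems.
def select_workers(history, min_accuracy=0.9, min_answered=10):
    """history maps worker_id -> (correct, answered) on gold questions."""
    trusted = []
    for worker, (correct, answered) in history.items():
        if answered >= min_answered and correct / answered >= min_accuracy:
            trusted.append(worker)
    return trusted

history = {"w1": (19, 20), "w2": (6, 12), "w3": (30, 30)}
print(select_workers(history))  # -> ['w1', 'w3']
```

A pool selected this way is then retained via bonuses or community-building, as the citing work describes.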
“…37 Victoria Elena believes that the mouse wants the tooth to build a very long ladder to reach the moon, which is made of cheese.…”
unclassified
“…Approaches include task design, especially getting the work done iteratively (e.g. find, fix, verify for correcting documents [8]), and/or rating workers through reputation schemes. Whilst it is possible to simply weed out bad work/workers, a more promising approach is to distinguish between scammers and genuine workers and to enable on-the-job learning so that genuine workers can improve.…”
Section: Crowdsourcing Literature (mentioning; confidence: 99%)
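The find-fix-verify pattern named in the statement above splits one task into three crowd stages: one crowd flags problem spans, another proposes fixes, and a third votes on each fix. The sketch below simulates the three stages with placeholder functions standing in for crowd calls; the heuristics are invented, not those of the cited system.

```python
# Minimal sketch of the find-fix-verify pattern: three staged passes over
# a text. Each stage function here is a simulated stand-in for a call out
# to a separate crowd of workers; the heuristics are illustrative only.
def find(text):
    # stage 1 (find): flag spans that need work -- here, long lowercase words
    return [w for w in text.split() if w.islower() and len(w) > 7]

def fix(span):
    # stage 2 (fix): propose a rewrite -- here, a crude truncation
    return span[:7]

def verify(original, proposed):
    # stage 3 (verify): accept a fix only if it actually changed the span
    return proposed != original

def find_fix_verify(text):
    for span in find(text):
        proposed = fix(span)
        if verify(span, proposed):
            text = text.replace(span, proposed)
    return text

print(find_fix_verify("crowdsourcing shortens wordy documents"))
```

Separating the stages keeps any single worker from both introducing and approving an error, which is the point the citing work makes about iterative task design.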
“…In [20] the authors thus reached the conclusion that an hourly payment was better (with some verification and time justification procedures).…”
(mentioning; confidence: 99%)