2010
DOI: 10.1609/aaai.v24i1.7760
Decision-Theoretic Control of Crowd-Sourced Workflows

Abstract: Crowd-sourcing is a recent framework in which human intelligence tasks are outsourced to a crowd of unknown people ("workers") as an open call (e.g., on Amazon's Mechanical Turk). Crowd-sourcing has become immensely popular with hordes of employers ("requesters"), who use it to solve a wide variety of jobs, such as dictation transcription, content screening, etc. In order to achieve quality results, requesters often subdivide a large task into a chain of bite-sized subtasks that are combined into a complex, it…

Cited by 44 publications (26 citation statements) · References 13 publications
“…This gave us a list of 266,101 authors, along with the number of publications each has produced. Then, we randomly sampled 400 authors 11 and generated the corresponding number of synthetic papers for each author. After that, we constructed the citation graph using the rich-get-richer model [13].…”
Section: Methods
Mentioning confidence: 99%
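The quoted excerpt describes growing a synthetic citation graph with the rich-get-richer model, i.e., preferential attachment. A minimal Python sketch of that idea follows; the function name, the refs_per_paper parameter, and the +1 smoothing are illustrative assumptions, not the exact model of [13].

```python
import random

def build_citation_graph(num_papers, refs_per_paper=3, seed=0):
    """Grow a citation graph with a rich-get-richer (preferential
    attachment) rule: each new paper cites existing papers with
    probability roughly proportional to citations already received.

    A minimal illustrative sketch, not the exact model of [13].
    """
    rng = random.Random(seed)
    citations = {0: 0}   # paper id -> citation count so far
    edges = []           # (citing paper, cited paper)
    for new_paper in range(1, num_papers):
        pool = list(citations)
        # +1 smoothing lets papers with zero citations still be cited.
        weights = [1 + citations[p] for p in pool]
        k = min(refs_per_paper, len(pool))
        for cited in set(rng.choices(pool, weights=weights, k=k)):
            edges.append((new_paper, cited))
            citations[cited] += 1
        citations[new_paper] = 0
    return edges, citations
```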
“…For these communities, we define the bootstrapping process as a decision-theoretic optimization problem. Previous research has shown decision-theoretic optimization is useful in similar social computing contexts such as crowdsourcing [11]. Applying the decision-theoretic framework to model the bootstrapping problem allows us to estimate the utility of different operations and find a set of operations that are near-optimal for the community.…”
Section: Introduction
Mentioning confidence: 99%
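The excerpt frames bootstrapping as choosing operations by their estimated expected utility. A minimal greedy sketch of that style of decision-theoretic selection follows, assuming a caller-supplied utility estimator and per-operation costs; all names here are illustrative assumptions, not an API from the cited work.

```python
def select_operations(candidates, estimate_utility, budget):
    """Greedy decision-theoretic selection: repeatedly apply the
    candidate operation with the highest estimated marginal expected
    utility until the budget runs out or no operation helps.

    `candidates` maps operation name -> cost; `estimate_utility` is a
    caller-supplied model of expected utility given the operations
    chosen so far. Both are assumptions made for illustration.
    """
    chosen, spent = [], 0.0
    while True:
        affordable = {op: c for op, c in candidates.items()
                      if op not in chosen and spent + c <= budget}
        if not affordable:
            return chosen
        # Marginal expected utility of adding each affordable operation.
        gains = {op: estimate_utility(chosen + [op]) - estimate_utility(chosen)
                 for op in affordable}
        best = max(gains, key=gains.get)
        if gains[best] <= 0:   # no remaining operation improves utility
            return chosen
        chosen.append(best)
        spent += affordable[best]
```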
“…Furthermore, we compared the obtained accuracy of our judgment rule with that of majority voting. In the field of machine learning, several useful techniques to control the quality of crowdsourced tasks have been proposed, and more elaborate machine-learning-based methods for label aggregation exist (Dai, Mausam, and Weld 2010; Whitehill et al. 2009). However, to estimate the ability of workers, they require each worker to do many tasks.…”
Section: Experiments for Reward Plans
Mentioning confidence: 99%
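The excerpt contrasts majority voting with model-based aggregation that estimates worker ability. For reference, a minimal majority-voting baseline looks like the sketch below; the data layout is an assumption for illustration.

```python
from collections import Counter

def majority_vote(labels_by_task):
    """Aggregate redundant worker labels by simple majority voting,
    the baseline the quoted excerpt compares against. Worker-ability
    models (e.g., Dai, Mausam, and Weld 2010; Whitehill et al. 2009)
    replace this uniform vote with per-worker reliability estimates.

    `labels_by_task`: dict mapping task id -> list of worker labels.
    """
    return {task: Counter(labels).most_common(1)[0][0]
            for task, labels in labels_by_task.items()}

# Example: three workers label two content-screening tasks.
votes = {"task-1": ["spam", "spam", "ok"], "task-2": ["ok", "ok", "ok"]}
print(majority_vote(votes))   # {'task-1': 'spam', 'task-2': 'ok'}
```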
“…By themselves, traditional information retrieval techniques are insufficient for our human-assisted retrieval task. On the other hand, existing crowd-powered systems, including Soylent (Bernstein et al. 2010), Clowder (Dai, Mausam, and Weld 2010), TurKit (Little et al. 2009), and CrowdPlan (Law and Zhang 2011), do not address the problem of improving information retrieval. Of these, perhaps the most similar is the work on CrowdPlan (Law and Zhang 2011), where human workers assist in breaking down a high-level goal into smaller ones, and the operations performed by the human workers are similar to those proposed here.…”
Section: Introduction
Mentioning confidence: 99%