Proceedings of the ACM International Conference on Object Oriented Programming Systems Languages and Applications 2012
DOI: 10.1145/2384616.2384663

AutoMan: A Platform for Integrating Human-Based and Digital Computation

Abstract: Humans can perform many tasks with ease that remain difficult or impossible for computers. Crowdsourcing platforms like Amazon's Mechanical Turk make it possible to harness human-based computational power at an unprecedented scale. However, their utility as a general-purpose computational platform remains limited. The lack of complete automation makes it difficult to orchestrate complex or interrelated tasks. Scheduling more human workers to reduce latency costs real money, and jobs must be monitored and rescheduled…
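
The programming model described here treats a crowdsourced task as an ordinary function call that carries a budget and a target confidence level. The sketch below is illustrative only, assuming a Scala-embedded DSL in that spirit; the Crowd trait and its ask method are hypothetical stand-ins, not AutoMan's actual API.

    // Hypothetical sketch of a human task as a function call (not
    // AutoMan's real API): the runtime is assumed to post the question,
    // schedule and pay workers, and keep recruiting until the answer
    // reaches the requested confidence or the budget runs out.
    object CrowdSketch {
      case class Answer(value: String, confidence: Double)

      trait Crowd {
        def ask(text: String, options: Seq[String],
                budget: BigDecimal, confidence: Double): Answer
      }

      // Spam triage as a human-backed predicate.
      def isSpam(crowd: Crowd, subject: String): Boolean =
        crowd.ask(
          text = s"Is this email subject spam? '$subject'",
          options = Seq("yes", "no"),
          budget = BigDecimal("5.00"),
          confidence = 0.95
        ).value == "yes"
    }

Written this way, the caller never sees workers, postings, or retries; the scheduling and pricing decisions the abstract mentions live behind the function boundary.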

Cited by 71 publications (12 citation statements) · References 17 publications

“…However, by asking a sufficient number of workers to perform the same task independently, we are able to take the most common response as the solution and expect the majority answer to be of high quality. Barowy et al. [53] optimize the majority-voting approach by introducing an algorithm to estimate the confidence level of the responses that would be acceptable to the requester. Liu et al. [54] studied improvements using a quality-control mechanism relying on workers' past performances.…”
Section: Workflow Controls (mentioning)
Confidence: 99%
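The confidence estimation attributed to Barowy et al. [53] above can be illustrated with a small Monte Carlo sketch: accept the modal answer only when the observed agreement would be unlikely if workers answered uniformly at random. This is a rough approximation of the idea, with illustrative names, not the paper's actual algorithm.

    import scala.util.Random

    object MajorityConfidence {
      // Estimate P(modal count >= observed) when n workers guess
      // uniformly at random over k options, by simulation.
      def chanceAgreement(n: Int, k: Int, observed: Int,
                          trials: Int = 100000,
                          rng: Random = new Random(42)): Double = {
        var hits = 0
        for (_ <- 0 until trials) {
          val counts = new Array[Int](k)
          for (_ <- 0 until n) counts(rng.nextInt(k)) += 1
          if (counts.max >= observed) hits += 1
        }
        hits.toDouble / trials
      }

      // Accept the majority answer only when chance agreement is below
      // 1 - confidence; otherwise more workers should be scheduled.
      def acceptMajority[A](responses: Seq[A], numOptions: Int,
                            confidence: Double = 0.95): Option[A] = {
        val (answer, count) =
          responses.groupBy(identity)
                   .map { case (a, rs) => (a, rs.size) }
                   .maxBy(_._2)
        if (chanceAgreement(responses.size, numOptions, count) <= 1.0 - confidence)
          Some(answer)
        else
          None // schedule additional workers and retry
      }
    }

Under this rule, a two-option question at 0.95 confidence needs six unanimous responses before acceptance, since the chance of n random answers agreeing is (1/2)^(n-1), which first drops below 0.05 at n = 6.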
“…VOXPL modifications to AUTOMAN: AUTOMAN is a modular crowdprogramming framework with integrated automatic scheduling, payment, and quality control for any given multiple-choice question and confidence level [4,5]. VOXPL generalizes and subsumes the AUTOMAN crowdprogramming framework, leveraging its automatic scheduling and pricing algorithms.…”
Section: Related Work (mentioning)
Confidence: 99%
“…Later work takes a more statistical approach: many systems model latent variables like worker skill, worker speed, or task difficulty, or exploit correlations in the labels of similar inputs to predict accurate labels [9,35,29,8,10]. Several systems specifically address adversaries who may "game" the system by changing their behavior, treating the problem as a form of statistical noise rejection [4,31,27]. All of these approaches are designed with labeling in mind and do not provide quality guarantees for estimates.…”
Section: Quality Control (mentioning)
Confidence: 99%
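One way to make the latent-variable idea above concrete: instead of counting votes equally, weight each worker's vote by an estimated accuracy, as in a naive Bayes combination. A minimal sketch with hypothetical names, assuming per-worker accuracies have already been estimated (e.g., from gold-standard tasks):

    object WeightedVote {
      // Log-odds weight for a worker with estimated accuracy p on a
      // k-option question, assuming errors are uniform over the k - 1
      // wrong options. Requires 0 < p < 1.
      def weight(p: Double, k: Int): Double =
        math.log(p / ((1.0 - p) / (k - 1)))

      // Each vote pairs a proposed label with the voter's estimated
      // accuracy; the label with the greatest total weight wins.
      def predict[A](votes: Seq[(A, Double)], k: Int): A =
        votes.groupBy(_._1)
             .map { case (label, vs) => (label, vs.map(v => weight(v._2, k)).sum) }
             .maxBy(_._2)._1
    }

Systems in this family typically go further, re-estimating worker accuracies and labels jointly (e.g., by expectation-maximization) rather than taking accuracies as given.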
“…AutoMan [Barowy et al. 2012], an automatic crowdprogramming system, fits this metaphor well and integrates human computation tasks as function calls in a standard programming language, thereby blurring the lines between CPUs and human processors. Among the main problems with programming the global brain is the fundamental difference between humans and computers in terms of motivational, error, and cognitive diversity [Bernstein et al. 2012].…”
Section: Crowdsourcing (mentioning)
Confidence: 99%