Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data 2015
DOI: 10.1145/2723372.2749430
QASCA

Abstract: A crowdsourcing system, such as the Amazon Mechanical Turk (AMT), provides a platform for a large number of questions to be answered by Internet workers. Such systems have been shown to be useful to solve problems that are difficult for computers, including entity resolution, sentiment analysis, and image recognition. In this paper, we investigate the online task assignment problem: Given a pool of n questions, which of the k questions should be assigned to a worker? A poor assignment may not only waste time a…
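The assignment question the abstract poses can be illustrated with a minimal sketch. This is a hypothetical scoring rule (not QASCA's actual quality-aware model): pick the k questions expected to benefit most from one more label, scoring each by its current answer uncertainty weighted by the worker's estimated accuracy. All names and the scoring formula are illustrative assumptions.

```python
import heapq

def assign_questions(questions, k, worker_accuracy):
    """Hypothetical greedy assignment: choose the k questions whose
    answers are least certain, weighted by this worker's estimated
    accuracy. Not QASCA's actual objective, just a sketch of the
    'which k of n questions?' decision."""
    def expected_gain(q):
        # Uncertainty is 1.0 when confidence is 0.5, 0.0 when it is 0 or 1.
        uncertainty = 1.0 - abs(q["confidence"] - 0.5) * 2
        return worker_accuracy * uncertainty
    return heapq.nlargest(k, questions, key=expected_gain)

pool = [
    {"id": 1, "confidence": 0.95},
    {"id": 2, "confidence": 0.55},
    {"id": 3, "confidence": 0.50},
    {"id": 4, "confidence": 0.80},
]
chosen = assign_questions(pool, k=2, worker_accuracy=0.9)
print([q["id"] for q in chosen])  # → [3, 2], the two least-certain questions
```

A real system would replace `expected_gain` with a model of how each candidate answer changes the expected quality of the final results, which is the optimization the paper studies.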

Cited by 161 publications (3 citation statements)
References 51 publications (129 reference statements)
“…In this context, early studies often consider only a single skill and solve problems such as how a worker's capability on that skill can be accurately estimated [20], and how employers can identify the most qualified worker for a skill through social networks [5]. More recently, a growing number of studies have examined scenarios where each worker possesses multiple skills [2,9,15,27,31,32]. Different algorithms have been proposed to match a task to the "right" worker who is most capable in the skill the task requires, while the estimate of each worker's capability on each skill is kept updated in an online fashion.…”
Section: Related Work
confidence: 99%
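The online capability estimation described above can be sketched with a simple running model. This is an illustrative assumption, not any cited paper's method: a Beta-Bernoulli estimate of each worker's per-skill accuracy, updated as labeled outcomes arrive, with the best worker for a skill chosen by current posterior mean. All class and method names are hypothetical.

```python
from collections import defaultdict

class SkillEstimator:
    """Running Beta-Bernoulli estimate of each worker's accuracy per
    skill (hypothetical sketch of online capability estimation)."""
    def __init__(self, alpha=1.0, beta=1.0):
        # (worker, skill) -> [correct count + alpha, incorrect count + beta]
        self.counts = defaultdict(lambda: [alpha, beta])

    def update(self, worker, skill, correct):
        """Fold one observed outcome into the running estimate."""
        self.counts[(worker, skill)][0 if correct else 1] += 1

    def accuracy(self, worker, skill):
        """Posterior-mean accuracy estimate for this worker and skill."""
        a, b = self.counts[(worker, skill)]
        return a / (a + b)

    def best_worker(self, workers, skill):
        """Match a task to the worker currently estimated most capable."""
        return max(workers, key=lambda w: self.accuracy(w, skill))

est = SkillEstimator()
for _ in range(8):
    est.update("alice", "image_tagging", correct=True)
est.update("alice", "image_tagging", correct=False)
for _ in range(3):
    est.update("bob", "image_tagging", correct=True)
    est.update("bob", "image_tagging", correct=False)
print(est.best_worker(["alice", "bob"], "image_tagging"))  # → alice
```

The Beta prior (`alpha`, `beta`) keeps estimates sensible for workers with few observations, which matters when estimates must be usable from the first assignment onward.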
“…However, all the above methods suffer from time-complexity issues, since they must solve optimization problems during inference. Meanwhile, on real-world crowdsourcing platforms, multiple time-critical functions such as task assignment and convergence monitoring need frequent access to the intermediate inference results [Zheng et al., 2015], so previous methods cannot fully satisfy real-world requirements. INQUIRE [Feng et al., 2014] first addresses this problem via a trivial annotator-modeling strategy and a straightforward weighted-vote question model to incrementally infer truths and update models.…”
Section: Related Work
confidence: 99%
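The incremental inference idea in the excerpt above can be sketched as follows. This is an illustrative weighted-vote scheme, not INQUIRE's exact model: each new label updates a question's running vote totals and the annotator's weight in O(1), with no global re-optimization. The weight-update rule and all names are assumptions for illustration.

```python
class IncrementalWeightedVote:
    """Illustrative incremental weighted-vote truth inference: each label
    updates running tallies in place, so current truth estimates are
    always available without re-solving an optimization problem."""
    def __init__(self):
        self.votes = {}    # question -> {answer: accumulated weight}
        self.weights = {}  # worker -> reliability weight

    def add_label(self, question, worker, answer):
        w = self.weights.get(worker, 1.0)
        tally = self.votes.setdefault(question, {})
        tally[answer] = tally.get(answer, 0.0) + w
        # Hypothetical update rule: reward workers who agree with the
        # current majority answer, capped at weight 2.0.
        if answer == self.truth(question):
            self.weights[worker] = min(2.0, w + 0.1)

    def truth(self, question):
        """Current estimated truth: the answer with the largest weight."""
        tally = self.votes.get(question)
        return max(tally, key=tally.get) if tally else None

inf = IncrementalWeightedVote()
inf.add_label("q1", "w1", "cat")
inf.add_label("q1", "w2", "dog")
inf.add_label("q1", "w3", "cat")
print(inf.truth("q1"))  # → cat
```

Because `truth` reads the running tallies directly, functions like task assignment or convergence monitoring can poll the current estimates at any time, which is the access pattern the excerpt says batch optimization methods cannot support efficiently.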
“…For instance, a study on quality-control mechanisms proposed a set of indicators and a general framework covering more types of microtasks, including those with open-ended answers [27]. This work obtained better outcomes than state-of-the-art methods such as the traditional analysis of historical performance in crowd work [28]. Furthermore, a supervised ML model was proposed that detects more types of crowd-worker profiles at a higher granularity [29].…”
Section: Introduction
confidence: 99%