2013
DOI: 10.14236/ewic/sohuman2013.2
Effects of Expertise Assessment on the Quality of Task Routing in Human Computation

Abstract: Linguistic field research depends on collecting phrases and sentences as well as their geographical and social characteristics. The traditional method of field research, researchers asking questions and filling in forms, is time-consuming, costly, and not free of biases. This article presents metropolitalia, a Web-based crowdsourcing platform for linguistic field research that aims at overcoming some of the drawbacks of the traditional approach. metropolitalia is built upon Agora, a market for trading wi…

Cited by 15 publications (15 citation statements)
References 19 publications (17 reference statements)
“…Task assignment, i.e., the intelligent matching of tasks with the most appropriate workers, is a fundamental challenge of crowdsourcing [20][21][22]. Although there have been several studies on conventional crowdsourcing task allocation, they cannot be directly applied to spatial crowdsourcing, because the locations of the spatial tasks and of the workers are vital to the result of spatial task assignment.…”
Section: Related Work
confidence: 99%
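The location sensitivity described in this excerpt can be illustrated with a minimal sketch. The greedy nearest-worker strategy, the flat (x, y) coordinate model, and all names below are illustrative assumptions, not the method of any cited paper; real spatial-crowdsourcing systems also handle travel budgets, deadlines, and worker capacities.

```python
import math

def assign_tasks(tasks, workers):
    """Greedily assign each spatial task to the nearest still-free worker.

    tasks and workers map an id to an (x, y) location (hypothetical data
    model). Returns a dict {task_id: worker_id}.
    """
    free = dict(workers)          # workers not yet assigned to a task
    assignment = {}
    for task_id, (tx, ty) in tasks.items():
        if not free:
            break                 # more tasks than available workers
        # pick the free worker with the smallest Euclidean distance
        best = min(free, key=lambda w: math.hypot(free[w][0] - tx,
                                                  free[w][1] - ty))
        assignment[task_id] = best
        del free[best]
    return assignment

tasks = {"t1": (0.0, 0.0), "t2": (5.0, 5.0)}
workers = {"w1": (4.0, 4.0), "w2": (1.0, 0.0)}
print(assign_tasks(tasks, workers))  # {'t1': 'w2', 't2': 'w1'}
```

Note how the result depends entirely on the coordinates: a conventional (non-spatial) allocator that ignored locations could not distinguish w1 from w2 here.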
“…As common practice [26], we adopt assessment tasks to quantify reliability: in each game, we compute player reliability as a function of the number of errors the player makes on those assessment tasks. The reliability value is 1 if the player makes no mistakes and decreases toward zero with increasing errors.…”
Section: Atomic Tasks and Truth Inference
confidence: 99%
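A reliability function with the boundary behaviour this excerpt describes can be sketched as follows. The exponential decay and the `penalty` steepness parameter are illustrative assumptions; the excerpt only specifies that reliability is 1 with no mistakes and decreases toward zero with increasing errors.

```python
import math

def reliability(num_errors, num_assessments, penalty=2.0):
    """Map a player's assessment-task errors to a score in (0, 1].

    Returns 1.0 for an error-free player and decays toward 0 as the
    error fraction grows. The exponential form is an assumption made
    here for illustration, not the cited paper's formula.
    """
    if num_assessments == 0:
        return 1.0  # no evidence yet; treated as fully reliable (assumption)
    error_rate = num_errors / num_assessments
    return math.exp(-penalty * error_rate)
```

For example, `reliability(0, 10)` is exactly 1.0, while each additional error lowers the score, so noisier players contribute less weight at truth-inference time.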
“…In [13], test questions created from a generalized knowledge base are used to estimate the reliability of new workers. Their results suggest that this approach performs better than using gold-standard tasks. The automated selection of knowledge-base questions for quality control [12] used a hybrid approach of self-rating and gold-standard tasks for estimating the expertise of workers; however, self-assessment does not ensure high accuracy on the actual tasks.…”
Section: Related Work
confidence: 99%