Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/212

On the Cost Complexity of Crowdsourcing

Abstract: Existing efforts mainly rely on empirical analysis to evaluate the effectiveness of crowdsourcing methods, and such evaluations are often unreliable across experimental settings, so theoretical methods are of great importance. This work, for the first time, defines the cost complexity of crowdsourcing and presents two theorems for computing it. Our theorems provide a general theoretical method for modeling the trade-off between cost and quality, which can be used to evaluate and design crowdsourc…

Cited by 7 publications (4 citation statements) · References 13 publications
Citing publications: 2019, 2019, 2022, 2022
“…We build our algorithm and analysis around this noise model but show that our analysis can be adapted to handle the case where the noise rates are conditioned on class membership. These noise models have been extensively studied in crowdsourcing literature [4,7,11,13,18,21] and are usually attributed to Dawid and Skene [5].…”
Section: Previous Work
confidence: 99%
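The class-conditional noise model mentioned in this excerpt is concrete enough to sketch. Below is a minimal, hypothetical simulation in the Dawid and Skene spirit: each worker carries a confusion matrix whose rows condition on the true class, so noise rates differ by class. The specific worker matrices and the majority-vote aggregation are illustrative assumptions, not the cited papers' methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical worker confusion matrices (Dawid-Skene style):
# confusions[w, y] = distribution over reported labels given true class y.
# Worker 1's noise rate depends on the true class (class-conditional noise).
confusions = np.array([
    [[0.9, 0.1],   # worker 0, true class 0
     [0.1, 0.9]],  # worker 0, true class 1
    [[0.8, 0.2],   # worker 1, true class 0
     [0.4, 0.6]],  # worker 1, true class 1: much noisier
])

def crowd_labels(true_labels, confusions, rng):
    """Draw one noisy label per (item, worker) from each worker's confusion matrix."""
    n_items, n_workers = len(true_labels), confusions.shape[0]
    labels = np.empty((n_items, n_workers), dtype=int)
    for w in range(n_workers):
        for i, y in enumerate(true_labels):
            labels[i, w] = rng.choice(2, p=confusions[w, y])
    return labels

true_labels = rng.integers(0, 2, size=1000)
labels = crowd_labels(true_labels, confusions, rng)
majority = (labels.mean(axis=1) > 0.5).astype(int)  # ties resolve to class 0
print("majority-vote accuracy:", (majority == true_labels).mean())
```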
“…PAC learning in crowdsourcing. Feng et al. [7], in very recent work, develop PAC-style bounds for the cost complexity of learning an aggregation function that fits a crowd of workers with varying reliabilities. They focus on using PAC learning to train an aggregation function for the workers' labels; we, however, focus on using PAC learning to train a classifier that generalizes from worker labels.…”
Section: Previous Work
confidence: 99%
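To make the PAC-style flavor of such cost bounds concrete, here is a standard Hoeffding-based sufficient condition (an illustration under simplifying assumptions, not the actual theorems of Feng et al. [7]): if each worker independently answers correctly with probability p > 1/2, majority voting over k workers errs with probability at most exp(−2k(p − 1/2)²), so k ≥ ln(1/δ) / (2(p − 1/2)²) labels per item keep the per-item error below δ.

```python
import math

def workers_per_item(p: float, delta: float) -> int:
    """Hoeffding-based sufficient number of independent workers per item so
    that majority vote errs with probability at most delta, assuming each
    worker is correct with probability p > 1/2 (illustrative, simplified)."""
    assert p > 0.5 and 0.0 < delta < 1.0
    return math.ceil(math.log(1.0 / delta) / (2.0 * (p - 0.5) ** 2))

# Example: 0.7-accurate workers, 5% per-item error target.
k = workers_per_item(0.7, 0.05)
print(k, "labels per item")              # 38
print(k * 1000, "labels for 1000 items")
```

The total label budget then scales linearly in the number of items, which is the cost-versus-quality trade-off this kind of bound captures.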
“…We build our algorithm and analysis around this noise model but show that our analysis can be adapted to handle the case where the noise rates are conditioned on class membership. These noise models have been extensively studied in crowdsourcing literature (Cao et al. 2015; Fang et al. 2018; Kang and Tay 2018; Li, Yu, and Zhou 2013; Wang and Zhou 2015; Zhou, Chen, and Li 2014) and are usually attributed to Dawid and Skene (1979).…”
Section: Previous Work
confidence: 99%
“…A significant prerequisite for the use of supervised learning is the availability of large, well-labeled datasets. Crowdsourcing (Zheng et al. 2017; Fang et al. 2018; Tong et al. 2020) offers an affordable way to annotate data using freelance workers on internet platforms such as Amazon Mechanical Turk (AMT). With crowdsourced labels, a general rule of thumb is that annotations are often noisy because of unskilled or malevolent worker behavior.…”
Section: Introduction
confidence: 99%