2018
DOI: 10.1007/s11390-018-1823-6
Collusion-Proof Result Inference in Crowdsourcing

Cited by 14 publications (7 citation statements).
References 31 publications.
“…We set the threshold for ab_j at 0.5. The quality of the aggregated answers falls below that of an individual worker's answer when ab_j is less than 0.5 [12].…”
Section: Updating the Parameters
Mentioning, confidence: 93%
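The 0.5 threshold cited above follows directly from the behavior of majority voting over independent binary answers: below it, aggregation amplifies errors rather than cancelling them. A minimal sketch (hypothetical names; `ability` stands in for ab_j) comparing the accuracy of a majority vote over n independent workers with that of a single worker:

```python
from math import comb

def majority_vote_accuracy(ability: float, n_workers: int) -> float:
    """Probability that a majority of n independent binary answers,
    each correct with probability `ability`, is correct."""
    assert n_workers % 2 == 1, "odd n avoids ties"
    return sum(
        comb(n_workers, k) * ability**k * (1 - ability) ** (n_workers - k)
        for k in range(n_workers // 2 + 1, n_workers + 1)
    )

for ability in (0.4, 0.5, 0.6):
    agg = majority_vote_accuracy(ability, n_workers=5)
    # Below 0.5 the aggregate is worse than the individual worker
    # (e.g., 0.317 vs. 0.4); above 0.5 it is better, matching the
    # threshold described in the citing paper.
    print(f"ability={ability:.1f}  single={ability:.2f}  aggregated={agg:.3f}")
```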
“…1: Illustrating adversarial attacks in crowdsourcing systems, which comprises the main components, processes, and the attacking scenario in [11]. Moreover, the performance of the workers is closely related to the set of submissions of the workers [12]. We observe that incorporating these features and an iterative task allocation phase improves the performance of the system.…”
Section: Malicious Workers / Normal Workers
Mentioning, confidence: 97%
“…Answer Aggregation in Crowdsourcing. In two-stage approaches, the true labels are first inferred from the crowd labels using answer-aggregation methods (Chen et al. 2018, 2020a); general supervised learning methods are then applied with the inferred labels (Wang and Zhou 2016).…”
Section: Related Work
Mentioning, confidence: 99%
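As a concrete illustration of the two-stage pattern described in that excerpt, the sketch below (illustrative only, with majority vote standing in for the cited papers' more sophisticated aggregation methods) first infers one label per task from redundant crowd answers, then trains an ordinary supervised classifier on the inferred labels:

```python
import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stage 1: aggregate redundant crowd labels per task via majority vote.
def aggregate(crowd_labels):
    """crowd_labels: one inner list of worker answers per task."""
    return np.array([Counter(ans).most_common(1)[0][0] for ans in crowd_labels])

# Toy data: 200 tasks with 2-D features; true label = sign of first feature.
X = rng.normal(size=(200, 2))
true_y = (X[:, 0] > 0).astype(int)
# Each task receives 5 noisy worker answers (workers are 80% accurate).
crowd = [[y if rng.random() < 0.8 else 1 - y for _ in range(5)] for y in true_y]

# Stage 2: standard supervised learning on the inferred labels.
y_hat = aggregate(crowd)
clf = LogisticRegression().fit(X, y_hat)
print("train accuracy vs. true labels:", clf.score(X, true_y))
```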
“…, K}. The colluding workers are more malicious: they coordinate and try to steer the outcome of the majority vote [7], [8], [9]. In our experiment, a label is sampled from a uniform distribution over {1, .…”
Section: Robustness Against Harmful Workers
Mentioning, confidence: 99%
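To see how little coordination it takes to steer a majority vote, the hedged simulation below (hypothetical parameters, not the cited experimental setup) adds c colluders who all submit the same wrong label alongside honest-but-noisy workers; accuracy collapses once the colluders' bloc rivals the honest majority:

```python
import numpy as np

rng = np.random.default_rng(1)
K = 4              # labels in {0, ..., K-1}
n_tasks = 2_000
n_honest = 7       # honest workers, each correct with probability 0.7
honest_acc = 0.7

for n_colluders in range(6):
    correct = 0
    for _ in range(n_tasks):
        truth = rng.integers(K)
        votes = []
        for _ in range(n_honest):
            if rng.random() < honest_acc:
                votes.append(truth)
            else:
                # honest mistakes are spread uniformly over the wrong labels
                votes.append((truth + 1 + rng.integers(K - 1)) % K)
        # colluders agree on a single fixed wrong label per task
        votes.extend([(truth + 1) % K] * n_colluders)
        correct += np.bincount(votes, minlength=K).argmax() == truth
    print(f"colluders={n_colluders}: majority-vote accuracy={correct / n_tasks:.3f}")
```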
“…This is attributed to the significant variations in the abilities and motivations of the human workers, as well as the anonymity of crowdsourcing workers. In addition, there are spam workers who provide random answers without looking at the tasks [5], [6], as well as colluding workers who share their answers with other workers [7], [8], [9], which also contribute to the variability in the reliability. To reduce the impact of incorrect responses, several studies have attempted to improve the quality by aggregating multiple redundantly collected responses from different workers.…”
Section: Introduction
Mentioning, confidence: 99%
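The mitigation by redundancy mentioned in that excerpt can be made concrete: in the sketch below (illustrative parameters), spam workers answer uniformly at random, so their votes spread evenly across labels, and collecting more redundant answers per task restores accuracy. Collusion, by contrast, concentrates wrong votes and is not dampened this way, which is the gap the paper above targets.

```python
import numpy as np

rng = np.random.default_rng(2)
K = 3                   # labels in {0, ..., K-1}
n_tasks = 5_000
spam_fraction = 0.4     # 40% of collected answers come from spammers
reliable_acc = 0.75     # accuracy of the remaining reliable workers

def simulate(n_answers_per_task: int) -> float:
    correct = 0
    for _ in range(n_tasks):
        truth = rng.integers(K)
        votes = []
        for _ in range(n_answers_per_task):
            if rng.random() < spam_fraction:
                votes.append(rng.integers(K))   # spammer: uniform random label
            elif rng.random() < reliable_acc:
                votes.append(truth)             # reliable worker, correct
            else:
                votes.append((truth + 1 + rng.integers(K - 1)) % K)
        correct += np.bincount(votes, minlength=K).argmax() == truth
    return correct / n_tasks

for n in (1, 3, 5, 9):
    print(f"{n} redundant answers: accuracy={simulate(n):.3f}")
```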