2022
DOI: 10.1109/jas.2022.105434
Knowledge Learning With Crowdsourcing: A Brief Review and Systematic Perspective

Abstract: Big data have the characteristics of enormous volume, high velocity, diversity, value-sparsity, and uncertainty, which make knowledge learning from them full of challenges. With the emergence of crowdsourcing, versatile information can be obtained on demand, so the wisdom of crowds can easily be brought in to facilitate the knowledge learning process. Over the past thirteen years, researchers in the AI community have made great efforts to remove the obstacles in the field of learning from crowds. This concentra…

Cited by 26 publications (8 citation statements) | References 95 publications
“…The results of our research indicated that there could be a reduction in false positive labels when using majority labeling compared to the labels used by an individual annotator (Table A3). Recent efforts have been made to outsource labeling to more annotators with less specialized experience as a way to reduce the time and cost of data gathering compared to sourcing and reimbursing field experts for the same tasks [44]. Several methods have been proposed to clean data labeled by multiple, less experienced annotators to obtain high-quality datasets efficiently, including majority-vote labeling [45, 46, 47].…”
Section: Discussion
confidence: 99%
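The majority-vote cleaning described in this statement is simple to implement: each item's final label is the one chosen by most of its annotators. Below is a minimal Python sketch; the function name and toy data are illustrative rather than taken from any cited paper.

```python
from collections import Counter

def majority_vote(labels_per_item):
    """Aggregate several annotators' labels per item by majority vote.
    Ties fall to the label Counter returns first among the most common."""
    aggregated = []
    for labels in labels_per_item:
        winner, _ = Counter(labels).most_common(1)[0]
        aggregated.append(winner)
    return aggregated

# Three annotators label four items; each aggregated label is the one
# a majority of annotators agreed on.
annotations = [
    ["cat", "cat", "dog"],
    ["dog", "dog", "dog"],
    ["cat", "dog", "cat"],
    ["dog", "cat", "cat"],
]
print(majority_vote(annotations))  # ['cat', 'dog', 'cat', 'cat']
```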
“…However, it is very prone to adversarial attacks and does not ensure the quality of crowdsourced answers. There are several methods for inferring the truth of tasks and for performing aggregation of submissions, such as [9, 15-18]. They are classified based on the underlying techniques as direct computation [15], probability-based methods [19], optimization methods [20], and neural network-based (NN) methods [21].…”
Section: Truth Inference and Quality Control in Crowdsourcing
confidence: 99%
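Of the probability-based truth-inference methods referenced in this statement, the Dawid-Skene model is the classic instance: it alternates between estimating each item's true-label posterior and each worker's confusion matrix via EM. The following is a hedged, minimal sketch of that idea, not the implementation of any cited method; the function name, smoothing constant, and toy data are assumptions made for illustration.

```python
import numpy as np

def dawid_skene(votes, n_classes, n_iter=50):
    """Dawid-Skene style EM: infer true labels from noisy worker votes.
    votes: integer array of (item, worker, label) triples.
    Returns each item's posterior distribution over the true label."""
    n_items = votes[:, 0].max() + 1
    n_workers = votes[:, 1].max() + 1

    # Initialize posteriors from per-item empirical label frequencies.
    post = np.zeros((n_items, n_classes))
    for i, w, l in votes:
        post[i, l] += 1
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: re-estimate class priors and worker confusion matrices
        # (a small constant avoids zero probabilities).
        prior = post.mean(axis=0)
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)
        for i, w, l in votes:
            conf[w, :, l] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute posteriors from priors and confusions.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for i, w, l in votes:
            log_post[i] += np.log(conf[w, :, l])
        log_post -= log_post.max(axis=1, keepdims=True)
        post = np.exp(log_post)
        post /= post.sum(axis=1, keepdims=True)

    return post

# Toy run: 3 items, 2 workers, binary labels; the workers disagree on item 2.
votes = np.array([[0, 0, 1], [0, 1, 1],
                  [1, 0, 0], [1, 1, 0],
                  [2, 0, 1], [2, 1, 0]])
print(dawid_skene(votes, n_classes=2).argmax(axis=1))
```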
“…However, none of these methods has the advantages of the recently proposed neuralized HMM-based graphical models [18, 19] and our Neural-Hidden-CRF in principled modeling of the variants of interest and in harnessing the context information provided by advanced deep learning models. Additionally, it is worth mentioning the numerous established weak supervision (WS) methods that address the standard independent classification scenario [3, 5, 43-45].…”
Section: Related Work
confidence: 99%
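The HMM-based aggregation these graphical models build on can be illustrated with a plain Viterbi decoder: a latent true tag sequence evolves under a transition matrix, and each worker's observed tags are modeled as noisy emissions through a per-worker confusion matrix. This is only a sketch of the underlying idea, not Neural-Hidden-CRF itself (which further couples the graphical model with a deep encoder); every parameter and sequence below is invented for illustration.

```python
import numpy as np

def viterbi_aggregate(worker_seqs, trans, confs, prior):
    """Decode the most likely true tag sequence from noisy worker tags.
    worker_seqs: (n_workers, seq_len) observed tags.
    trans: (K, K) tag transition probabilities.
    confs: list of (K, K) per-worker confusion matrices, true x observed.
    prior: (K,) initial tag distribution."""
    n_workers, T = worker_seqs.shape
    K = trans.shape[0]

    # Emission score per position: product of worker confusion likelihoods.
    log_emit = np.zeros((T, K))
    for t in range(T):
        for w in range(n_workers):
            log_emit[t] += np.log(confs[w][:, worker_seqs[w, t]])

    # Standard Viterbi recursion in log space.
    delta = np.log(prior) + log_emit[0]
    back = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(trans)  # (previous tag, current tag)
        back[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_emit[t]

    # Backtrace from the best final tag.
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy run: 2 tags, sticky transitions, two 80%-accurate workers.
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
conf = np.array([[0.8, 0.2], [0.2, 0.8]])
seqs = np.array([[0, 0, 1, 1, 1],
                 [0, 1, 1, 1, 0]])
print(viterbi_aggregate(seqs, trans, [conf, conf], np.array([0.5, 0.5])))
```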