2019 · DOI: 10.1108/ijcs-06-2019-0017

Quality assessment in crowdsourced classification tasks

Abstract: Purpose – Ensuring quality is one of the most significant challenges in microtask crowdsourcing. Aggregating the data collected from the crowd is an important step in inferring the correct answer, but existing studies appear limited to single-step tasks. This study aims to examine multiple-step classification tasks and understand aggregation in such cases; it is therefore useful for assessing classification quality. Design/methodology/approach – The authors present a model to capture the i…
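To make the abstract's idea of aggregation over multiple-step classification concrete, here is a minimal sketch, assuming each worker submits a complete label path through a category hierarchy, of path-level majority voting (one of the baseline aggregators named in the citing work below). The function name and data layout are hypothetical illustrations, not the paper's exact method.

```python
from collections import Counter

def aggregate_majority_path(worker_paths):
    """Path-level majority voting: treat each worker's full label path
    (e.g. ("animal", "bird", "sparrow")) as one atomic vote and return
    the most common path. Hypothetical sketch, not the paper's method."""
    votes = Counter(tuple(p) for p in worker_paths)
    path, _ = votes.most_common(1)[0]
    return list(path)

# Example: three workers classify one item through a two-step hierarchy.
answers = [
    ["vehicle", "car"],
    ["vehicle", "car"],
    ["vehicle", "truck"],
]
print(aggregate_majority_path(answers))  # ['vehicle', 'car']
```

Voting on whole paths, rather than on each step independently, guarantees the aggregated answer is a consistent path through the hierarchy.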

Cited by 3 publications (1 citation statement) | References 41 publications (52 reference statements)
“…Bu et al [20] propose a graph model to handle both single- and multiple-step classification tasks and try to "infer the correct label path". They present an "adapted aggregation method" for three existing inference algorithms, namely 'majority voting', 'expectation-maximization' [21] and 'message-passing' [22].…”
Section: Related Work (citation type: mentioning)
confidence: 99%
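The citation statement lists expectation-maximization [21] among the baseline inference algorithms. For context, the following is a minimal sketch, assuming single-step categorical labels, of the classic Dawid-Skene EM aggregator that reference [21] describes; the function name and data layout are my own, and the adapted multi-step version from Bu et al is not reproduced here.

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """Minimal Dawid-Skene EM for single-step labels.

    labels: dict mapping (item, worker) -> observed class index.
    Returns (items, T) where T[i] is item i's posterior over true classes.
    Generic sketch of the EM baseline [21], not the paper's adapted method.
    """
    items = sorted({i for i, _ in labels})
    workers = sorted({w for _, w in labels})
    i_idx = {i: k for k, i in enumerate(items)}
    w_idx = {w: k for k, w in enumerate(workers)}

    # Initialize posteriors from raw vote fractions per item.
    T = np.full((len(items), n_classes), 1e-6)
    for (i, w), c in labels.items():
        T[i_idx[i], c] += 1.0
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class priors and per-worker confusion matrices,
        # weighted by the current posteriors.
        prior = T.mean(axis=0)
        conf = np.full((len(workers), n_classes, n_classes), 1e-6)
        for (i, w), c in labels.items():
            conf[w_idx[w], :, c] += T[i_idx[i]]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: recompute posteriors given priors and confusions.
        logT = np.tile(np.log(prior), (len(items), 1))
        for (i, w), c in labels.items():
            logT[i_idx[i]] += np.log(conf[w_idx[w], :, c])
        logT -= logT.max(axis=1, keepdims=True)  # numerical stability
        T = np.exp(logT)
        T /= T.sum(axis=1, keepdims=True)
    return items, T
```

Unlike majority voting, this aggregator learns a confusion matrix per worker, so reliable workers are weighted more heavily when inferring the true class.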