Proceedings of the CHI Conference on Human Factors in Computing Systems 2024
DOI: 10.1145/3613904.3642834

If in a Crowdsourced Data Annotation Pipeline, a GPT-4

Zeyu He,
Chieh-Yang Huang,
Chien-Kuang Cornelia Ding
et al.

Abstract: Recent studies indicated that GPT-4 outperforms online crowd workers in data-labeling accuracy, notably workers from Amazon Mechanical Turk (MTurk). However, these studies were criticized for deviating from standard crowdsourcing practices and for emphasizing individual workers' performance over the whole data-annotation process. This paper compared GPT-4 against an ethical, well-executed MTurk pipeline, in which 415 workers labeled 3,177 sentence segments from 200 scholarly articles using the CODA-19 scheme. Two worker i…

Cited by 1 publication
References 43 publications