2018
DOI: 10.1111/1475-6765.12278

Can the online crowd match real expert judgments? How task complexity and coder location affect the validity of crowd‐coded data

Abstract: Crowd‐coding is a novel technique that allows for fast, affordable and reproducible online categorisation of large numbers of statements. It combines judgements by multiple, paid, non‐expert coders to avoid miscodings. It has been argued that crowd‐coding could replace expert judgements, using the coding of political texts as an example in which both strategies produce similar results. Since crowd‐coding yields the potential to extend the replication standard to data production and to ‘scale’ coding schemes …
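
The core idea named in the abstract — combining several paid, non-expert judgements of the same statement so that individual miscodings are outvoted — can be illustrated with a small aggregation sketch. The helper below, aggregate_crowd_codes, its field names and its majority-vote rule are illustrative assumptions for exposition, not the paper's actual procedure.

```python
from collections import Counter, defaultdict

def aggregate_crowd_codes(judgements, min_coders=3):
    """Aggregate multiple non-expert codings per statement by majority vote.

    `judgements` is an iterable of (statement_id, coder_id, code) tuples.
    Statements judged by fewer than `min_coders` distinct coders are skipped,
    reflecting the idea that a single judgement is unreliable on its own.
    Illustrative sketch only; not the procedure used in the paper.
    """
    by_statement = defaultdict(list)
    for statement_id, coder_id, code in judgements:
        by_statement[statement_id].append((coder_id, code))

    aggregated = {}
    for statement_id, votes in by_statement.items():
        coders = {coder_id for coder_id, _ in votes}
        if len(coders) < min_coders:
            continue  # not enough independent judgements
        counts = Counter(code for _, code in votes)
        majority_code, n = counts.most_common(1)[0]
        aggregated[statement_id] = {
            "code": majority_code,
            "agreement": n / len(votes),  # share of judgements backing the majority code
        }
    return aggregated

# Example: three coders label the position expressed in two statements.
judgements = [
    ("s1", "c1", "left"), ("s1", "c2", "left"), ("s1", "c3", "right"),
    ("s2", "c1", "right"), ("s2", "c2", "right"), ("s2", "c3", "right"),
]
print(aggregate_crowd_codes(judgements))
# s1 resolves to "left" with agreement ~0.67; s2 to "right" with agreement 1.0
```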

Cited by 9 publications (8 citation statements)
References 12 publications
“…Each RA is responsible for tracking government policy actions for at least one country. RAs were allocated on the basis of their background, language skills and expressed interest in certain countries [87]. Note that depending on the level of policy coordination at the national level, certain countries were assigned multiple RAs, for example, the United States, Germany and France.…”
Section: Data Collection Methodology
confidence: 99%
“…Studies of the quality of crowd-sourced data indicate that these worries are not entirely justified. Lind et al. (2017) find that crowd-sourced analyses from paid volunteers are comparable to analyses produced by five research assistants, but that there is variability across different types of tasks and within groups of volunteers. The complexity of the task the data comes from is especially central when determining the validity of crowd-sourced data (Horn 2018; Shing et al. 2018). The quality of crowd-sourced language data concerns primarily its reliability and ecological validity: whether the method provides results that are the same as those found with a different method, and whether the properties of the data can be said to be equal, i.e.…”
Section: Data Quality Concerns in Smartphone Research
confidence: 99%
“…According to Morris (1977:679), experts have special knowledge about the topic and the material to be coded. Crowd-coders are very lightly trained, have no expertise on the specific topic, and are typically recruited through platforms such as Crowdflower and Mechanical Turk (Cabrera and Reiner 2018; Horn 2018; Lind, Gruber, and Boomgaarden 2017). Trained coders stand somewhere between experts and crowd-coders and are generally prescreened for suitability and trained to complete the task.…”
Section: A Theoretical Framework of the Coding Ecosystem
confidence: 99%