2017
DOI: 10.1080/19312458.2017.1317338

Content Analysis by the Crowd: Assessing the Usability of Crowdsourcing for Coding Latent Constructs

Abstract: Crowdsourcing platforms are commonly used for research in the humanities, social sciences and informatics, including the use of crowdworkers to annotate textual material or visuals. Utilizing two empirical studies, this article systematically assesses the potential of crowdcoding for less manifest contents of news texts, here focusing on political actor evaluations. Specifically, Study 1 compares the reliability and validity of crowdcoded data to that of manual content analyses; Study 2 proceeds to investigate…

Cited by 53 publications (44 citation statements)
References 37 publications
“…Given that coders are treated as interchangeable, any (potentially) remaining coder idiosyncrasies (either coder-specific systematic errors or random measurement errors) are in effect no longer considered, neither in the analyses nor in the interpretations of the findings (see , for a detailed discussion on this issue). When there is a sufficiently large number of coders, or each material is coded by multiple coders ("duplicated coding", as in some SML applications or in crowdcoding; see Lind, Gruber, & Boomgaarden, 2017; Scharkow, 2013), the impact of coder idiosyncrasies, especially random errors, would diminish, as they cancel each other out as the number of coders/duplicated coding instances increases. Nevertheless, remaining systematic errors in coder idiosyncrasies may still introduce bias in gold standard materials with respect to the target of inference, especially for data with a higher level of intercoder reliability (i.e., a systematic deviation from the true target).…”
Section: Design and Setup of Monte Carlo Simulations
confidence: 99%
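The cancellation argument above lends itself to a quick numerical check. The following minimal sketch uses assumed parameters (item count, bias, and error variance are invented), not the simulation design of the cited work: it averages codes from a growing number of coders whose judgements combine the true score, a shared systematic bias, and coder-specific random error. The random component shrinks as coders are added; the systematic one does not.

# Minimal Monte Carlo sketch (assumed parameters): random coder errors
# average out with more coders per item, a shared systematic bias does not.
import numpy as np

rng = np.random.default_rng(42)

n_items = 1_000
true_scores = rng.normal(0.0, 1.0, n_items)   # latent "true" evaluations
systematic_bias = 0.3                         # shared coder bias (does not cancel)
random_sd = 0.8                               # coder-specific random error

for n_coders in (1, 3, 10, 50):
    # Each item is coded by n_coders coders: code = truth + bias + noise.
    codes = (true_scores[:, None]
             + systematic_bias
             + rng.normal(0.0, random_sd, (n_items, n_coders)))
    aggregated = codes.mean(axis=1)           # average over duplicated codings
    rmse = np.sqrt(np.mean((aggregated - true_scores) ** 2))
    print(f"{n_coders:>3} coders per item: RMSE = {rmse:.3f}")
# The RMSE approaches the systematic bias (0.3) but never falls below it.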
“…Crowd-coding is hailed as a useful strategy but also viewed critically (Snow et al. 2008; Benoit et al. 2016; Lind et al. 2017; Dreyfuss 2018). Because Krippendorff's alpha was not higher for certain categories, we carried out additional analyses to see whether our results remain robust to the exclusion of certain workers.…”
Section: Crowd-coding of Open-ended Responses
confidence: 99%
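A robustness check of the kind described, recomputing intercoder reliability after excluding individual workers, could be sketched roughly as follows. The coding matrix, the category labels, and the use of the third-party krippendorff Python package are illustrative assumptions, not the authors' actual data or tooling.

# Sketch of a worker-exclusion robustness check for Krippendorff's alpha,
# using the third-party `krippendorff` package (pip install krippendorff).
import numpy as np
import krippendorff

# Rows = crowd workers, columns = open-ended responses; np.nan = not coded.
codes = np.array([
    [1, 2, 2, 1, 3, np.nan, 2,      1],
    [1, 2, 3, 1, 3, 2,      2,      1],
    [2, 2, 2, 1, 3, 2,      np.nan, 1],
    [1, 3, 2, 1, 1, 2,      2,      2],   # hypothetical low-quality worker
])

alpha_all = krippendorff.alpha(reliability_data=codes,
                               level_of_measurement="nominal")
alpha_excluded = krippendorff.alpha(reliability_data=codes[:3],
                                    level_of_measurement="nominal")
print(f"alpha, all workers:          {alpha_all:.2f}")
print(f"alpha, last worker excluded: {alpha_excluded:.2f}")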
“…This simple but powerful idea that good collective decisions can emanate from various averaged independent judgements of non-experts has long been discussed in academia, business and popular science (see Surowiecki 2004; Lehman & Zobel 2017). Yet, notwithstanding instructive earlier studies with positive conclusions regarding the validity of crowd-coded data (e.g., Berinsky et al. 2014; Haselmayer & Jenny 2016; Lind et al. 2017), it seems fair to say that crowd-coding is only starting to gain traction in political science at large, since Benoit et al. (2016) have convincingly argued that the results of expert judgements (still considered the gold standard by many, e.g., when it comes to the location of parties) can be matched with crowd-coding, at least for simple coding tasks. This is significant, since experts are expensive and in short supply, and automated (coding) methods are not yet good enough at extracting meaning (Benoit et al. 2016: 280).…”
Section: Introduction
confidence: 96%
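The averaging idea can be made concrete with a small sketch that aggregates several noisy crowd judgements by majority vote and checks agreement against an expert benchmark. The binary coding task, the per-coder error rate, and the coder counts are assumptions for illustration, not figures from Benoit et al. (2016).

# Minimal sketch (assumed error rate and coder counts): majority-voted crowd
# codes converge toward an expert benchmark as more coders judge each item.
import numpy as np

rng = np.random.default_rng(7)

n_sentences = 2_000
expert = rng.integers(0, 2, n_sentences)      # expert codes (0/1), the benchmark
p_error = 0.25                                # each crowd coder errs 25% of the time

for n_coders in (1, 5, 15):                   # odd counts avoid ties in the vote
    errs = rng.random((n_sentences, n_coders)) < p_error
    crowd = np.where(errs, 1 - expert[:, None], expert[:, None])
    majority = (crowd.mean(axis=1) > 0.5).astype(int)
    agreement = (majority == expert).mean()
    print(f"{n_coders:>2} crowd coders, majority vote: "
          f"{agreement:.1%} agreement with the expert benchmark")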