2014
DOI: 10.15195/v1.a19

Comparing Data Characteristics and Results of an Online Factorial Survey between a Population-Based and a Crowdsource-Recruited Sample

Cited by 472 publications (389 citation statements).
References 23 publications.
“…In all of our studies, we recruited participants from the subject pool of Amazon's Mechanical Turk (MTurk), an online crowdsourcing service with large volumes of small web-based tasks offered to anonymous online workers for monetary compensation. MTurk allows behavioral experiments to be run comparatively quickly and inexpensively, provides access to a broad cross-section of the population, and has repeatedly been shown to have the capacity to produce highly valid data (60)(61)(62)(63)(64)(65). The research was approved by the Institutional Review Board of the University of Arizona, and participants voluntarily agreed to take part after reading a disclosure form for research participation.…”
Section: Methods (mentioning)
confidence: 99%
“…MTurk has become popular among social scientists, particularly for conducting survey experiments in the fields of psychology, political science, sociology, and health, among others (Campbell and Gaddis forthcoming; Dowling and Miller 2016; Horne et al. 2015). Researchers have praised MTurk for its relatively low cost and quick turnaround for data and have offered cautious optimism regarding generalizability (Horton, Rand, and Zeckhauser 2011; Weinberg, Freese, and McElhattan 2014). Moreover, on a number of dimensions, MTurk represents a superior alternative to using undergraduate students, a ubiquitous sample in experimental psychology (Sears 1986).…”
Section: Using Amazon's Mechanical Turk (mentioning)
confidence: 99%
“…Workers on MTurk also lean towards more liberal attitudes and opinions (Berinsky et al. 2012). There is some evidence that these demographic differences account for minimal differences in effect sizes between MTurk and other Internet survey platforms that claim representative samples (Weinberg et al. 2014). Moreover, careful checks of moderating demographic variables that are not representative of the United States in MTurk samples and/or weighting may alleviate concerns regarding external validity (Mullinix et al. 2016; Weinberg et al. 2014).…”
Section: Internal and External Validity with MTurk (mentioning)
confidence: 99%
“…Berinsky, Huber, and Lenz 2012; Goodman, Cryder, and Cheema 2012; Horton, Rand, and Zeckhauser 2011; Krupnikov and Levine 2014; Paolacci, Chandler, and Ipeirotis 2010; Weinberg, Freese, and McElhattan 2014). 3 While these studies are impressive and telling, each includes only a small number of comparisons (e.g., three experiments) on a limited set of issues (e.g., three or four) and topics (e.g., question wording, framing) with few types of samples (e.g., three) at different points in time (e.g., data were collected on distinct samples far apart in time).…”
mentioning
confidence: 99%