2018
DOI: 10.1002/jts.22303
Crowdsourcing Trauma: Psychopathology in a Trauma‐Exposed Sample Recruited via Mechanical Turk

Abstract: Although crowdsourcing websites like Amazon's Mechanical Turk (MTurk) allow researchers to conduct research efficiently, it is unclear whether MTurk and traditionally recruited samples are comparable when assessing the sequelae of traumatic events. We compared responses to validated self-report measures of posttraumatic stress disorder (PTSD) and related constructs given by 822 participants who were recruited via MTurk and had experienced a DSM-5 Criterion A traumatic event to responses obtained in recent sampl…

Cited by 64 publications (58 citation statements) | References 48 publications
“…Fourth, to improve the MTurk data quality, we used validity checks and excluded individuals who were missing too much data (Aust, Diedenhofen, Ullrich, & Musch, 2013; Buhrmester et al., 2011; Oppenheimer et al., 2009). Further, the extent of our sample truncation (48%) is comparable to what has been reported in other MTurk trauma studies (i.e., 57%; van Stolk‐Cooke et al., 2018). However, such procedures could have created a potential selection bias in our study, which may limit the generalizability of findings.…”
Section: Discussion (supporting)
confidence: 85%
“…As a crowd‐sourcing platform, MTurk involves a requester/researcher posting a study (i.e., a Human Intelligence Task) that is accessible to qualified participants (i.e., workers); participants are then compensated for the completed tasks (Rouse, 2015). MTurk's subject pool (1) is diverse compared to traditional internet‐recruited samples; (2) is representative of the general U.S. population on several demographic characteristics; (3) generates reliable data (Buhrmester, Kwang, & Gosling, 2011; Contractor & Weiss, 2019; Mischra & Carleton, 2017; Shapiro, Chandler, & Mueller, 2013); and (4) has demonstrated utility for trauma research in capturing individuals with PTSD severity in a cost‐ and time‐effective manner (van Stolk‐Cooke et al., 2018). That being said, data collection via the internet (e.g., MTurk) may include sample biases because of self‐selection (Kraut et al., 2004); limited control over the research environment (e.g., no opportunity to clarify questions; Kraut et al., 2004); and deception, such as attempting the survey multiple times or faking study eligibility (Hauser, Paolacci, & Chandler, 2019).…”
Section: Methods (mentioning)
confidence: 99%
“…Recruitment occurred via Amazon's Mechanical Turk (MTurk), an online survey platform. Samples recruited via MTurk provide valid clinical and community data that are comparable to trauma‐exposed samples recruited via traditional methods (Shapiro et al., 2013; van Stolk‐Cooke et al., 2018). Such samples are more demographically diverse than typical community cohorts (Buhrmester et al., 2011).…”
Section: Participants and Procedures (mentioning)
confidence: 98%