“…As a crowd‐sourcing platform, MTurk involves a requester/researcher posting a study (i.e., a Human Intelligence Task) that is accessible to qualified participants (i.e., workers); participants are then compensated for completed tasks (Rouse, 2015). MTurk's subject pool (1) is diverse compared to traditional internet‐recruited samples; (2) is representative of the general U.S. population on several demographic characteristics; (3) generates reliable data (Buhrmester, Kwang, & Gosling, 2011; Contractor & Weiss, 2019; Mishra & Carleton, 2017; Shapiro, Chandler, & Mueller, 2013); and (4) has demonstrated utility for trauma research by capturing individuals with a range of PTSD severity in a cost‐ and time‐effective manner (van Stolk‐Cooke et al., 2018). That said, data collection via the internet (e.g., MTurk) may introduce sample biases because of self‐selection (Kraut et al., 2004); limited control over the research environment (e.g., no opportunity to clarify questions; Kraut et al., 2004); and deception, such as attempting the survey multiple times or faking study eligibility (Hauser, Paolacci, & Chandler, 2019).…”