Amazon Mechanical Turk (MTurk) is widely used by behavioral scientists to recruit research participants. MTurk offers advantages over traditional student subject pools, but it also has important limitations. In particular, the MTurk population is small and potentially overused, and some groups of interest to behavioral scientists are underrepresented and difficult to recruit. Here we examined whether online research panels can avoid these limitations. Specifically, we compared sample composition, data quality (measured by effect sizes, internal reliability, and attention checks), and the non-naivete of participants recruited from MTurk and Prime Panels—an aggregate of online research panels. Prime Panels participants were more diverse in age, family composition, religiosity, education, and political attitudes. Prime Panels participants also reported less exposure to classic protocols and produced larger effect sizes, but only after screening out several participants who failed a screening task. We conclude that online research panels offer a unique opportunity for research, yet one with some important trade-offs.

Electronic supplementary material: The online version of this article (10.3758/s13428-019-01273-7) contains supplementary material, which is available to authorized users.
In this study, we examined data quality among Amazon Mechanical Turk (MTurk) workers based in India, and the effect of monetary compensation on their data quality. Recent studies have shown that work quality is independent of compensation rates, and that compensation primarily affects the quantity but not the quality of work. However, the results of these studies were generally based on compensation rates below the minimum wage, and far below a level likely to play a practical role in workers' lives. In this study, compensation rates were set around the minimum wage in India. To examine data quality, we developed the squared discrepancy procedure, a task-based quality-assurance approach for identifying inattentive participants in survey tasks. We showed that data quality is directly affected by compensation rates for India-based participants. We also found that data quality was lower among India-based than among US-based participants, even when optimal payment strategies were utilized. We additionally showed that the motivation of MTurk users has shifted: monetary compensation is now reported to be the primary reason for working on MTurk among both US- and India-based workers. Overall, MTurk is a constantly evolving marketplace where multiple factors can contribute to data quality. High-quality survey data can be acquired on MTurk among India-based participants when an appropriate pay rate is provided and task-specific quality-assurance procedures are utilized.
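The abstract does not spell out how the squared discrepancy procedure is computed, so the following is only a minimal sketch of one plausible implementation: score each participant by the mean squared discrepancy between their item responses and the sample's item means, then flag participants whose score is an outlier. The function names, the z-score cutoff, and the use of item means as the reference are all assumptions for illustration, not the authors' published procedure.

```python
import statistics

def squared_discrepancy_scores(responses):
    """For each participant, compute the mean squared discrepancy
    between their item responses and the sample's item means.
    `responses` is a list of equal-length lists of numeric answers,
    one inner list per participant."""
    n_items = len(responses[0])
    item_means = [statistics.mean(r[i] for r in responses) for i in range(n_items)]
    return [
        statistics.mean((r[i] - item_means[i]) ** 2 for i in range(n_items))
        for r in responses
    ]

def flag_inattentive(responses, z_cutoff=1.5):
    """Flag participants whose discrepancy score lies more than
    `z_cutoff` sample standard deviations above the mean score.
    The cutoff is an illustrative choice, not a published value."""
    scores = squared_discrepancy_scores(responses)
    mu = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return [s > mu + z_cutoff * sd for s in scores]
```

For example, five participants answering a five-item scale consistently and one answering in a zigzag pattern would leave only the zigzag responder flagged.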
Purpose: This study assesses distress/anxiety, and predictors of distress/anxiety, associated with quarantine due to COVID-19 exposure among the first quarantined community in the US, and identifies potential areas of intervention.
Design: An anonymous survey was distributed via community-organization distribution lists to approximately 1,250 constituents under a quarantine directive.
Setting: Members of the first community in the NYC area under quarantine orders due to the 2020 COVID-19 outbreak.
Intervention: We sought to uncover the most salient predictors of distress/anxiety in order to recommend specific areas for effective intervention to reduce distress.
Measures: We measured distress with the Subjective Units of Distress Scale and anxiety with the Beck Anxiety Inventory. A variety of psychosocial predictors relevant to the current crisis were explored.
Results: 303 individuals responded within 48 hours of survey distribution. Mean levels of distress in the sample were heightened and sustained, with 69% reporting moderate to severe distress. Modifiable behavioral factors, specifically media exposure and sleep quality, predicted the largest percentage of variance in the sample (41.9%; F(3, 264) = 40.7, R = 0.65, p < .001).
Conclusion: Distress levels were markedly elevated among those in quarantine. The highest percentage of distress/anxiety variance was accounted for by modifiable factors amenable to behavioral and psychological interventions, including promoting healthy sleep and curtailing media use. Access to professional mental health care, as well as behavioral interventions, should be prioritized.
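The "percentage of variance" reported above is the R² of an ordinary least-squares regression of distress on the behavioral predictors. As a minimal sketch of how that quantity is computed (the predictor matrix and data here are hypothetical, not the study's data):

```python
import numpy as np

def variance_explained(X, y):
    """Fit OLS y ~ X (with an intercept column added) and return R^2,
    the proportion of variance in y explained by the predictors."""
    X1 = np.column_stack([np.ones(len(y)), X])   # prepend intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)                # residual sum of squares
    ss_tot = float(((y - y.mean()) ** 2).sum())  # total sum of squares
    return 1.0 - ss_res / ss_tot
```

With predictors such as media-exposure hours and a sleep-quality score as columns of `X` and distress scores as `y`, an R² of 0.419 would correspond to the 41.9% of variance reported in the abstract.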
Mechanical Turk (MTurk) is a common source of research participants within the academic community. Despite MTurk’s utility and benefits over traditional subject pools, some researchers have questioned whether it is sustainable. Specifically, some have asked whether MTurk workers are too familiar with manipulations and measures common in the social sciences, the result of many researchers relying on the same small participant pool. Here, we show that concerns about non-naivete on MTurk are due less to the MTurk platform itself and more to the way researchers use the platform. Specifically, we find that there are at least 250,000 MTurk workers worldwide and that a large majority of US workers are new to the platform each year and therefore relatively inexperienced as research participants. We describe how inexperienced workers are excluded from studies, in part, because of the worker reputation qualifications researchers commonly use. Then, we propose and evaluate an alternative approach to sampling on MTurk that allows researchers to access inexperienced participants without sacrificing data quality. We recommend that in some cases researchers should limit the number of highly experienced workers allowed in their study by excluding these workers or by stratifying sample recruitment based on worker experience levels. We discuss the trade-offs of different sampling practices on MTurk and describe how the above sampling strategies can help researchers harness the vast and largely untapped potential of the Mechanical Turk participant pool.
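The stratified-recruitment idea above can be sketched in a few lines: partition workers into experience bands (e.g., by completed-HIT count) and draw a fixed quota from each band. The band boundaries, quota sizes, and function names below are illustrative assumptions, not the authors' specific procedure.

```python
import random

def stratified_recruit(workers, quotas, seed=0):
    """Recruit a sample stratified by worker experience level.

    `workers` maps worker_id -> completed-HIT count; `quotas` maps an
    (inclusive low, exclusive high) experience band -> desired sample
    size for that band. Returns the recruited worker ids."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    sample = []
    for (low, high), n in quotas.items():
        band = [w for w, hits in workers.items() if low <= hits < high]
        sample.extend(rng.sample(band, min(n, len(band))))
    return sample
```

For instance, `quotas = {(0, 100): 20, (100, 10_000): 20}` would cap highly experienced workers at half the sample while guaranteeing slots for relatively inexperienced ones.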