2022
DOI: 10.1037/pha0000549
Are poor quality data just random responses?: A crowdsourced study of delay discounting in alcohol use disorder.

Abstract: Crowdsourced methods of data collection such as Amazon Mechanical Turk (MTurk) have been widely adopted in addiction science. Recent reports suggest an increase in poor quality data on MTurk, posing a challenge to the validity of findings. However, empirical investigations of data quality in addiction-related samples are lacking. In this study of individuals with alcohol use disorder (AUD), we compared poor quality delay discounting data to randomly generated data. A reanalysis of prior published delay discoun…
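A concrete way to read the abstract's comparison: random responding on a delay discounting task produces characteristic indifference-point patterns that systematic data should not. The sketch below simulates such a responder, assuming a standard adjusting-amount titration procedure; the task parameters, titration rule, and function name are illustrative assumptions, not the study's actual instrument.

import random

def random_indifference_points(n_delays=7, larger_later=100.0,
                               n_trials=6, seed=None):
    # Simulate a participant who answers every titration trial at random
    # (a coin flip between the immediate and the delayed reward).
    rng = random.Random(seed)
    points = []
    for _ in range(n_delays):
        immediate = larger_later / 2   # adjusting immediate amount
        step = larger_later / 4
        for _ in range(n_trials):
            if rng.random() < 0.5:
                immediate -= step      # "chose" immediate: offer less next
            else:
                immediate += step      # "chose" delayed: offer more next
            step /= 2
        points.append(immediate)
    return points

# Random responders' indifference points scatter around the midpoint
# and show no systematic decline with delay.
print(random_indifference_points(seed=1))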

Cited by 21 publications (22 citation statements)
References 12 publications
“…A key finding of this study is that nonsystematic responding during a DD task was a predictor of low-quality data across three independent quality check indexes (i.e., reporting daily cigarette usage with consistency; describing a choice task accurately; reporting individual and household income with consistency). A previous investigation in AUD reported that nonsystematic DD data were similar to randomly generated data on multiple metrics (Craft et al., 2022). The current finding supports the connection between nonsystematic DD and low-quality data.…”
Section: Discussion (supporting)
Confidence: 84%
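The consistency indexes quoted above amount to asking the same quantity twice and flagging disagreement. A minimal sketch of that idea, with hypothetical field names and tolerance (not the study's instrument):

def consistent_reports(first, second, tolerance=0):
    # True if two reports of the same quantity agree within tolerance.
    return abs(first - second) <= tolerance

# Example: daily cigarette usage asked at two points in the survey.
survey = {"cigs_per_day_q1": 20, "cigs_per_day_q2": 5}
print(consistent_reports(survey["cigs_per_day_q1"],
                         survey["cigs_per_day_q2"]))  # False -> flag record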
“…Violation of the second criterion suggests that reward valuation is not affected by delay. Excluding nonsystematic data is widely used in the field (Smith et al., 2018), and recent investigations have demonstrated that nonsystematic delay discounting is associated with poor data quality and random responding in online samples (Craft et al., 2022; Yeh et al., 2022). Prior to conducting analyses, we determined that 19 individuals failed attention check questions and 14 individuals violated one or more nonsystematic delay discounting criteria.…”
Section: Methods (mentioning)
Confidence: 99%
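The nonsystematic delay discounting criteria referenced here are commonly operationalized with the two Johnson and Bickel (2008) checks. A minimal sketch, assuming indifference points ordered from shortest to longest delay; the 20% and 10% bounds are the conventional values, and the function name is illustrative:

def is_nonsystematic(indiff_points, larger_later=100.0):
    # Criterion 1: any indifference point exceeds its predecessor by more
    # than 20% of the larger-later reward (non-monotonic responding).
    for earlier, later in zip(indiff_points, indiff_points[1:]):
        if later - earlier > 0.20 * larger_later:
            return True
    # Criterion 2: the last indifference point is not at least 10% of the
    # larger-later reward below the first, i.e., reward valuation is not
    # affected by delay (the violation described in the excerpt above).
    return indiff_points[0] - indiff_points[-1] < 0.10 * larger_later

# A flat series violates criterion 2: delay never changes valuation.
print(is_nonsystematic([95, 94, 95, 93, 94, 95]))  # True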
“…Conceptually related behavioral economic research shows the practical implications of these quality checks. Craft et al. (2022) find that delay-discounting data from participants failing systematicity checks did not differ from randomly generated data, highlighting the importance of screening and removing data with a priori validity checks. Freitas-Lemos et al. (2022) describe how a novel check based on instructional understanding differentiated participants on consistency of cigarette use reporting, responding on a cigarette demand task, and the relationship between use behavior and demand data.…”
Citation type: mentioning
Confidence: 84%