2023
DOI: 10.1371/journal.pone.0279720

Data quality in online human-subjects research: Comparisons between MTurk, Prolific, CloudResearch, Qualtrics, and SONA

Abstract: With the proliferation of online data collection in human-subjects research, concerns have been raised over the presence of inattentive survey participants and non-human respondents (bots). We compared the quality of the data collected through five commonly used platforms. Data quality was indicated by the percentage of participants who meaningfully respond to the researcher’s question (high quality) versus those who only contribute noise (low quality). We found that compared to MTurk, Qualtrics, or an undergr…

Citation Types: 2 supporting, 71 mentioning, 0 contrasting

Cited by 259 publications (119 citation statements)
References 42 publications
“…Fewer reports of mind wandering, media-related multitasking, and higher task performance provide convergent evidence for greater attentional engagement in the Prolific sample compared to the MTurk sample. These findings corroborate observations from other studies showing significantly greater engagement in Prolific samples as indicated by more correct responses to attention check questions 38,41,54. A novel contribution of the present study, however, is its use of attention task performance and self-reports of disengagement, in terms of mind wandering and multitasking during an ongoing task to assess attentional engagement, instead of relying on attention to survey instructions or question wording, as in previous studies.…”
Section: Discussion (supporting)
confidence: 91%
“…Further replications are, nevertheless, advised. We also acknowledge that Study 5 is the only one that was carried out online, and while attention checks were included to assess fidelity and research indicates that data captured via Prolific are of good quality (Douglas et al., 2023), this does remove some of the methodological controls present during in-person testing. Future replications are therefore recommended, particularly to examine whether Study 5 findings translate into objective measures of affect.…”
Section: Discussion (mentioning)
confidence: 99%
“…About 50 participants were required for the main effect of tweet type; we then oversampled to ensure enough participants to test interaction effects and covariates. CloudResearch was chosen due to high data quality in our own past studies and higher quality relative to other data vendors 43. To ensure high-quality participants, we recruited participants to the baseline cohort by requiring a 99%-100% HIT approval rating and completion of at least 1,000 HITs, age over 18, and being within the United States.…”
Section: Methods (mentioning)
confidence: 99%