2019
DOI: 10.1080/13645579.2018.1563966

How serious is the ‘carelessness’ problem on Mechanical Turk?

Cited by 67 publications (41 citation statements)
References 16 publications
“…Although researchers often do not report HIT requirements, it is important to note that the current research used HIT qualifications that were less stringent than previous recommendations (Peer, Vosgerau, & Acquisti, 2014), which could have impacted the generalizability of our results. However, the percentage of participants failing validity indicators in Waves 3 and 4 is similar to informal reports and recent publications (Aruguete et al., 2019; Courrégé, Skeel, Feder, & Boress, 2019; Dreyfuss, 2018). Moreover, the results were nearly identical in an additional Wave (4a) of data collected using HIT requirements commonly reported in the literature.…”
Section: Discussion (supporting)
confidence: 88%
“…Recently, such pools have been under question for the reliability of the data collected from MTurk workers (e.g. Aruguete et al., 2019; Rouse, 2015). Additionally, researchers have highlighted the concern of professional participants, or "super workers," and bots in these sample pools (Chandler, Mueller, & Paolacci, 2014; Chmielewski & Kucker, 2019; Dennis et al., 2019; Fort, Adda, & Cohen, 2011; Stewart et al., 2015).…”
Section: Recruit From a Reliable Source (mentioning)
confidence: 99%
“…Then, we used a questionnaire for data reliability validation and weights for consistency comparison, except in the case of the HITAR. However, we needed to consider the casual observation characteristics of MTurk workers, because in this experiment there was no HITAR constraint [100]. Therefore, we added five questions that induced the participants to enter as many words as possible in the LSA index of the DRW [65].…”
Section: Observation Of Future Reward Distribution Effect (mentioning)
confidence: 99%