Participant attentiveness is a concern for many researchers using Amazon's Mechanical Turk (MTurk). Although studies comparing the attentiveness of participants on MTurk versus traditional subject pool samples have provided mixed support for this concern, attention check questions and other methods of ensuring participant attention have proliferated in MTurk studies. Because MTurk is a population that learns, we hypothesized that MTurkers would be more attentive to instructions than are traditional subject pool samples. In three online studies, participants from MTurk and collegiate populations completed a task that included a measure of attentiveness to instructions (an instructional manipulation check; IMC). In all studies, MTurkers were more attentive to the instructions than were college students, even on novel IMCs (Studies 2 and 3), and MTurkers showed larger effects in response to a minute text manipulation. These results have implications for the sustainable use of MTurk samples for social science research and for the conclusions drawn from research with MTurk and college subject pool samples.
"God" and "Devil" are abstract concepts often linked to vertical metaphors (e.g., "glory to God in the highest," "the Devil lives down in hell"). It is unknown, however, whether these metaphors simply aid communication or implicate a deeper mode of concept representation. In 6 experiments, the authors examined the extent to which the vertical dimension is used in noncommunication contexts involving God and the Devil. Experiment 1 established that people have implicit associations between God-Devil and up-down. Experiment 2 revealed that people encode God-related concepts faster if presented in a high (vs. low) vertical position. Experiment 3 found that people's memory for the vertical location of God- and Devil-like images showed a metaphor-consistent bias (up for God; down for Devil). Experiments 4, 5a, and 5b revealed that people rated strangers as more likely to believe in God when their images appeared in a high versus low vertical position, and this effect was independent of inferences related to power and likability. These robust results reveal that vertical perceptions are invoked when people access divinity-related cognitions.
Researchers are concerned about whether manipulations have the intended effects. Many journals and reviewers view manipulation checks favorably, and they are widely reported in prestigious journals. However, the prototypical manipulation check is a verbal (rather than behavioral) measure that always appears at the same point in the procedure (rather than its order being varied to assess order effects). Embedding such manipulation checks within an experiment comes with problems. Although we conceptualize manipulation checks as measures, they can also act as interventions that initiate new processes which would otherwise not occur. The default assumption that manipulation checks do not affect experimental conclusions is unwarranted. They may amplify, undo, or interact with the effects of a manipulation. Further, the use of manipulation checks in mediational analyses does not rule out confounding variables, as any unmeasured variables that correlate with the manipulation check may still drive the relationship. Alternatives such as non-verbal and behavioral measures as manipulation checks and pilot testing are less problematic. Reviewers should view manipulation checks more critically, and authors should explore alternative methods to ensure the effectiveness of manipulations.
In this chapter, we outline the common concerns with MTurk as a participant pool, review the evidence for those concerns, and discuss solutions. We close with a table of considerations that researchers should make when fielding a study on MTurk.
Instructional manipulation checks (IMCs) have become popular tools for identifying inattentive participants in online studies. IMCs function by attempting to trick inattentive participants into responding incorrectly. However, from a conversational perspective, question characteristics are part of the researcher's contribution to the conversation, and IMCs may teach participants that there is "more than meets the eye," prompting systematic thinking on subsequent tricky-seeming questions in an attempt to avoid being tricked. In two online studies, participants responded to a simple task either before or after completing an IMC. As expected, answering an IMC prior to the task improved performance on items that benefit from increased systematic thinking: namely, the Cognitive Reflection Test (Study 1) and a probabilistic reasoning task (Study 2). We conclude that IMCs change attention rather than merely measure it, and we discuss implications for their use in online studies.