We examine the trade-offs associated with using Amazon.com's Mechanical Turk (MTurk) interface for subject recruitment. We first describe MTurk and its promise as a vehicle for performing low-cost and easy-to-field experiments. We then assess the internal and external validity of experiments performed using MTurk, employing a framework that can be used to evaluate other subject pools. We first investigate the characteristics of samples drawn from the MTurk population. We show that respondents recruited in this manner are often more representative of the U.S. population than in-person convenience samples (the modal sample in published experimental political science) but less representative than subjects in Internet-based panels or national probability samples. Finally, we replicate important published experimental work using MTurk samples.
Addressing fake news requires a multidisciplinary effort
Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss "Screeners" as a potential solution to this problem. We first demonstrate Screeners' power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple items to measure attention. We also show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers. We conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention. Good survey and experimental research requires subjects to pay attention to questions and treatments, but not all people pay close attention all of the time. When respondents do not read questions carefully, their responses on related survey items can appear to be unrelated; when subjects do not pay attention to experimental treatments, replications of classic experiments can produce null results. As self-administered surveys, both online and in the lab, continue to grow in popularity, problems arising from inattentive respondents will also grow. Researchers must consider how best to identify and handle inattentive respondents. Instructional Manipulation Checks (IMCs), or "Screeners," are a potential solution to this problem and are increasingly common in political science and psychology (Oppenheimer, Meyvis, and Davidenko 2009).
Many political scientists and policymakers argue that unmediated events—the successes and failures on the battlefield—determine whether the mass public will support military excursions. The public supports war, the story goes, if the benefits of action outweigh the costs of conflict. Other scholars contend that the balance of elite discourse influences public support for war. I draw upon survey evidence from World War II and the current war in Iraq to come to a common conclusion regarding public support for international interventions. I find little evidence that citizens make complex cost/benefit calculations when evaluating military action. Instead, I find that patterns of elite conflict shape opinion concerning war. When political elites disagree as to the wisdom of intervention, the public divides as well. But when elites come to a common interpretation of a political reality, the public gives them great latitude to wage war.
This article explores belief in political rumors surrounding the health care reforms enacted by Congress in 2010. Refuting rumors with statements from unlikely sources can, under certain circumstances, increase the willingness of citizens to reject rumors regardless of their own political predilections. Such source credibility effects, while well known in the political persuasion literature, have not been applied to the study of rumor. Though source credibility appears to be an effective tool for debunking political rumors, risks remain. Drawing upon research from psychology on "fluency" (the ease of information recall), this article argues that rumors acquire power through familiarity. Attempting to quash rumors through direct refutation may facilitate their diffusion by increasing fluency. The empirical results find that merely repeating a rumor increases its power.
This study investigated the cognitive processing of true and false political information. Specifically, it examined the impact of source credibility on the assessment of veracity when information comes from a polarizing source (Experiment 1), and the effectiveness of explanations when they come from one's own political party or an opposition party (Experiment 2). These experiments were conducted prior to the 2016 Presidential election. Participants rated their belief in factual and incorrect statements that President Trump made on the campaign trail; facts were subsequently affirmed and misinformation retracted. Participants then re-rated their belief immediately or after a delay. Experiment 1 found that (i) if information was attributed to Trump, Republican supporters of Trump believed it more than if it was presented without attribution, whereas the opposite was true for Democrats, and (ii) although Trump supporters reduced their belief in misinformation items following a correction, they did not change their voting preferences. Experiment 2 revealed that the explanation's source had relatively little impact, and that belief updating was more influenced by the perceived credibility of the individual who initially purported the information. These findings suggest that people use political figures as a heuristic to guide evaluation of what is true or false, yet do not necessarily insist on veracity as a prerequisite for supporting political candidates.
A number of electoral reforms have been enacted in the United States in the past three decades that are designed to increase turnout by easing restrictions on the casting of ballots. Both proponents and opponents of electoral reforms agree that these reforms should increase the demographic representativeness of the electorate by reducing the direct costs of voting, thereby increasing turnout among less-privileged groups who, presumably, are most sensitive to the costs of coming to the polls. In fact, these reforms have been greatly contested because both major political parties believe that increasing turnout among less-privileged groups will benefit Democratic politicians. I review evidence from numerous studies of electoral reform to demonstrate that reforms designed to make it easier for registered voters to cast their ballots actually increase, rather than reduce, socioeconomic biases in the composition of the voting public. I conclude with a recommendation that we shift the focus of electoral reform from an emphasis on institutional changes to a concentration on political engagement.
Public opinion polls appear to be a more inclusive form of representation than traditional forms of political participation. However, under certain circumstances, aggregate public opinion may be a poor reflection of collective public sentiment. I argue that it may be difficult to gauge true aggregate public sentiment on certain socially sensitive issues. My analysis of NES data from 1992 reveals that public opinion polls overstate support for government efforts to integrate schools. Specifically, selection bias models reveal that some individuals who harbor anti-integrationist sentiments are likely to hide their socially unacceptable opinions behind a "don't know" response. As an independent confirmation of the selection bias correction technique, I find that the same methods which predict that opinion polls understate opposition to school integration also predict the results of the 1989 New York City mayoral election more accurately than the marginals of preelection tracking polls.