Good survey and experimental research requires subjects to pay attention to questions and treatments, but many subjects do not. In this article, we discuss "Screeners" as a potential solution to this problem. We first demonstrate Screeners' power to reveal inattentive respondents and reduce noise. We then examine important but understudied questions about Screeners. We show that using a single Screener is not the most effective way to improve data quality. Instead, we recommend using multiple items to measure attention. We also show that Screener passage correlates with politically relevant characteristics, which limits the generalizability of studies that exclude failers. We conclude that attention is best measured using multiple Screener questions and that studies using Screeners can balance the goals of internal and external validity by presenting results conditional on different levels of attention.

Good survey and experimental research requires subjects to pay attention to questions and treatments, but not all people pay close attention all of the time. When respondents do not read questions carefully, their responses on related survey items can appear to be unrelated; when subjects do not pay attention to experimental treatments, replications of classic experiments can produce null results. As self-administered surveys, both online and in the lab, continue to grow in popularity, problems arising from inattentive respondents will also grow. Researchers must consider how best to identify and handle inattentive respondents. Instructional Manipulation Checks (IMCs), or "Screeners," are a potential solution to this problem and are increasingly common in political science and psychology (Oppenheimer, Meyvis, and Davidenko 2009).
Whether public policy affects electoral politics is an enduring question with an elusive answer. We identify the impact of the highly contested Patient Protection and Affordable Care Act (ACA) of 2010 by exploiting cross-state variation created by the 2012 Supreme Court decision in National Federation of Independent Business v. Sebelius. We compare changes in registration and turnout following the expansion of Medicaid in January of 2014 to show that counties in expansion states experienced higher political participation than similar counties in nonexpansion states. Importantly, the increases we identify are concentrated in counties with the largest percentage of eligible beneficiaries. The effect on voter registration persists through the 2016 election, but an impact on voter turnout is evident only in 2014. Despite the partisan politics surrounding the ACA, a political environment that differs markedly from the social programs that produced policy feedbacks in the past, our evidence is broadly consistent with claims that social policy programs can produce some political impacts, at least in the short term.
Inattentive respondents introduce noise into data sets, weakening correlations between items and increasing the likelihood of null findings. "Screeners" have been proposed as a way to identify inattentive respondents, but questions remain regarding their implementation. First, what is the optimal number of Screeners for identifying inattentive respondents? Second, what types of Screener questions best capture inattention? In this paper, we address both of these questions. Using item-response theory to aggregate individual Screeners, we find that four Screeners are sufficient to identify inattentive respondents. Moreover, two grid and two multiple-choice questions work well. Our findings have relevance for applied survey research in political science and other disciplines. Most importantly, our recommendations enable the standardization of Screeners on future surveys.