Many claims are being made about the advantages of conducting surveys on the Web. However, there has been little research on the effects of format or design on the levels of unit and item response or on data quality. In a study conducted at the University of Michigan, a number of experiments were added to a survey of the student population to assess the impact of design features on resulting data quality. A sample of 1,602 students was sent an e-mail invitation to participate in a Web survey on attitudes toward affirmative action. Three experiments on design approaches were added to the survey application. One experiment varied whether respondents were reminded of their progress through the instrument. In a second experiment, one version presented several related items on one screen, while the other version presented one question per screen. In a third experiment, for one series of questions a random half of the sample clicked radio buttons to indicate their answers, while the other half entered a numeric response in a box. This article discusses the overall implementation and outcome of the survey, and it describes the results of the embedded design experiments.
Election administrators and public officials often consider changes in electoral laws, hoping that these changes will increase voter turnout and make the electorate more reflective of the voting-age population. The most recent of these innovations is voting-by-mail (VBM), a procedure by which ballots are mailed to the address of every registered voter. Over the last two decades, VBM has spread across the United States, unaccompanied by much empirical evaluation of its impact on either voter turnout or the stratification of the electorate. In this study, we fill this gap in our knowledge by assessing the impact of VBM in one state, Oregon. We carry out this assessment at the individual level, using data over a range of elections. We argue that VBM does increase voter turnout in the long run, primarily by making it easier for current voters to continue to participate, rather than by mobilizing nonvoters into the electorate. These effects, however, are not uniform across all groups in the electorate. Although VBM in Oregon does not exert any influence on the partisan composition of the electorate, VBM increases, rather than diminishes, the resource stratification of the electorate. Contrary to the expectations of many reformers, VBM advantages the resource-rich by keeping them in the electorate, and VBM does little to change the behavior of the resource-poor. In short, VBM increases turnout, but it does so without making the electorate more descriptively representative of the voting-age population.
This paper presents the results of a project which validated the reported registration and voting behavior of respondents in a national election study. The accuracy of reported voting behavior in the 1976 general election is assessed in terms of the demographic characteristics of the respondents to the Center for Political Studies National Election Study as well as the extent of their participation in a survey panel begun in 1972. Increased levels of registration and turnout are observed in association with the number of interviews in which respondents participated, and three alternative social psychological models of the effects of preelection interviews are evaluated. Although the interview apparently served as a stimulus to voting, neither a model associated with self-concept theory nor alienation theory appears to explain the phenomenon adequately. The interview effect is significant and appears to be cumulative, indicating that researchers using the survey method with panel designs should be sensitive to the effects of their method on the behavior which they are trying to measure.
For some time, political scientists have been developing and testing models of electoral behavior based upon measurements containing a relatively large amount of error. Even when the underlying concepts implied very simple and straightforward measures of recall or reports of political participation, such as registration status or voting behavior, a variety of social psychological pressures were known to result in systematic overreports of eligibility and participation (Dinerman, 1948; Miller, 1952; Parry and Crosley, 1950; Cahalan, 1968). And the magnitude of the reporting errors has been substantial: self-reported participation rates have run 15 to 25 percentage points higher than validated ones (Clausen). This paper reports the results of validating the self-reported registration status and voting behavior of respondents in the 1976 and 1978 American National Election Studies. The results indicate that about one in seven respondents misreported their registration status or voting behavior. Comparative analyses using simple regression models examine whether differences in explanatory power arise between validated and self-reported dependent variables. The results show no major changes in the fundamental nature of the basic relationships that have been observed since the first surveys were conducted. Analysis of the effects of overreported participation on estimates of the partisan division of the vote in three sets of subnational contests reveals a likely "bandwagon" effect.
This article proposes a new measure of the predictive accuracy (A) of election polls that permits examination of both accuracy and bias, and it applies the new measure to summarize the results of a number of preelection polls. We first briefly review past measures of accuracy, then introduce the new measure. After the new measure is described, the general strategy is to apply it to three presidential elections (1948, 1996, and 2000) and to compare the results derived from it to the results obtained with the Mosteller measures. Then, the new measure is applied to the results of 548 state polls from gubernatorial and senatorial races in the 2002 elections to illustrate its application to a large body of preelection polls conducted in "off-year" races with different outcomes. We believe that this new measure will be useful as a summary measure of accuracy in election forecasts. It is easily computed and summarized, and it can be used as a dependent variable in multivariate statistical analyses of the nature and extent of biases that affect election forecasts and to identify their potential sources. It is comparable across elections with different outcomes and among polls that vary in their treatment or numbers of undecided voters.
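The abstract states that A is "easily computed" but does not reproduce the formula. A minimal sketch, assuming the common definition of this measure as the natural log of the odds ratio comparing the poll's split between the two leading candidates with the actual vote split (the function name and the two-candidate simplification are illustrative):

```python
import math

def predictive_accuracy(poll_a, poll_b, vote_a, vote_b):
    """Predictive accuracy A for a two-candidate race.

    Computes the natural log of the ratio between the poll's odds
    (candidate A's share over candidate B's) and the actual vote odds.
    A == 0 means the poll's odds exactly matched the outcome; the sign
    indicates the direction of bias (positive = the poll overstated
    candidate A), and |A| measures the size of the error.
    """
    return math.log((poll_a / poll_b) / (vote_a / vote_b))

# Example: a poll showing a 55-45 split in a race that ended 50-50
# yields a positive A, i.e., the poll overstated candidate A.
bias = predictive_accuracy(0.55, 0.45, 0.50, 0.50)
```

Because A is a single signed number that is comparable across races with different outcomes, it can serve directly as the dependent variable in the multivariate analyses of bias that the authors describe.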
One interpretation for the common survey finding that the background characteristics of vote overreporters resemble those of actual voters is that misreporters usually vote. This hypothesis, that misreporters regularly voted in earlier elections, is tested with data from the 1972-74-76 Michigan Election Panel. It receives no support: the 1972 and 1974 validated turnout of the 1976 misreporters was very low. Moreover, misreporting was a fairly stable respondent characteristic: misreporting about an election in one interview was correlated with misreporting about the remaining elections in each of the other two interviews. A comparison of regressions predicting turnout using the validated reports versus the self-reports shows that the respondent errors can distort conclusions about the correlates of voting. For example, controlling for three other variables, education was related to self-reported voting but not to validated voting. Here, as well as in surveys of other socially desirable or undesirable issues, respondent self-reports may bias survey data in favor of commonsense models of the world.

Although in principle there are many different ways to study the social world (Webb et al. 1981), in practice the findings of contemporary social science are based to a remarkable degree on the accounts people give of themselves.