Non-technical summary

Rising administration costs and falling response rates mean that many surveys that would previously have been carried out in one preferred mode of data collection are having to consider the use of mixed modes. For example, increasing numbers of surveys use a mix of modes, starting with a cheaper mode (such as telephone interviewing), which typically produces lower response rates, and following up non-respondents with face-to-face interviews. In order to decide on suitable data collection designs, survey practitioners must assess the trade-off between the potential advantages (for example, in terms of financial costs and response rates) and disadvantages (for example, in terms of data comparability) of mixing modes. We discuss some of the challenges in evaluating the effects of using mixed modes on measurement and hence on data comparability. The main argument is that it is very difficult to provide the information survey practitioners need about whether, and to what extent, using mixed modes would affect substantive conclusions. We briefly review theories about why different modes can lead to differences in survey responses. We then discuss the methods typically used to assess mode effects on measurement, before focusing on some of the challenges.
These include: 1) the need to avoid confounding effects, and the question of what kinds of mode effects are actually identified; 2) the sensitivity of conclusions about the existence of mode effects to the statistical methods used for the analysis of experimental mode comparison data; 3) the difficulty of assessing whether measurement differences matter in practice; and 4) the assessment of which mode provides better measurement. The main focus of the paper is on analysis methods. The points raised for discussion here arose in the context of the European Social Survey (ESS), which is conducting a programme of experimental research to inform the decision about whether to allow telephone interviewing in addition to face-to-face interviewing in its future rounds. We use some examples from the ESS experiments to illustrate how we tried to deal with these issues and to stimulate discussion. The paper concludes with an outlook on how the findings from the experimental studies are informing the decision about whether or not to mix modes of data collection on the ESS, and with general implications for mixed-modes research.

Assessing the Effect of Data Collection Mode on Measurement
Annette Jäckle, Car...
A persistent problem in the design of bipolar attitude questions is whether or not to include a middle response alternative. On the one hand, it is reasonable to assume that people might hold opinions which are 'neutral' with regard to issues of public controversy. On the other, question designers suspect that offering a mid-point may attract respondents with no opinion, or those who lean to one side of an issue but do not wish to incur the cognitive costs required to determine a directional response. Existing research into the effects of offering a middle response alternative has predominantly used a split-ballot design, in which respondents are randomly assigned to conditions which offer or omit a mid-point. While this body of work has been useful in demonstrating that offering or excluding a mid-point substantially influences the answers respondents provide, it does not offer any clear resolution to the question of which format yields more accurate data. In this paper, we use a different approach. We use follow-up probes administered to respondents who initially select the mid-point to determine whether they selected this alternative in order to indicate opinion neutrality, or to indicate that they do not have an opinion on the issue. We find that the vast majority of these responses turn out to be what we term 'face-saving don't knows', and that reallocating them from the mid-point to the don't know category significantly alters descriptive and multivariate inferences. Counter to the survey-satisficing perspective, we find that this tendency is greatest amongst those who express more interest in the topic area.
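The reallocation step described in this abstract can be illustrated with a minimal sketch. The data, the 5-point scale, and the share of probed mid-point selectors classified as "face-saving don't knows" are all simulated assumptions for illustration, not figures from the paper; the point is simply that moving probed mid-point cases into a missing "don't know" category shifts descriptive estimates.

```python
import numpy as np

# Hypothetical 5-point bipolar attitude item (1-5, with 3 as the mid-point).
# All values below are simulated; the 0.8 probe rate is an assumption.
rng = np.random.default_rng(0)
responses = rng.integers(1, 6, size=1000).astype(float)

# Follow-up probe: suppose most mid-point choices turn out to be
# "face-saving don't knows" rather than genuine neutrality.
is_midpoint = responses == 3
probe_says_no_opinion = is_midpoint & (rng.random(1000) < 0.8)

# Reallocate those cases from the mid-point to missing ("don't know").
reallocated = responses.copy()
reallocated[probe_says_no_opinion] = np.nan

# Descriptive inferences are now based only on substantive answers.
print("mean, all responses:     ", round(responses.mean(), 3))
print("mean, after reallocation:", round(np.nanmean(reallocated), 3))
```

In a real application the reallocated cases would also be treated as missing (or modelled separately) in multivariate analyses, which is where the abstract reports the inferential differences arising.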
No abstract
Opinion pollsters, political scientists and democratic theorists have long been concerned with the normative and methodological implications of nonattitudes (Converse, 1964). Of the proposed remedies to the weak and labile attitudinal responses proffered by an uninformed and disinterested public, perhaps the most ambitious to date has been Fishkin's concept of the deliberative poll (Fishkin, 1991; 1997). Combining probability sampling with information intervention and increased deliberation affords a unique insight into what might be considered the true 'voice of the people'. Yet while deliberative polling draws heavily on the general notion of political sophistication (Luskin, 1987), empirical analyses have tended to focus almost entirely on how the process of deliberation impacts on marginal totals of attitude items at both the individual and aggregate level (Fishkin, 1997; Luskin, Fishkin and Jowell, 2002; Sturgis, 2003). Little attention, in contrast, has been paid to outcomes that relate to other dimensions of opinion quality, such as attitude constraint. Constraint refers to the level of consistency between attitudes within an individual belief system which arises from a combination of logical, social and psychological factors (Converse, 1964). In this paper we analyse data from five deliberative polls conducted in the UK in the 1990s to investigate the impact of political information and deliberation on attitude constraint. Across a broad range of issue areas we evaluate the extent to which the deliberative process impacts on statistical associations amongst attitude items between the first and subsequent waves of the polls. We conclude by discussing the implications of our results for the validity and reliability of survey measures of attitudes and the broader utility of the deliberative polling method as a tool of social scientific enquiry.
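One common way to operationalise the "statistical associations amongst attitude items" mentioned here is the average inter-item correlation within a set of related items, compared across waves. The sketch below is a simulated illustration of that general idea, not the paper's actual analysis or data: the item loadings and sample size are assumptions chosen so that wave-2 answers cohere more tightly around an underlying disposition.

```python
import numpy as np

# Simulated illustration of attitude constraint as average inter-item
# correlation, before (wave 1) and after (wave 2) deliberation.
rng = np.random.default_rng(1)

def mean_inter_item_corr(items: np.ndarray) -> float:
    """Average off-diagonal correlation among columns (attitude items)."""
    r = np.corrcoef(items, rowvar=False)
    return float(r[np.triu_indices_from(r, k=1)].mean())

n = 500
latent = rng.normal(size=n)  # underlying disposition on the issue area

# Wave 1: answers only weakly organised around the disposition (loading 0.3).
wave1 = latent[:, None] * 0.3 + rng.normal(size=(n, 4))
# Wave 2: after information and deliberation, tighter coherence (loading 0.8).
wave2 = latent[:, None] * 0.8 + rng.normal(size=(n, 4))

print("constraint, wave 1:", round(mean_inter_item_corr(wave1), 3))
print("constraint, wave 2:", round(mean_inter_item_corr(wave2), 3))
```

A rise in this statistic between waves would be read as increased constraint; the paper assesses whether such increases actually occur across issue areas in the five UK deliberative polls.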
Herbert Simon’s (1956) concept of satisficing provides an intuitive explanation for the reasons why respondents to surveys sometimes adopt response strategies that can lead to a reduction in data quality. As such, the concept rapidly gained popularity among researchers after it was first introduced to the field of survey methodology by Krosnick and Alwin (1987), and it has become a widely cited buzzword linked to different forms of response error. In this article, we present the findings of a systematic review involving a content analysis of journal articles published in English-language journals between 1987 and 2015 that have drawn on the satisficing concept to evaluate survey data quality. Based on extensive searches of online databases, and an initial screening exercise to apply the study’s inclusion criteria, 141 relevant articles were identified. Guided by the theory of survey satisficing described by Krosnick (1991), the methodological features of the shortlisted articles were coded, including the indicators of satisficing analyzed, the main predictors of satisficing, and the presence of main or interaction effects on the prevalence of satisficing involving indicators of task difficulty, respondent ability, and respondent motivation. Our analysis sheds light on potential differences in the extent to which satisficing theory holds for different types of response error, and highlights a number of avenues for future research.