Psychologists have worried about the distortions introduced into standardized personality measures by social desirability bias. Survey researchers have had similar concerns about the accuracy of survey reports about such topics as illicit drug use, abortion, and sexual behavior. This article reviews the research done by survey methodologists on reporting errors in surveys on sensitive topics, noting parallels with, and differences from, the psychological literature on social desirability. The findings from the survey studies suggest that misreporting about sensitive topics is quite common and that it is largely situational. The extent of misreporting depends on whether the respondent has anything embarrassing to report and on design features of the survey. The survey evidence also indicates that misreporting on sensitive topics is a more or less motivated process in which respondents edit the information they report to avoid embarrassing themselves in the presence of an interviewer or to avoid repercussions from third parties.
This study compared three methods of collecting survey data about sexual behaviors and other sensitive topics: computer-assisted personal interviewing (CAPI), computer-assisted self-administered interviewing (CASI), and audio computer-assisted self-administered interviewing (ACASI). Interviews were conducted with an area probability sample of more than 300 adults in Cook County, Illinois. The experiment also compared open and closed questions about the number of sex partners and varied the context in which the sex partner items were embedded. The three mode groups did not differ in response rates, but the mode of data collection did affect the level of reporting of sensitive behaviors: both forms of self-administration tended to reduce the disparity between men and women in the number of sex partners reported. Self-administration, especially via ACASI, also increased the proportion of respondents admitting that they had used illicit drugs. In addition, when the closed answer options emphasized the low end of the distribution, fewer sex partners were reported than when the options emphasized the high end of the distribution; responses to the open-ended versions of the sex partner items generally fell between responses to the two closed versions. Over the past two decades, two trends have transformed survey data collection in the United States; the first has been the introduction and widespread adoption of computerized tools for surveys.
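The response-scale pattern described here, with fewer partners reported when the closed options emphasize the low end of the distribution and more when they emphasize the high end, can be illustrated with a small toy model. The sketch below is purely hypothetical: the category midpoints, the uncertainty weight, and the assumption that an uncertain respondent treats the middle offered option as a norm are illustrative assumptions, not quantities from the study.

```python
# Hypothetical sketch of a scale-range effect: an uncertain respondent lets
# the offered response categories suggest what a "typical" answer looks like.
# All numbers here are illustrative assumptions, not values from the study.

def report(true_count, category_midpoints, uncertainty=0.5):
    """Blend the respondent's own best guess with the norm implied by the scale."""
    implied_norm = category_midpoints[len(category_midpoints) // 2]  # middle option
    return (1 - uncertainty) * true_count + uncertainty * implied_norm

low_emphasis_midpoints = [0, 1, 2, 3, 5]     # options cluster at the low end
high_emphasis_midpoints = [1, 3, 7, 15, 40]  # options cluster at the high end

for true_count in (1, 4, 10):
    print(true_count,
          round(report(true_count, low_emphasis_midpoints), 1),
          round(report(true_count, high_emphasis_midpoints), 1))
```

In this toy model, the same underlying count yields a lower report under the low-emphasis scale than under the high-emphasis scale, while a fully open-ended report would simply equal the respondent's own count, which is consistent with the direction of the pattern reported in the study.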
This article proposes contemporary best-practice recommendations for stated preference (SP) studies used to inform decision making, grounded in the accumulated body of peer-reviewed literature. These recommendations consider the use of SP methods to estimate both use and non-use (passive-use) values, and cover the broad SP domain, including contingent valuation and discrete choice experiments. We focus on applications to public goods in the context of the environment and human health but also consider ways in which the proposed recommendations might apply to other common areas of application. The recommendations recognize that SP results may be used and reused (benefit transfers) by governmental agencies and nongovernmental organizations, and that all such applications must be considered. The intended result is a set of guidelines for SP studies that is more comprehensive than that of the original National Oceanic and Atmospheric Administration (NOAA) Blue Ribbon Panel on contingent valuation, is more germane to contemporary applications, and reflects the two decades of research since that time. We also distinguish between practices for which accumulated research is sufficient to support recommendations and those for which greater uncertainty remains. The goal of this article is to raise the quality of SP studies used to support decision making and promote research that will further enhance the practice of these studies worldwide.
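For the discrete choice experiment branch of SP studies mentioned above, one quantity that such guidelines bear on is marginal willingness to pay, conventionally computed as the negative ratio of an attribute coefficient to the cost coefficient in a linear utility specification. The sketch below uses that standard ratio; the attribute names and coefficient values are invented for illustration and do not come from this article.

```python
# Minimal sketch of deriving marginal willingness to pay (WTP) from a discrete
# choice experiment, assuming a linear utility function with a cost term.
# Coefficient values below are invented for illustration only.

beta = {
    "cost": -0.08,          # disutility per dollar of cost
    "water_quality": 0.45,  # utility of a one-step quality improvement
    "access": 0.20,         # utility of improved access
}

def marginal_wtp(attribute, coefficients):
    """WTP for a one-unit change in an attribute: -beta_attribute / beta_cost."""
    return -coefficients[attribute] / coefficients["cost"]

for attr in ("water_quality", "access"):
    print(attr, round(marginal_wtp(attr, beta), 2), "dollars per unit")
```

In practice the coefficients would be estimated from observed choices (for example, by conditional logit), and the uncertainty in the ratio would be reported alongside the point estimate.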
We begin this article with the assumption that attitudes are best understood as structures in long-term memory, and we look at the implications of this view for the response process in attitude surveys. More specifically, we assert that an answer to an attitude question is the product of a four-stage process. Respondents first interpret the attitude question, determining what attitude the question is about. They then retrieve relevant beliefs and feelings. Next, they apply these beliefs and feelings in rendering the appropriate judgment. Finally, they use this judgment to select a response. All four of the component processes can be affected by prior items. The prior items can provide a framework for interpreting later questions and can also make some responses appear to be redundant with earlier answers. The prior items can prime some beliefs, making them more accessible to the retrieval process. The prior items can suggest a norm or standard of comparison for making the judgment. Finally, the prior items can create consistency pressures or pressures to appear moderate. Because of the multiple processes involved, context effects are difficult to predict and sometimes difficult to replicate. We attempt to sort out when context is likely to affect later responses and include a list of the variables that affect the size and direction of the effects of context.

Within social psychology, there is an emerging consensus that attitudes are best understood as structures that reside in long-term memory (Fazio, Sanbonmatsu, Powell, & Kardes, 1986; Fazio & Williams, 1986; Tourangeau, 1984, 1986, 1987; Tourangeau & Rasinski, 1986; Wyer & Hartwick, 1984) and are activated when the issue or object of the attitude is encountered (Fazio & Williams, 1986). The conventions that have been found useful for representing other information in long-term memory ought to be useful for representing attitudes as well. In our own work, we have found it useful to represent attitudes as networks of interrelated beliefs. Although we refer to the constituents of attitudes as beliefs, we use this term loosely to encompass memories of specific experiences, general propositions, images, and feelings. J. Anderson (1983) and Bower (1981) … on the symbols it evokes and the affect attached to these symbols. Other researchers have argued that attitudes are organized into schemata (Fiske & Dyer, 1985; Fiske & Kinder, 1981; Hastie, 1981) or stereotypes (Hamilton, 1981; Linville, 1982; Linville & Jones, 1980; see also Cantor & Mischel, 1977). But whether attitudes form network structures, schemata, stereotypes, or some combination of these, it is clear that the dimensional representation of attitude structure implicit in classical scaling techniques, such as Likert, Guttman, and Thurstone scaling, does not fully capture the important structural properties of attitudes. As we argue in this article, the structural assumptions…
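The four-stage model summarized above (interpretation, retrieval, judgment, response selection) can be made concrete with a schematic sketch. Everything below, including the function names, the toy "memory" of valenced beliefs, and the simple averaging rule for judgment, is a hypothetical illustration of where prior items could enter each stage, not an implementation from the article.

```python
# Schematic sketch of the four-stage response model: interpret the question,
# retrieve relevant beliefs, form a judgment, and map it onto a response scale.
# Names, data, and rules here are hypothetical illustrations only.

def interpret(question, prior_items):
    """Decide what the question is about; prior items can supply the frame."""
    return prior_items[-1] if prior_items else question

def retrieve(issue, memory, primed):
    """Primed beliefs (e.g., made accessible by earlier items) come to mind first."""
    beliefs = memory.get(issue, {})
    return sorted(beliefs.items(), key=lambda item: item[0] not in primed)

def judge(beliefs):
    """Integrate retrieved beliefs into an overall evaluation in [-1, 1]."""
    return sum(valence for _, valence in beliefs) / max(len(beliefs), 1)

def respond(judgment, scale=(1, 2, 3, 4, 5)):
    """Map the judgment onto the offered response scale."""
    index = round((judgment + 1) / 2 * (len(scale) - 1))
    return scale[index]

# Toy usage: prior items frame the issue and prime one belief.
memory = {"energy policy": {"jobs": 0.6, "pollution": -0.8, "cost": -0.2}}
issue = interpret("Do you favor the proposal?", prior_items=["energy policy"])
beliefs = retrieve(issue, memory, primed={"pollution"})
answer = respond(judge(beliefs))
print(issue, answer)
```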
Several researchers have begun this effort already. The post-survey adjustment methods applied to non-probability samples have largely mirrored efforts in probability samples. Although this may be appropriate and effective to some extent, further consideration of selection bias mechanisms may be needed. We believe an agenda for advancing a method must include these attributes.
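As one concrete example of a post-survey adjustment carried over from probability samples, the sketch below rakes a non-probability sample to known population margins via iterative proportional fitting. The respondent records and target margins are invented for illustration; as noted above, calibration of this kind does not by itself address the selection mechanism.

```python
# Minimal sketch of raking (iterative proportional fitting) a non-probability
# sample to known population margins. Records and margins are invented.

def rake(respondents, margins, iterations=50):
    """Return one weight per respondent so weighted category shares match targets."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for variable, targets in margins.items():
            total = sum(weights)
            shares = {c: sum(w for w, r in zip(weights, respondents)
                             if r[variable] == c) / total
                      for c in targets}
            for i, r in enumerate(respondents):
                category = r[variable]
                if shares[category] > 0:
                    weights[i] *= targets[category] / shares[category]
    return weights

respondents = [
    {"age": "18-34", "sex": "F"},
    {"age": "35+", "sex": "M"},
    {"age": "35+", "sex": "F"},
    {"age": "35+", "sex": "F"},
]
margins = {"age": {"18-34": 0.3, "35+": 0.7}, "sex": {"F": 0.5, "M": 0.5}}
print([round(w, 2) for w in rake(respondents, margins)])
```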