Survey methodologists have drawn on and contributed to research by cognitive psychologists, conversation analysts, and others to lay a foundation for the science of asking questions. Our discussion of this work is structured around the decisions that must be made for two common types of inquiries: questions about events or behaviors and questions that ask for evaluations or attitudes. The issues we review for behaviors include definitions, reference periods, response dimensions, and response categories. The issues we review for attitudes include bipolar versus unipolar scales, number of categories, category labels, don't know filters, and acquiescence. We also review procedures for question testing and evaluation.
We consider models that underlie two proposals to estimate nonparticipation bias. The first model posits a "continuum of resistance," placing people who were interviewed during the first contact at one end of the continuum and nonparticipants at the other. The second model assumes that there are different classes of nonparticipants and that similar classes can be found among participants; it then uses groups of participants thought to resemble nonparticipants to estimate the characteristics of nonparticipants. We examine the justification for these models of the relationship between participants and nonparticipants and consider how well the proposed methods based on these models describe nonparticipants and the impact of nonparticipation on survey estimates. Our case study concerns estimates of mean child support awards and payments in Wisconsin. We find that neither model is successful and that the versions of the methods we use do not detect the true extent of nonparticipation error in estimates based on the unadjusted sample mean. This failure occurs both for an external measure that is not contaminated with response errors and for self-reports. Moreover, response errors, which are not considered in the models we have found in the literature, substantially worsen matters.
The present article attempts to overcome some of the problems involved in estimating race-of-interviewer effects in a nonexperimental national survey. Individual items as well as scales were examined using General Social Survey (GSS) data. Race-of-interviewer effects large enough to justify the practice of matching interviewer and respondent race for interviews on racial topics were found for both black and white respondents. A few such effects were found for nonracial items among black respondents, but the range of items involved is smaller than what has been reported in previous studies. The impact of race-of-interviewer effects on mean estimates in the GSS appears to be small for white respondents, owing to the small proportion of cross-race interviews. The proportion of cross-race interviews among black respondents is larger and more variable over the years, and the impact of race-of-interviewer effects should be considered when analyzing items that show these effects.