The complex sample design requires that the selection probabilities and the field implementation be accounted for when estimating population parameters. The data set contains weights that compensate for differential probabilities of selection and differential response rates among demographic groups. Analysts should use the weights when constructing estimates from the survey and should account for the complex sample design when estimating standard errors for those estimates.
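As a minimal illustration of why the weights matter for point estimates, the sketch below (with hypothetical values and weights, not data from the survey) computes a design-weighted mean and contrasts it with the unweighted mean; the two diverge whenever the weights are correlated with the outcome. Standard errors would additionally require the design information (strata, clusters, or replicate weights), which is not shown here.

```python
import numpy as np

# Hypothetical respondent data: an outcome (e.g., income) and design
# weights compensating for unequal selection probabilities and nonresponse.
values = np.array([30_000.0, 45_000.0, 52_000.0, 28_000.0, 61_000.0])
weights = np.array([1.2, 0.8, 1.5, 2.0, 0.9])

# Design-weighted mean: each respondent counts in proportion to the
# number of population units they represent.
weighted_mean = np.sum(weights * values) / np.sum(weights)

# Unweighted mean for comparison.
unweighted_mean = values.mean()

print(round(weighted_mean, 2))    # 40765.62
print(round(unweighted_mean, 2))  # 43200.0
```

Here the high-weight, low-income respondents pull the weighted estimate below the unweighted one, which is exactly the kind of bias the weights are designed to correct.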
To avoid asking respondents questions that do not apply to them, surveys often use filter questions that determine routing into follow-up items. Filter questions can be asked in an interleafed format, in which follow-up questions are asked immediately after each relevant filter, or in a grouped format, in which follow-up questions are asked only after multiple filters have been administered. Most previous investigations of filter questions have found that the grouped format collects more affirmative answers than the interleafed format. This result has been taken to mean that respondents in the interleafed format learn to shorten the questionnaire by answering the filter questions negatively. However, this is only one mechanism that could produce the observed differences between the two formats. Acquiescence, the tendency to answer yes to yes/no questions, could also explain the results. We conducted a telephone survey that linked filter question responses to high-quality administrative data to test two hypotheses about the mechanism of the format effect. We find strong support for motivated underreporting and less support for the acquiescence hypothesis. This is the first clear evidence that the grouped format yields more accurate answers to filter questions. However, we also find that the underreporting phenomenon does not always occur. These findings are relevant to all surveys that use multiple filter questions. (Eckman et al., Public Opinion Quarterly, 2014)
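The two question-ordering formats described above can be sketched as item schedules. The sketch below uses hypothetical filter names (not items from the study) purely to show how the same filters and follow-ups are sequenced differently in each format; in a real instrument, each follow-up would only be administered when its filter is answered affirmatively.

```python
# Hypothetical filter items for illustration only.
filters = ["own_car", "own_bike", "own_boat"]

def interleafed(filters):
    """Each follow-up is asked immediately after its own filter."""
    schedule = []
    for f in filters:
        schedule.append(("filter", f))
        schedule.append(("follow_up", f))  # asked only if filter == yes
    return schedule

def grouped(filters):
    """All filters are asked first; follow-ups come only afterward."""
    schedule = [("filter", f) for f in filters]
    schedule += [("follow_up", f) for f in filters]  # only for yes filters
    return schedule

print(interleafed(filters))
print(grouped(filters))
```

In the interleafed schedule, a respondent can learn that a "yes" triggers extra questions, which is the motivated-underreporting mechanism the study tests; in the grouped schedule, answers to the filters cannot shorten the filter block itself.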
Administrative register data are increasingly important in statistics but, like other types of data, may contain measurement errors. To prevent such errors from invalidating analyses of scientific interest, it is essential to estimate the extent of measurement error in administrative data. Currently, however, most approaches to evaluating such errors involve either prohibitively expensive audits or comparison with a survey that is assumed perfect. We introduce the "generalized multitrait-multimethod" (GMTMM) model, which can be seen as a general framework for evaluating the quality of administrative and survey data simultaneously. This framework allows both the survey and the register to contain random and systematic measurement errors. Moreover, it accommodates common features of administrative data such as discreteness, nonlinearity, and nonnormality, improving on similar existing models. The use of the GMTMM model is demonstrated by application to linked survey-register data from the German Federal Employment Agency on income from and duration of employment, and a simulation study evaluates the estimates obtained. (The authors are indebted to Hal Stern and Jörg Drechsler for their comments, as well as Barbara Felderer for her assistance in preparing the data.)
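The core identification idea behind such multimethod designs can be shown with a toy simulation. The sketch below is not the GMTMM model itself (which additionally handles systematic errors, discreteness, and nonlinearity); it only illustrates the classical result that, when two imperfect measures of the same trait have independent random errors, their covariance estimates the true-score variance, so each source's reliability is identified without treating either one as a gold standard. All values are simulated, not from the linked employment data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Simulated "true" income plus independent random measurement errors
# in the survey and the register -- the premise of the multimethod
# setup: neither source is assumed error-free.
true = rng.normal(50_000, 10_000, n)
survey = true + rng.normal(0, 5_000, n)    # random survey error
register = true + rng.normal(0, 3_000, n)  # random register error

# With independent errors, cov(survey, register) estimates var(true),
# so reliability = var(true) / var(observed) for each source.
var_true_hat = np.cov(survey, register)[0, 1]
rel_survey = var_true_hat / survey.var()
rel_register = var_true_hat / register.var()

print(round(rel_survey, 2), round(rel_register, 2))
```

With these simulated error variances, the register comes out as the more reliable source (about 0.92 vs. 0.80), matching the intuition that the less noisy measure has the higher reliability.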