Educational large-scale studies typically adopt highly standardized settings to collect cognitive data from large samples of respondents. Increasing costs and dwindling response rates in these studies necessitate exploring alternative assessment strategies such as unsupervised web-based testing. Before such assessment modes can be implemented on a broad scale, their impact on cognitive measurements needs to be quantified. Therefore, an experimental study with N = 17,473 university students from the German National Educational Panel Study was conducted. Respondents were randomly assigned to a supervised paper-based, a supervised computerized, or an unsupervised web-based mode to work on a test of scientific literacy. Mode-specific effects on selection bias, measurement bias, and predictive bias were examined. The results showed a higher response rate for web-based testing than for the supervised modes, without introducing a pronounced mode-specific selection bias. Analyses of differential test functioning showed systematically larger test scores in paper-based testing, particularly among respondents of low to medium ability. Predictive bias for web-based testing was observed for one of four criteria of study-related success factors. Overall, the results indicate that unsupervised web-based testing is not strictly equivalent to the other assessment modes. However, the bias introduced by web-based testing was generally small. Thus, unsupervised web-based assessments seem to be a feasible option for cognitive large-scale studies in higher education.
Abstract. This paper examines differences between real survey data and data falsified by interviewers. Previous studies have shown only small differences between real and falsified data, which implies that falsifying interviewers are able to (re)produce realistic frequency distributions. The question this paper aims to answer is whether they are also able to produce multivariate results consistent with the assumptions of established social science approaches. As an example of a realistic theory-driven data analysis, real and falsified data are compared with respect to the identified determinants of political participation. I use an experimental data set in which the data were partly collected in real interviews and partly produced by interviewers instructed to falsify, that is, to fill in the questionnaire based on little information about the respondent. The questionnaire measures twelve political activities, from which I calculate an index of political participation. The models differ between the real and the falsified data: the explained variances are higher in the regression models of the falsified data. Some variables are significant in both data sets, while others are significant only in the real or only in the falsified data. These differences can be explained by the theoretical assumptions.