2020
DOI: 10.3758/s13428-020-01480-7
Web-based and mixed-mode cognitive large-scale assessments in higher education: An evaluation of selection bias, measurement bias, and prediction bias

Abstract: Educational large-scale studies typically adopt highly standardized settings to collect cognitive data on large samples of respondents. Increasing costs alongside dwindling response rates in these studies necessitate exploring alternative assessment strategies such as unsupervised web-based testing. Before respective assessment modes can be implemented on a broad scale, their impact on cognitive measurements needs to be quantified. Therefore, an experimental study on N = 17,473 university students from the Ger…

Cited by 15 publications (15 citation statements); references 60 publications.
“…Furthermore, 7.36% of test‐takers were nonnative German speakers (64.7% female). For in‐depth information on the sample and the sampling process, see Zinn, Steinhauer, and Aßmann (2017) or Zinn, Landrock, and Gnambs (2020).…”
Section: Methods
confidence: 99%
“…The two presented setting subsamples were therefore further split into a 2×2 factorial design (N proctored_earlier = 318, N proctored_later = 306, N unproctored_earlier = 2,532, and N unproctored_later = 2,374). While bias due to participation rates may appear possible (e.g., because certain subgroups are more likely to opt out of either the proctored or unproctored setting), prior research showed that it is unlikely that the unproctored setting results in a substantially biased sample compared to the proctored setting for this specific study (Zinn et al, 2021). For this reason, it appears reasonable to assume that both subgroups resemble each other in composition and that differences in rapid-guessing behavior are induced by setting and position effects rather than outside criteria.…”
Section: Methods
confidence: 96%
“…Furthermore, the study assumes that differences in rapid-guessing behavior are mediated by setting and test position. While prior research (Zinn et al, 2021) and the applied sampling process (random selection of universities) support this notion, future research should consider how rapid guessing could be experimentally induced and more tightly controlled, for example, through instruction or incentivization. Another limitation lies in the treatment of R RG and RT RG .…”
Section: Limitations
confidence: 99%
“…While mixed-mode assessments (i.e., the combination of a variety of survey modes) are becoming increasingly popular and can already be considered common practice (e.g., Dumont et al, 2019; Hübner et al, 2017; von Keyserlingk et al, 2020), the context of study participation may have influenced responses and, thus, the results obtained. In line with this reasoning, recent literature shows that unsupervised web-based study participation is not strictly equivalent to other assessment modes, although biases introduced by web-based testing were generally small (Zinn et al, 2021).…”
confidence: 82%