Still Nonignorable Consequences of (Partially) Ignoring Missing Item Responses in Large-scale Assessment

Preprint, 2020
DOI: 10.31219/osf.io/hmy45

Abstract: In recent literature, alternative models for handling missing item responses in large-scale assessments have been proposed. In those approaches, it is argued, based on simulations and arguments from test theory (Rose, 2013), that missing item responses should never be scored as incorrect but rather treated as ignorable (e.g., Pohl et al., 2014). The present contribution shows that these arguments have limited validity and illustrates the consequences in a country comparison in the PIRLS 2011 study. A …

Cited by 10 publications (24 citation statements). References 38 publications (61 reference statements).
“…In our opinion, the possibility to influence students' test-taking behavior poses severe threats to the validity and fairness of country comparisons. Furthermore, in our research with LSA data, we found that the conditional independence assumptions of item responses and response indicators in the SA+O model are strongly violated, resulting in a worse model fit of the SA+O model (see Robitzsch, 2020). There is empirical evidence that students who do not know the answer to an item have a high probability of omitting this item even after controlling for latent variables.…”
Section: The Role of Test-Taking Behavior in the Scaling Model (mentioning)
confidence: 77%
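The conditional independence assumption at issue can be stated explicitly. As a hedged sketch in generic notation (latent ability \theta_p and response propensity \xi_p; the exact SA+O parameterization is not given in this excerpt), the model assumes that item responses X_pi and response indicators R_pi factorize given the latent variables:

P(X_{pi} = x, R_{pi} = r \mid \theta_p, \xi_p) = P(X_{pi} = x \mid \theta_p) \, P(R_{pi} = r \mid \xi_p)

The reported violation means that R_pi still depends on X_pi after conditioning on (\theta_p, \xi_p); in other words, the omission itself remains informative about whether the student would have answered correctly.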
“…In the literature, it is frequently argued that missing item responses should never be scored as incorrect [3,7,11,27]. However, we think that the arguments against the incorrect scoring are flawed, and simulation studies cannot show the inadequacy of the UW model (see [19][20][21]).…”
Section: Scoring Missing Item Responses as Wrong (mentioning)
confidence: 99%
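For concreteness, scoring missing item responses as wrong (presumably what the UW label abbreviates; the excerpt does not expand it) amounts to the recoding

\tilde{X}_{pi} = R_{pi} X_{pi}

so that an omitted item (R_{pi} = 0) is scored 0, exactly like an observed incorrect response, and the scaling model is then fit to the recoded data \tilde{X}_{pi}.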
“…In this model, the probability of responding to an item depends on the latent response propensity ξ_p and the item response X_pi itself (see [18,19,30,56-58]). Model MM1 is defined by assuming a common δ_i parameter for all items.…”
Section: Mislevy-Wu Model for Nonignorable Item Responses (mentioning)
confidence: 99%
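A plausible formalization of this dependence, sketched here under assumed notation (the logistic link \Psi and the item missingness parameters \beta_i are ours; the sign conventions and link function may differ in the cited sources), is

P(R_{pi} = 1 \mid \xi_p, X_{pi}) = \Psi(\xi_p - \beta_i + \delta_i X_{pi})

Here \delta_i governs nonignorability: \delta_i = 0 recovers a model in which omissions are ignorable given \xi_p, while \delta_i > 0 means that students who know the answer to an item are more likely to respond to it. Model MM1 then corresponds to the constraint \delta_i = \delta for all items i.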