2019
DOI: 10.1787/254738dd-en

Invariance analyses in large-scale studies

Abstract: OECD Working Papers should not be reported as representing the official views of the OECD or of its member countries. The opinions expressed and arguments employed herein are those of the author(s). Working Papers describe preliminary results or research in progress by the author(s) and are published to stimulate discussion on a broad range of issues on which the OECD works. Comments on Working Papers are welcome, and may be sent to the Directorate for Education and Skills, OECD,

Cited by 17 publications (13 citation statements)
References 58 publications (74 reference statements)
“…Rens van de Schoot suggested that because of the dependence on priors, practitioners should conduct a sensitivity analysis before drawing substantive conclusions, i.e. estimate models with different priors and verify the robustness of the resulting claims (Lek & van de Schoot, 2019). In general, there was no consensus on how to rank models based on different priors (and thus, select the "best" priors and models): Jean-Paul Fox highlighted that criteria such as posterior predictive p values (PPP) or deviance information criteria (DIC) should not be used to compare models with the same number of parameters.…”
Section: Bayesian Approximate Invariance Methods
confidence: 99%
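The sensitivity analysis recommended in this citation statement can be illustrated with a minimal sketch. This is a conjugate normal-normal toy model, not any model from the cited workshop; the prior values are invented for illustration. The point is to re-estimate under deliberately different priors and check that the substantive claim (here, "the mean is positive") survives the prior choice.

```python
import numpy as np

# Prior-sensitivity sketch (conjugate normal-normal toy model; all
# priors and data below are illustrative assumptions).

rng = np.random.default_rng(0)
data = rng.normal(loc=0.5, scale=1.0, size=200)  # simulated observations

sigma2 = 1.0                 # known data variance (simplifying assumption)
n, ybar = len(data), data.mean()

def posterior_mean(prior_mean, prior_var):
    """Posterior mean for a normal likelihood with known variance."""
    precision = n / sigma2 + 1.0 / prior_var
    return (n * ybar / sigma2 + prior_mean / prior_var) / precision

# Vague, moderate, and two deliberately opposed informative priors
priors = [(0.0, 100.0), (0.0, 1.0), (-1.0, 0.5), (1.0, 0.5)]
for m0, v0 in priors:
    print(f"prior N({m0}, {v0}): posterior mean = {posterior_mean(m0, v0):.3f}")
```

If the posterior means (and the conclusions drawn from them) stay close across these priors, the claim is robust; if not, the prior is driving the result.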
“…In response to some of these shortcomings, Jean-Paul Fox presented an alternative approach to assess whether the data support full invariance or only approximate invariance of measurements, which he illustrated in the IRT case (Fox, 2019). The approach, which was recently presented in Fox, Mulder, and Sinharay (2017), is based on the intuition that the marginal model obtained by integrating out the random parameters from a one-parameter IRT model is simply a fixed-effect model with a particular structure for the covariance of residuals.…”
Section: Bayesian Approximate Invariance Methods
confidence: 99%
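The marginal-model intuition in this citation statement can be checked numerically. The sketch below uses a linear random-intercept model as a simplified stand-in for the one-parameter IRT case, with assumed illustrative variances: integrating out a random effect theta_g ~ N(0, tau2) from y_gj = theta_g + e_gj leaves a fixed-effect model whose residuals within the same group share an extra covariance tau2.

```python
import numpy as np

# Simulation check of the marginal covariance structure (linear stand-in
# for the IRT case; tau2 and sigma2 are assumed illustrative values).
# Marginally: Var(y_gj) = tau2 + sigma2, Cov(y_g1, y_g2) = tau2.

rng = np.random.default_rng(1)
tau2, sigma2 = 0.4, 1.0
n_groups, n_obs = 50_000, 2  # two observations per group -> 2x2 covariance

theta = rng.normal(0.0, np.sqrt(tau2), size=(n_groups, 1))  # random effect
y = theta + rng.normal(0.0, np.sqrt(sigma2), size=(n_groups, n_obs))

cov = np.cov(y, rowvar=False)
print(cov)  # diagonal near tau2 + sigma2 = 1.4, off-diagonal near tau2 = 0.4
```

The off-diagonal entry is the "additional correlation" that a group effect induces; its presence or absence is what the marginal approach interrogates.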
“…The general idea is that an item-group effect can be detected as an additional correlation when the assumption of measurement invariance is violated. This innovative Bayesian method for measurement invariance testing has also been discussed in van de Vijver et al. (2019) for binary data, in which a comparison is made with the Mantel-Haenszel test.…”
Section: Introduction
confidence: 99%
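The Mantel-Haenszel comparison mentioned in this citation statement can be sketched as follows. The 2x2 tables and strata below are hypothetical; in a real DIF analysis each stratum would be a group of examinees matched on total score.

```python
# Mantel-Haenszel common odds ratio sketch (hypothetical tables;
# illustrative only, not the cited paper's analysis).

def mh_odds_ratio(tables):
    """Mantel-Haenszel common odds ratio across K stratified 2x2 tables.

    Each table is ((a, b), (c, d)): rows = reference/focal group,
    columns = correct/incorrect responses.
    """
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical score strata with equal odds of success in both groups
strata = [((20, 10), (40, 20)), ((30, 30), (10, 10))]
print(mh_odds_ratio(strata))  # -> 1.0: no evidence of DIF in these tables
```

An odds ratio near 1 across strata indicates no differential item functioning; a ratio far from 1 flags the item for the focal group.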
“…In practice, it is often assumed that item parameters are equal across groups, which is denoted as invariance. The invariance concept has been very prominent in psychology and the social sciences in general [1,2]. For example, in international large-scale assessment studies in education like the Programme for International Student Assessment (PISA), the necessity of invariance is strongly emphasized [3].…”
Section: Introduction
confidence: 99%