2018
DOI: 10.1037/met0000152

Permutation randomization methods for testing measurement equivalence and detecting differential item functioning in multiple-group confirmatory factor analysis.

Abstract: In multigroup factor analysis, different levels of measurement invariance are accepted as tenable when researchers observe a nonsignificant (Δ)χ2 test after imposing certain equality constraints across groups. Large samples yield high power to detect negligible misspecifications, so many researchers prefer alternative fit indices (AFIs). Fixed cutoffs have been proposed for evaluating the effect of invariance constraints on change in AFIs (e.g., Chen, 2007; Cheung & Rensvold, 2002; Meade, Johnson, & Braddy, 20…
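The permutation logic the abstract describes can be sketched briefly. The code below is an illustration only, not the authors' implementation (their method is available as permuteMeasEq in the R package semTools): under the null hypothesis of measurement equivalence, group labels are exchangeable, so repeatedly shuffling them and recomputing the change in fit yields a reference distribution for the observed (Δ)χ2 or ΔAFI. The delta_fit function is a toy stand-in for refitting constrained versus unconstrained multigroup CFA models, so the example runs on its own.

```python
# A minimal sketch of the permutation logic, NOT the authors' implementation
# (their method is available as permuteMeasEq in the R package semTools).
# delta_fit is a toy stand-in for the change in fit between constrained and
# unconstrained multigroup CFA models, so the example is self-contained.
import numpy as np

rng = np.random.default_rng(2018)

def delta_fit(data: np.ndarray, groups: np.ndarray) -> float:
    """Toy misfit measure standing in for (Delta)chi2 or a Delta-AFI:
    the summed absolute difference between group covariance matrices."""
    g0, g1 = data[groups == 0], data[groups == 1]
    return float(np.abs(np.cov(g0, rowvar=False) - np.cov(g1, rowvar=False)).sum())

def permutation_p_value(data, groups, stat, n_perm=1000):
    """Under the null of measurement equivalence, group labels are
    exchangeable, so shuffling them generates a reference distribution
    for the observed statistic."""
    observed = stat(data, groups)
    null = np.array([stat(data, rng.permutation(groups)) for _ in range(n_perm)])
    p = (1 + np.sum(null >= observed)) / (n_perm + 1)  # one-tailed p value
    return observed, p

# Demo on simulated data with no real group difference (p should be large).
n = 200
data = np.vstack([rng.normal(size=(n, 4)), rng.normal(size=(n, 4))])
groups = np.repeat([0, 1], n)
obs, p = permutation_p_value(data, groups, delta_fit, n_perm=500)
print(f"observed delta-fit = {obs:.3f}, permutation p = {p:.3f}")
```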

Cited by 55 publications (73 citation statements). References 59 publications.
“…It is also worth noting that the standards used in this study to evaluate equivalence in multi-group comparisons rely on conventional standards and approaches (Byrne, 2004). More recently, permutation tests have been proposed as a superior method for testing metric and scalar invariance, because permutations control Type I error rates better than the conventional approaches used in this study (Jorgensen et al., 2018; Kite et al., 2018). Combined with effect sizes for metric and scalar equivalence tests (dMACS; Nye and Drasgow, 2011), these methodological approaches may lead future analyses to more universal results.…”
Section: Discussion (mentioning)
confidence: 99%
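The dMACS effect size mentioned in this snippet can be given a small numerical sketch: following Nye and Drasgow's (2011) definition, it is the root-mean-squared difference between the reference and focal groups' expected item responses, weighted by the focal group's latent density and scaled by the pooled item standard deviation. All parameter values below are invented for illustration.

```python
# A numerical sketch of the dMACS effect size of Nye and Drasgow (2011) for a
# single item: the RMS difference between reference- and focal-group expected
# item responses, weighted by the focal group's latent density f(eta) and
# scaled by the pooled item SD. All parameter values are invented.
import numpy as np

def d_macs(nu_r, lam_r, nu_f, lam_f,
           eta_mean_f=0.0, eta_sd_f=1.0, sd_pooled=1.0, n_grid=2001):
    eta = np.linspace(eta_mean_f - 6 * eta_sd_f, eta_mean_f + 6 * eta_sd_f, n_grid)
    # Normal density of the focal group's latent trait, computed directly.
    w = np.exp(-0.5 * ((eta - eta_mean_f) / eta_sd_f) ** 2) / (eta_sd_f * np.sqrt(2 * np.pi))
    diff_sq = ((nu_r + lam_r * eta) - (nu_f + lam_f * eta)) ** 2
    integral = np.sum(diff_sq * w) * (eta[1] - eta[0])  # Riemann-sum integral
    return np.sqrt(integral) / sd_pooled

# Hypothetical intercepts (nu) and loadings (lambda) with mild noninvariance.
print(f"dMACS = {d_macs(nu_r=1.0, lam_r=0.8, nu_f=0.9, lam_f=0.7):.3f}")
```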
“…Another limitation is that we assume that researchers have a correctly specified configural invariance model. Testing configural invariance can be challenging because its χ2 statistic can be rejected for either or both of two reasons: (1) the factor structure is not identical across groups, or (2) the model is misspecified in ways not relevant to measurement invariance (Jorgensen, Kite, Chen, & Short, 2018). It would be worthwhile to examine whether model misspecification could affect the relative performance of the strategies.…”
Section: Limitations and Future Directions (mentioning)
confidence: 99%
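For context, the nested-model logic this snippet relies on can be written out explicitly; this is standard likelihood-ratio theory rather than notation taken from the cited paper:

```latex
% Standard nested-model likelihood-ratio logic (general ML theory, not
% notation from the cited paper). Both results condition on a correct
% configural model, which is why a rejected configural chi-square is
% ambiguous between the two causes listed in the snippet above.
\[
  T_{\text{config}} = (N - 1)\, F_{\mathrm{ML}}\bigl(\hat\theta_{\text{config}}\bigr)
  \;\sim\; \chi^{2}_{df_{\text{config}}},
  \qquad
  \Delta\chi^{2} = T_{\text{constrained}} - T_{\text{config}}
  \;\sim\; \chi^{2}_{\,df_{\text{constrained}} - df_{\text{config}}}.
\]
```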
“…Ideally, accessible tools will eventually follow to help researchers precisely calibrate their models to reasonable fit-index cutoffs. Although new analytic techniques are already available to assist in this calibration (see Jorgensen et al., 2018), they likely require a degree of sophistication beyond what the typical everyday user is prepared to indulge. Sexual scientists wishing to revise their analytic practice to take model reliability into account when evaluating model fit might therefore consider calculating average or median standardized loadings, and using McNeish et al.'s Table 1 to derive rough cutoffs that may be more reasonable, until more accessible and comprehensive tools emerge.…”
Section: Precariousness of Hu and Bentler's (1999) Cutoffs for Model… (mentioning)
confidence: 99%
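The stopgap this snippet suggests reduces to a few lines; a minimal sketch with invented loading values:

```python
# A minimal sketch of the suggested stopgap, with invented loading values:
# summarize measurement quality by the mean or median standardized loading,
# then look that value up against reliability-adjusted cutoff tables such as
# the McNeish et al. Table 1 referenced above.
import numpy as np

std_loadings = np.array([0.82, 0.74, 0.69, 0.77, 0.61])  # hypothetical CFA loadings
print(f"mean standardized loading:   {np.mean(std_loadings):.2f}")
print(f"median standardized loading: {np.median(std_loadings):.2f}")
```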
“…Complicating matters further, the subfield of basic science in psychological measurement modeling is a living, breathing area of scholarship. New and improved techniques are continuously developed and disseminated to provide researchers with increasingly sophisticated and rigorous tools for extracting psychological entities inside a person's head and representing them in a valid numeric form that a researcher can use (e.g., Jorgensen, Kite, Chen, & Short, 2018). Interdisciplinary and multidisciplinary fields that do not occasionally check in to survey these new methodological developments are therefore at risk of being on the outside looking in for best practices in psychological measurement.…”
(mentioning)
confidence: 99%