2022
DOI: 10.31234/osf.io/62j89
Preprint
Making model judgments ROC(K)-solid: Tailored cutoffs for fit indices through simulation and ROC analysis in structural equation modeling

Abstract: Researchers commonly evaluate the fit of latent-variable models by comparing canonical fit indices (χ2, CFI, RMSEA, SRMR) against fixed cutoffs derived from simulation studies. However, the performance of fit indices varies greatly across empirical settings, and fit indices are susceptible to extraneous influences other than model misspecification. This threatens the validity of model judgments using fixed cutoffs. As a solution, methodologists have proposed four principal approaches to tailor cutoffs and the …

Cited by 5 publications (9 citation statements)
References 33 publications
“…Hence, while tailor-made cutoffs promise to prevent misinterpretations and overly optimistic evaluations compared to fixed cutoffs that were derived for rather specific data conditions and model specifications, larger samples foster more extreme cutoffs and more likely result in model rejection, even if the actual misfit is negligible. In small samples, though, the tailored ezCutoffs (Schmalbach et al., 2019) seem to be too lenient, as they tend to support candidate models whose fit would have been deemed inappropriate by common cutoffs (e.g., Hu & Bentler, 1999) or by tailored cutoffs that take Type II error rates into account (e.g., Groskurth et al., 2022; McNeish & Wolf, 2021). This shows that the ezCutoffs approach lacks power in small-sample scenarios.…”
Section: Discussion
confidence: 99%
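The quantile-based simulation logic behind tailored cutoffs such as ezCutoffs can be sketched as follows. This is an illustrative toy in Python: the made-up normal distribution of CFI values stands in for the package's actual step of simulating data from the fitted model and refitting it, so the numbers are assumptions, not output of a real SEM fit.

```python
import numpy as np

def simulate_cfi_under_true_model(n_obs, n_reps=1000, seed=1):
    """Stand-in for simulating data from the hypothesized model and
    refitting it n_reps times; here we just draw plausible CFI values
    whose sampling variability shrinks as n_obs grows.
    (Illustrative numbers, not output of a real SEM fit.)"""
    rng = np.random.default_rng(seed)
    cfi = rng.normal(loc=0.99, scale=0.5 / np.sqrt(n_obs), size=n_reps)
    return np.clip(cfi, 0.0, 1.0)

def tailored_cfi_cutoff(n_obs, alpha=0.05):
    # Cutoff = alpha-quantile of the CFI distribution under the true model.
    # This controls the Type I error rate at alpha but, as the quoted
    # discussion notes, says nothing about power (Type II error).
    sims = simulate_cfi_under_true_model(n_obs)
    return float(np.quantile(sims, alpha))

print(tailored_cfi_cutoff(100))   # small sample: lenient (lower) cutoff
print(tailored_cfi_cutoff(2000))  # large sample: stricter cutoff near 0.99
```

The sketch makes the quoted criticism concrete: the cutoff is computed only from the true-model distribution, so nothing guarantees that misspecified models fall on the rejection side of it, and in small samples the wide sampling distribution pushes the cutoff so low that poor-fitting models pass.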
“…This way, a higher sample size is beneficial because the statistical power to detect meaningful deviations increases, while discrepancies below that threshold do not lead to rejecting the model. By evaluating the performance of the model fit indices and taking Type II error rates into account, Groskurth et al. (2022) promise to assess model fit largely independently of the actual sample size, similar to the approach of Moshagen and Erdfelder (2016). The latter may appeal to researchers, as they probably prefer choosing a critical value for close fit (e.g., ) rather than using the ROC curve to derive a cutoff that balances Type I and Type II error rates for a given setting, where an alternative model specification has to be provided as well.…”
Section: Discussion
confidence: 99%
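The ROC-based approach described in the quote above can be sketched in the same spirit. The two RMSEA distributions below are invented placeholders for values one would obtain by repeatedly fitting the candidate model and a deliberately misspecified alternative to simulated data; only the cutoff-selection logic is the point.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical simulated RMSEA values: under the correctly specified model
# (H0) and under a substantively misspecified alternative (H1). In practice
# these would come from repeatedly fitting both models to simulated data.
rmsea_h0 = rng.normal(0.02, 0.015, 5000).clip(min=0.0)  # correct model
rmsea_h1 = rng.normal(0.09, 0.020, 5000).clip(min=0.0)  # misspecified model

def roc_cutoff(h0, h1):
    """Sweep candidate cutoffs ('reject' when RMSEA > cutoff) and return
    the one maximizing Youden's J = sensitivity + specificity - 1, i.e.
    the point on the ROC curve that balances Type I and Type II errors."""
    candidates = np.linspace(0.0, 0.15, 301)
    type1 = np.array([(h0 > c).mean() for c in candidates])   # false rejection
    type2 = np.array([(h1 <= c).mean() for c in candidates])  # false acceptance
    j = (1.0 - type1) + (1.0 - type2) - 1.0
    return float(candidates[int(np.argmax(j))])

cutoff = roc_cutoff(rmsea_h0, rmsea_h1)
print(round(cutoff, 3))  # falls between the two distributions
```

The sketch also shows why, as the quote notes, the ROC approach demands an alternative model specification: without the H1 distribution there is no Type II error rate to balance against, and the method reduces to the quantile-only approach above.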