2022
DOI: 10.1037/abn0000770
Model fit is a fallible indicator of model quality in quantitative psychopathology research: A reply to Bader and Moshagen.

Abstract: As evidenced by our exchange with Bader and Moshagen (2022), the degree to which model fit indices can and should be used for the purpose of model selection remains a contentious topic. Here, we make three core points. First, we discuss the common misconception about fit statistics' abilities to identify the "best model," arguing that mechanical application of model fit indices contributes to faulty inferences in the field of quantitative psychopathology. We illustrate the consequences of this practice through…

Cited by 10 publications (8 citation statements)
References 62 publications
“…robust support to our findings, while attending to measurement quality of scales, which impacts model fit (Greene et al., 2022). Future research should replicate these findings in father-adolescent dyads.…”
Section: Strengths, Limits, and Future Directions (supporting)
confidence: 80%
“…Lastly, as Greene et al. (2022) [72] reported, model fit and fit measures are highly dependent on the type of data and the factor analysis method. Therefore, the deviation of our item attribution from the original ASAS structure by Sousa et al. (2011) and other previous publications must be interpreted in light of the methods and participants included in the respective studies.…”
Section: Discussion (mentioning)
confidence: 99%
“…Therefore, the deviation of our item attribution from the original ASAS structure by Sousa et al. (2011) and other previous publications must be interpreted in light of the methods and participants included in the respective studies. As Greene et al. (2022) [72] indicated, model-based fit measures should not be seen as ultimate but should instead be interpreted with regard to content and theoretical frameworks. In our analysis, we therefore not only looked at the best model fit but also decided that our variable attribution seems more reasonable in terms of the actual content of the identified factors.…”
Section: Discussion (mentioning)
confidence: 99%
“…Foremost, we encourage p-factor researchers to abandon the unrestricted bifactor model and follow Waller and Meehl [80]: "Efforts to falsify theories by subjecting them to risky tests are the most efficient means of gauging a theory's mettle" (p. 331). Accordingly, researchers should drastically de-emphasize or disregard goodness-of-fit in bifactor modeling, [73,79,112] as they do when evaluating exploratory models. [113] A riskier, more rigorous test of the appropriateness of a bifactor model calls for incorporating plausible constraints, such as Bayesian constraints according to a priori theories.…”
Section: Moving Forward (mentioning)
confidence: 99%
“…[131] Typically associated with relatively high model fit compared with other models, but strong fit does not indicate that the model is an adequate or valid description of the data. [71,73,112] Higher-order…”
Section: Bifactor (mentioning)
confidence: 99%