2021
DOI: 10.3389/fpsyg.2021.615162

A Comparison of Penalized Maximum Likelihood Estimation and Markov Chain Monte Carlo Techniques for Estimating Confirmatory Factor Analysis Models With Small Sample Sizes

Abstract: With small to modest sample sizes and complex models, maximum likelihood (ML) estimation of confirmatory factor analysis (CFA) models can show serious estimation problems such as non-convergence or parameter estimates outside the admissible parameter space. In this article, we distinguish different Bayesian estimators that can be used to stabilize the parameter estimates of a CFA: the mode of the joint posterior distribution that is obtained from penalized maximum likelihood (PML) estimation, and the mean (EAP…

Help me understand this report
View preprint versions

Search citation statements

Order By: Relevance

Paper Sections

Select...
1
1
1
1

Citation Types

0
7
0

Year Published

2021
2021
2023
2023

Publication Types

Select...
7
1

Relationship

1
7

Authors

Journals

Cited by 17 publications (7 citation statements). References 132 publications (186 reference statements).
“…Because the optimization procedure is iterative, it is possible that no (local or global) solution that minimizes a given discrepancy function is found at all. This is referred to as “nonconvergence.” Nonconvergence is observed most often in settings where the sample size is (very) small, even if the model is correctly specified (Anderson & Gerbing, 1984; Boomsma, 1985; Yuan & Bentler, 1997), and remains a vulnerability even if some solutions are available (De Jonckere & Rosseel, 2022; Lüdtke et al., 2021). If a solution is found, some parameters may be out-of-range with estimates outside the boundary of the parameter space.…”
Section: The SEM Framework (mentioning)
confidence: 99%
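The two failure modes this excerpt describes are easy to reproduce in a few lines of code. The following Python/scipy sketch (a hypothetical illustration, not code from the cited papers) fits a one-factor CFA to a small simulated sample by minimizing the ML discrepancy function and then checks the optimizer's convergence flag and whether any residual-variance estimate left the admissible (nonnegative) region; the sample size, loadings, and starting values are assumptions made only for the example.

```python
import numpy as np
from scipy.optimize import minimize

def ml_discrepancy(theta, S, p):
    """ML fit function F = log|Sigma(theta)| + tr(S Sigma(theta)^-1) - log|S| - p
    for a one-factor model with loadings lam and residual variances psi."""
    lam, psi = theta[:p], theta[p:]
    sigma = np.outer(lam, lam) + np.diag(psi)      # model-implied covariance matrix
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:                                  # not positive definite: return a large value
        return 1e10
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

rng = np.random.default_rng(1)
n, p = 30, 6                                       # (very) small sample, six indicators
factor = rng.standard_normal((n, 1))
data = factor @ np.full((1, p), 0.7) + rng.standard_normal((n, p)) * 0.7
S = np.cov(data, rowvar=False)

start = np.full(2 * p, 0.5)                        # loadings first, then residual variances
res = minimize(ml_discrepancy, start, args=(S, p), method="BFGS")

print("converged:", res.success)                   # nonconvergence check
psi_hat = res.x[p:]
print("inadmissible (negative) residual variance:", bool((psi_hat < 0).any()))
```

Across replications of such small samples, some runs fail to converge and others converge to a negative residual variance (a Heywood case), which is exactly the behavior the excerpt refers to.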
“…Thus, an ML estimate of zero can be interpreted as "the data suggested that persons did not differ much". As the standard error is zero when a variance is fixed to zero, we suggest that the standard error from Mplus' default procedure should be used for inferential purposes; see [17] for a similar recommendation in the context of penalized estimation. Alternatively, one can adopt a resampling technique, such as the jackknife procedure.…”
Section: Discussion and Recommendations (mentioning)
confidence: 99%
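As a small illustration of the resampling alternative mentioned at the end of this excerpt, the sketch below (my own toy example, not taken from the cited work) computes a jackknife standard error by re-estimating after deleting one observation at a time; a simple sample variance stands in for the model parameter that would be re-estimated from the full model in practice.

```python
import numpy as np

def jackknife_se(data, estimator):
    """Leave-one-out jackknife standard error for a scalar estimator."""
    n = len(data)
    loo = np.array([estimator(np.delete(data, i)) for i in range(n)])  # leave-one-out estimates
    return np.sqrt((n - 1) / n * np.sum((loo - loo.mean()) ** 2))

rng = np.random.default_rng(0)
y = rng.normal(size=25)
print("variance estimate:", np.var(y, ddof=1))
print("jackknife SE:", jackknife_se(y, lambda x: np.var(x, ddof=1)))
```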
“…A prominent strategy for addressing negatively estimated variances that is often used in research practice is to set the variance equal to zero and fit the model again to obtain the estimates of the remaining model parameters; for examples of this practice in psychological research, see, e.g., [12][13][14]. Another strategy is to use a nonnegativity constraint and thus constrained estimation (e.g., [15,16]) or penalized/Bayesian estimation (e.g., [17]) to force the variance estimate to be equal to or greater than zero. All these strategies have in common that they lead to variance estimates that are "admissible" (i.e., nonnegative values for variances).…”
Section: Introduction (mentioning)
confidence: 99%
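A minimal sketch of the strategies listed in this excerpt, using a deliberately simple toy model (a normal variance sigma2 plus a known error variance c) in which the unconstrained ML estimate of sigma2 turns out negative. The inverse-gamma-type penalty and its hyperparameters are assumptions chosen only for illustration, not values from the cited papers.

```python
import numpy as np
from scipy.optimize import minimize, minimize_scalar

n, c = 20, 1.0   # small sample; c is a known error variance
s2 = 0.8         # observed sample variance, which happens to fall below c

def negloglik(sigma2):
    """Negative log-likelihood (up to a constant) for y ~ N(0, sigma2 + c)."""
    total = sigma2 + c
    return 0.5 * n * (np.log(total) + s2 / total)

# The problem: without a constraint the ML estimate is s2 - c = -0.2 (inadmissible).
# ("Unconstrained" here only requires sigma2 + c > 0 so the likelihood is defined.)
unconstrained = minimize_scalar(negloglik, bounds=(-c + 1e-6, 5.0), method="bounded").x

# Nonnegativity constraint: the estimate is truncated at zero. (In this
# one-parameter toy, fixing the variance to zero and refitting gives the same value.)
constrained = minimize(lambda t: negloglik(t[0]), x0=[0.5],
                       method="L-BFGS-B", bounds=[(0.0, None)]).x[0]

# Penalized/Bayesian estimation: adding the negative log of an inverse-gamma(a, b)
# penalty keeps the estimate strictly positive; a and b are hypothetical values.
def penalized(sigma2):
    a, b = 2.0, 0.1
    return negloglik(sigma2) + (a + 1) * np.log(sigma2) + b / sigma2

pml = minimize(lambda t: penalized(t[0]), x0=[0.5],
               method="L-BFGS-B", bounds=[(1e-8, None)]).x[0]

print(f"unconstrained: {unconstrained:.3f}  constrained: {constrained:.3f}  penalized: {pml:.3f}")
```

In this toy, the constrained solution sits exactly on the boundary (zero), whereas the penalized solution stays strictly positive, which mirrors the distinction the excerpt draws between constrained and penalized/Bayesian estimation.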
“…The inequality constraint par > 0.01 was used where par is a loading or variance parameter. Previous research has shown that constrained (ML) estimation is preferable to unconstrained (ML) estimation in small samples because it solves convergence issues and substantially reduces variability in estimates [25,26].…”
Section: Simulation Studies (mentioning)
confidence: 99%
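For completeness, a short sketch of the bound described here: the same toy one-factor setup as in the earlier sketch, but estimated with a box constraint that keeps every loading and variance above 0.01 (an illustration under assumed settings, not the cited authors' implementation).

```python
import numpy as np
from scipy.optimize import minimize

def ml_discrepancy(theta, S, p):
    """ML fit function for a one-factor model (loadings lam, residual variances psi)."""
    lam, psi = theta[:p], theta[p:]
    sigma = np.outer(lam, lam) + np.diag(psi)
    sign, logdet = np.linalg.slogdet(sigma)
    if sign <= 0:
        return 1e10
    return logdet + np.trace(S @ np.linalg.inv(sigma)) - np.linalg.slogdet(S)[1] - p

rng = np.random.default_rng(7)
n, p = 30, 6
data = rng.standard_normal((n, 1)) @ np.full((1, p), 0.7) + rng.standard_normal((n, p)) * 0.7
S = np.cov(data, rowvar=False)
start = np.full(2 * p, 0.5)

# Constrained ML: par > 0.01 for every loading and variance, so the model-implied
# covariance matrix stays positive definite throughout the optimization.
res = minimize(ml_discrepancy, start, args=(S, p), method="L-BFGS-B",
               bounds=[(0.01, None)] * (2 * p))
print("converged:", res.success)
print("all estimates admissible:", bool((res.x >= 0.01).all()))
```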