2016
DOI: 10.1007/s10463-016-0552-2

Improving the convergence rate and speed of Fisher-scoring algorithm: ridge and anti-ridge methods in structural equation modeling

Abstract: In structural equation modeling (SEM), parameter estimates are typically computed by the Fisher-scoring algorithm, which often has difficulty in obtaining converged solutions. Even for simulated data with a correctly specified model, nonconverged replications have been repeatedly reported in the literature. In particular, in Monte Carlo studies it has been found that larger factor loadings or smaller error variances in a confirmatory factor model correspond to a higher rate of convergence. However, studies of …
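As a reminder of the update the abstract refers to, here is a generic Fisher-scoring sketch. Logistic regression is used only as a stand-in model so the example stays short and runnable; the SEM case iterates the same update (new = old + information⁻¹ × score) with the score and expected information of the normal-theory likelihood for a structured covariance matrix. Function and variable names are illustrative, not from the article.

```python
import numpy as np

def fisher_scoring_logistic(X, y, n_iter=25, tol=1e-8):
    """Fit logistic-regression coefficients by Fisher scoring (stand-in model)."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        mu = 1.0 / (1.0 + np.exp(-X @ beta))   # fitted probabilities
        score = X.T @ (y - mu)                 # gradient of the log-likelihood
        W = mu * (1.0 - mu)                    # Fisher weights
        info = X.T @ (X * W[:, None])          # expected information matrix
        step = np.linalg.solve(info, score)    # Fisher-scoring direction
        beta = beta + step
        if np.max(np.abs(step)) < tol:         # stop once the step is negligible
            break
    return beta

# Tiny demo with simulated data
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
true_beta = np.array([-0.5, 1.0, -2.0])
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-X @ true_beta)))
print(fisher_scoring_logistic(X, y))           # roughly recovers true_beta
```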

Cited by 20 publications (9 citation statements)
References 24 publications
“…Improved Test Statistics: RGLS and RLS Arruda and Bentler (2017) proposed that the poor performance of GLS might be due to bias in the eigenvalues of 𝑺, specifically, their excess extremity (too large or too small) as compared to the eigenvalues of 𝚺. The condition number (ratio of largest to smallest eigenvalue) of 𝑺 is larger than that of 𝚺 and decreases monotonically with sample size (Yuan & Bentler, 2016). In practice, they used a method of Chi and Lange (2014) to shrink (move toward their median value) the eigenvalues of 𝑺 and used the resulting "regularized" sample covariance matrix, say Σ̂_R, to replace the GLS weight matrix.…”
Section: Classical SEM Test Statistics: ML and GLS
confidence: 99%
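The eigenvalue-shrinkage idea quoted above can be illustrated with a short numerical sketch. This is a minimal illustration, assuming a simple linear pull of each eigenvalue toward the spectrum's median; the actual Chi and Lange (2014) procedure and the RGLS weight matrix of Arruda and Bentler may choose the shrinkage amount differently, and the function name and `alpha` parameter below are hypothetical.

```python
import numpy as np

def shrink_eigenvalues_toward_median(S, alpha=0.5):
    """Pull each eigenvalue of S part of the way toward the median eigenvalue.

    `alpha` (0 = no shrinkage, 1 = every eigenvalue set to the median) is an
    illustrative tuning constant, not the rule from Chi and Lange (2014).
    """
    vals, vecs = np.linalg.eigh(S)              # S is symmetric, so eigh applies
    med = np.median(vals)
    vals_shrunk = (1.0 - alpha) * vals + alpha * med
    # Reassemble a "regularized" covariance matrix from the shrunken spectrum
    return vecs @ np.diag(vals_shrunk) @ vecs.T

# Small-sample demo: with n close to p, the sample eigenvalues are overly extreme
rng = np.random.default_rng(0)
X = rng.standard_normal((30, 10))               # n = 30 cases, p = 10 variables
S = np.cov(X, rowvar=False)
S_reg = shrink_eigenvalues_toward_median(S, alpha=0.5)
print(np.linalg.cond(S), np.linalg.cond(S_reg))  # condition number drops after shrinkage
```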
“…When a sample does not contain a sufficient number of distinct cases, the sample covariance matrix S is near singular (not full rank). Then, the iteration process for computing the NML estimates can be very unstable, and it may take literally hundreds of iterations to reach convergence (Yuan and Bentler, 2017). When S is literally singular, equation (1) is not defined, and other methods for parameter estimation will likely break down as well.…”
Section: Parameter Estimation
confidence: 99%
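To make the near-singularity problem concrete, and to hint at how a ridge adjustment of the kind studied in the cited paper counters it, here is a minimal sketch. It assumes the simple form S_a = S + a·I with an arbitrary illustrative choice of a; the actual tuning rule in Yuan and Bentler's ridge method may differ.

```python
import numpy as np

def ridge_adjust(S, a):
    """Return S + a * I: lifting the diagonal bounds the smallest eigenvalue away
    from zero, so matrices inverted during Fisher scoring stay well conditioned.
    The value of `a` passed below is illustrative only."""
    return S + a * np.eye(S.shape[0])

# With fewer distinct cases than variables, S is rank deficient (singular)
rng = np.random.default_rng(1)
X = rng.standard_normal((8, 10))            # n = 8 cases, p = 10 variables
S = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(S))             # < 10: S is not full rank
print(np.linalg.cond(S))                    # enormous condition number

S_a = ridge_adjust(S, a=0.1 * np.trace(S) / S.shape[0])
print(np.linalg.cond(S_a))                  # moderate after the ridge term
```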
“…Note that, with p observed variables in each study, the total number of parameters at step 1 is kp + q₁ with q₁ = p(p − 1)/2, and at step 2 is kp + q₂ with q₂ being the number of parameters in the structural model. A potential issue with the method is that, when k or p is large, the number of parameters involved is so large that existing programs may not be able to compute the inverse of the matrix in the Fisher-scoring method (Yuan and Bentler, 2016) for iteratively computing the parameter estimates. In contrast, the GLS method (Becker, 1992) for combining the correlation matrices only involves q₁ parameters, and the GLS method for estimating the structural parameters only involves q₂ unknown parameters.…”
Section: Comments on Five Articles of MASEM in This Special Issue
confidence: 99%
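A quick worked count makes the scale of the step-1 and step-2 problems concrete. The numbers below (k, p, and the structural-model count q₂) are arbitrary example values, not figures from the article.

```python
# k studies, each reporting p observed variables
k, p = 10, 9

q1 = p * (p - 1) // 2        # free elements of the pooled correlation matrix: 36
step1 = k * p + q1           # 10 * 9 + 36 = 126 parameters at step 1

q2 = 21                      # hypothetical number of structural-model parameters
step2 = k * p + q2           # 10 * 9 + 21 = 111 parameters at step 2

print(step1, step2)          # 126 111
```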
“…Such a difference will cause a bias from its own population value defined by the three variables (y, x₁, x₂) within the given study. …existing programs may not be able to compute the inverse of the matrix in the Fisher-scoring method (Yuan and Bentler, 2016) for iteratively computing the parameter estimates. In contrast, the GLS method (Becker, 1992) for combining the correlation matrices only involves q₁ parameters, and the GLS method for estimating the structural parameters only involves q₂ unknown parameters.…”
Section: Comments on Five Articles of MASEM in This Special Issue
confidence: 99%