2016
DOI: 10.1080/10705511.2016.1154793
Regularized Structural Equation Modeling

Abstract: A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating easier to understand and simpler models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a struc…
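The lasso and ridge penalties the abstract refers to can be illustrated by their effect on parameter estimates. The sketch below is a generic illustration (not code from the paper, and the estimate values are hypothetical): lasso's soft-thresholding sets small parameters exactly to zero, while ridge only rescales them toward zero.

```python
import numpy as np

def soft_threshold(z, lam):
    """Lasso-style shrinkage: pull estimates toward zero and set
    anything within lam of zero exactly to zero (sparsity)."""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def ridge_shrink(z, lam):
    """Ridge-style shrinkage (orthonormal-design case): rescale
    estimates toward zero, but never exactly to zero."""
    return z / (1.0 + lam)

# Hypothetical unpenalized estimates: one strong effect, two weak ones.
est = np.array([2.0, 0.3, -0.1])
lasso_est = soft_threshold(est, 0.5)  # weak effects become exactly 0
ridge_est = ridge_shrink(est, 0.5)    # all effects kept, all shrunk
```

This difference is why the paper frames lasso-type penalties as a route to "simpler models": zeroed parameters drop out of the model entirely.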

Cited by 169 publications (179 citation statements)
References 55 publications
“…One potential solution for this could lie in the method by which the tuning parameter is selected. We used the BIC applied to the entire sample, as this has been shown to perform well (Jacobucci et al, 2016). However, given that the models are specified in the SEM framework, other fit indices are available.…”
Section: Discussion (mentioning)
confidence: 99%
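The BIC-based selection of the tuning parameter described above can be sketched as follows. This is a self-contained illustration with made-up fit values, not output from any SEM package: fit the model over a grid of penalty values, compute BIC from each solution's -2 log-likelihood and its count of free (nonzero) parameters, and keep the penalty value with the smallest BIC.

```python
import numpy as np

def bic(neg2_loglik, n_free, n):
    """BIC = -2 log L + (free parameters) * log(sample size)."""
    return neg2_loglik + n_free * np.log(n)

# Hypothetical fits of the same SEM over a grid of penalty values:
# larger penalties worsen raw fit but zero out more parameters.
lambdas = [0.00, 0.05, 0.10, 0.15]
neg2ll  = [500.0, 502.0, 506.0, 520.0]
n_free  = [12, 10, 8, 8]
n = 300

bics = [bic(f, k, n) for f, k in zip(neg2ll, n_free)]
best_lambda = lambdas[int(np.argmin(bics))]
```

The trade-off is visible in the grid: BIC falls as long as the complexity saving (fewer free parameters times log n) outweighs the loss in raw fit, then rises again.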
“…Recently, the lasso has been incorporated into SEM, creating a class of techniques called regularized structural equation modeling (RegSEM; Jacobucci, Grimm, & McArdle, 2016). RegSEM uses the same logic as that of Equation 7, only instead of adding a penalty term to the residual sum of squares as is done in regression, the penalty is added to the maximum likelihood fit function.…”
Section: Exploratory Mediation Analysis (mentioning)
confidence: 99%
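The penalized fit function this excerpt describes can be written down directly. A minimal numerical sketch, assuming the standard ML discrepancy for covariance structure models (the penalized parameter values below are hypothetical):

```python
import numpy as np

def f_ml(S, Sigma):
    """Standard ML discrepancy between the sample covariance S and
    the model-implied covariance Sigma(theta)."""
    p = S.shape[0]
    return (np.log(np.linalg.det(Sigma))
            + np.trace(S @ np.linalg.inv(Sigma))
            - np.log(np.linalg.det(S)) - p)

def f_lasso(S, Sigma, penalized, lam):
    """RegSEM-style objective: ML fit function plus a lasso penalty
    on the selected parameters (e.g., cross-loadings or paths)."""
    return f_ml(S, Sigma) + lam * np.sum(np.abs(penalized))

# At a perfect fit (Sigma == S) the ML term is 0, leaving only the penalty.
S = np.array([[1.0, 0.3],
              [0.3, 1.0]])
penalized = np.array([0.2, -0.4])  # hypothetical penalized path values
obj = f_lasso(S, S, penalized, lam=0.5)  # 0 + 0.5 * (0.2 + 0.4)
```

This mirrors the quoted point: the penalty attaches to the maximum likelihood fit function itself, not to a residual sum of squares as in regression.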
“…We pruned the path analysis using goodness of fit measures (Bayesian information criterion (BIC), chi-square test of model fit, root mean square error of approximation (RMSEA), Tucker-Lewis index (TLI) and comparative fit index (CFI)) and individual p-values until all paths in the final model were significant, and the goodness of fit measures indicated that the model fit the observed data well. We checked the pruning using lasso in the R package RegSEM [26] and found that the pruned models were consistent. We report regression coefficients with their associated standard errors and p-values.…”
Section: Methods (mentioning)
confidence: 99%
“…This rationale (i.e., increased stability at the cost of some bias) is the same used for regularized regression methods such as ridge regression or lasso regression (Tibshirani, 1996), which are used in a frequentist framework but also have a Bayesian interpretation (Park & Casella, 2008). Regularization procedures have been extended to structural equation models with the goal of creating simpler models and minimizing overfitting (Jacobucci, Grimm, & McArdle, 2016; Yuan, Wu, & Bentler, 2011). The stabilizing effect of reasonable priors should also be beneficial for computational problems arising from sparse categorical data because the priors can be used to avoid improper solutions and extreme estimates.…”
Section: Estimation Approaches for IFA Models (mentioning)
confidence: 99%
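The Bayesian interpretation mentioned in this excerpt (Park & Casella, 2008 treat the lasso case) can be demonstrated numerically for ridge: the estimate minimizing the penalized least-squares criterion coincides with the MAP estimate under independent zero-mean Gaussian priors. A small sketch with simulated data; all names and values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
beta = np.array([1.0, 0.0, -0.5])        # true coefficients (simulation)
y = X @ beta + rng.normal(scale=0.5, size=100)

lam = 2.0  # penalty weight; in the Bayesian reading, noise var / prior var
# Ridge / MAP-under-Gaussian-prior estimate (identical closed form):
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
# Unpenalized ML (OLS) estimate for comparison:
ols = np.linalg.solve(X.T @ X, X.T @ y)
# The penalty shrinks the estimate toward zero, the prior mean,
# which is the "increased stability at the cost of some bias" trade-off.
```

The same correspondence motivates the quoted claim that reasonable priors stabilize estimation: the prior acts exactly like a penalty term on the likelihood.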
“…Regularization procedures have recently been extended to latent variable models with the goals of creating simpler models and minimizing overfitting (Jacobucci, Grimm, & McArdle, 2016; Yuan, Wu, & Bentler, 2011). In this article, I demonstrate that Bayesian estimation with reasonable prior information improves parameter estimate stability, overall variability in estimates, and power for IFA models with sparse, categorical indicators.…”
(mentioning)
confidence: 99%