2017
DOI: 10.1080/00273171.2017.1342203
Bayesian Estimation for Item Factor Analysis Models with Sparse Categorical Indicators

Abstract: Psychometric models for item-level data are broadly useful in psychology. A recurring issue for estimating item factor analysis (IFA) models is low item endorsement (item sparseness), due to limited sample sizes or extreme items such as rare symptoms or behaviors. In this paper, I demonstrate that under conditions characterized by sparseness, currently available estimation methods, including maximum likelihood (ML), are likely to fail to converge or to lead to extreme estimates and low empirical power. Bayesian estimation incorporating…
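To make the estimation setting concrete, here is a minimal sketch, not the paper's code or simulation design, of a Bayesian two-parameter item factor analysis model for binary items, written in PyMC as an assumed choice of software. The weakly informative normal priors on the item loadings and intercepts illustrate the kind of prior information that can stabilize estimation when endorsement is sparse; all variable names, prior scales, and the simulated data are purely illustrative.

```python
# Minimal sketch: Bayesian IFA / 2PL model for sparse binary items
# (illustrative, not the paper's implementation). Assumes PyMC >= 5.
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
n_persons, n_items = 200, 10
# Simulated responses with low endorsement rates, purely for illustration
y = rng.binomial(1, 0.05, size=(n_persons, n_items))

with pm.Model() as sparse_ifa:
    theta = pm.Normal("theta", mu=0.0, sigma=1.0, shape=n_persons)        # latent trait
    loading = pm.Normal("loading", mu=1.0, sigma=1.0, shape=n_items)      # weakly informative prior
    intercept = pm.Normal("intercept", mu=0.0, sigma=2.0, shape=n_items)  # weakly informative prior
    # Item response probability on the logit scale
    logit_p = intercept[None, :] + loading[None, :] * theta[:, None]
    pm.Bernoulli("y_obs", logit_p=logit_p, observed=y)
    idata = pm.sample(1000, tune=1000, target_accept=0.9)
```

With very diffuse priors (or with ML), the loadings and intercepts of rarely endorsed items can drift toward extreme values or fail to converge; tightening the prior scales pulls the estimates toward a plausible range, which is the general mechanism the abstract refers to.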

Cited by 12 publications (3 citation statements); references 58 publications.

Citation statements (ordered by relevance):
“…Previous literature supports the finding that Bayesian methods yield higher convergence than maximum likelihood estimation in complex SEMs (e.g., Lee & Song, 2004), however, Bayesian methods with default uninformative Mplus priors can also result in efficiency losses relative to maximum likelihood estimates when the sample size is too small for the complexity of the model (Smid et al, 2020). The literature on Bayesian SEM with categorical indicators suggests that Bayesian methods can solve convergence issues encountered by weighted least squares in small samples (Liang & Yang, 2014) and by maximum likelihood estimation with sparse data (Bainter, 2017). Still, our literature review yielded no studies that examined whether categorical indicators can be treated as continuous without biasing parameter estimates in Bayesian SEM.…”
Section: Latent Interactions With Categorical Indicators (mentioning)
confidence: 99%
“…Related problems can occur with the estimation of person parameters, where it may be the case that a unique maximum of the likelihood does not exist (Samejima, 1973; Yen et al, 1991), or the maximum of the likelihood is infinite, such as when a subject has a response pattern of all 0s or 1s. Bayesian procedures can be gainfully employed in such circumstances (Levy & Mislevy, 2016; Lord, 1986; Mislevy, 1986; see also Bainter, 2017), as highlighted by Lord (1986, p. 161): Use of Bayesian priors, even diffuse priors, has several practical advantages that are widely appreciated: 1. Ability estimates (θ̂) on the θ scale are automatically restricted to a reasonable range.…”
Section: Brief Review Of Bayes’ Theorem (mentioning)
confidence: 99%
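The failure mode Lord describes, and why a prior repairs it, can be written out directly. The following is a generic illustration using a two-parameter logistic item response function; it is not an equation taken from the cited sources.

```latex
% For a respondent who answers all J items positively, the likelihood
\[
  L(\theta) = \prod_{j=1}^{J} P_j(\theta), \qquad
  P_j(\theta) = \frac{1}{1 + e^{-a_j(\theta - b_j)}},
\]
% is strictly increasing in \theta, so the ML estimate \hat{\theta}_{ML}
% diverges to +\infty. With a standard normal prior on \theta, the log
% posterior is
\[
  \log p(\theta \mid \mathbf{y})
    = \sum_{j=1}^{J} \log P_j(\theta) - \frac{\theta^{2}}{2} + \text{const},
\]
% and the quadratic penalty dominates for large |\theta|, so the posterior
% mode is finite, i.e., restricted to the reasonable range Lord describes.
```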
“…This is not to say that this perspective requires using a weakly informative prior. Researchers may adopt a moderately or even quite strongly informative prior, and still view it as augmenting the information in the likelihood (Bainter, 2017; Kruschke et al, 2012; Muthén & Asparouhov, 2012). A prior that is seen as augmenting the information in the data may be based on theoretical constraints (Levy & Crawford, 2009; Martin & McDonald, 1975), past data (de Leeuw & Klugkist, 2012), expert beliefs (Abrams et al, 1994; van de Schoot et al, 2018; Zondervan-Zwijnenburg et al, 2017), or any combination thereof.…”
Section: Brief Review Of Bayes’ Theorem (mentioning)
confidence: 99%
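As a standard textbook illustration of a prior augmenting the information in the likelihood, not one drawn from the cited works, the Beta-Binomial model shows the prior acting as pseudo-observations added to the observed data:

```latex
% Endorsement probability \pi with a Beta(a, b) prior and s endorsements
% observed out of n responses:
\[
  \pi \sim \mathrm{Beta}(a, b), \qquad
  s \mid \pi \sim \mathrm{Binomial}(n, \pi)
  \;\Longrightarrow\;
  \pi \mid s \sim \mathrm{Beta}(a + s,\; b + n - s).
\]
% The prior behaves like a + b additional observations (a of them endorsements);
% weakly and strongly informative priors differ only in how much such prior
% information is combined with the observed data.
```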