2019
DOI: 10.3389/fpsyg.2019.01175

The Bayesian Expectation-Maximization-Maximization for the 3PLM

Abstract: The current study proposes an alternative feasible Bayesian algorithm for the three-parameter logistic model (3PLM) from a mixture-modeling perspective, namely, the Bayesian Expectation-Maximization-Maximization (Bayesian EMM, or BEMM). As a new maximum likelihood estimation (MLE) alternative to the marginal MLE EM (MMLE/EM) for the 3PLM, the EMM can explore the likelihood function much better, but it might still suffer from the unidentifiability problem indicated by occasional extremely large item parameter estimates…
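For reference, the 3PLM the abstract refers to, together with the mixture-model reading behind the EMM (both are standard forms in this literature; the notation here is ours, not necessarily the paper's):

\[ P_j(\theta_i) = c_j + (1 - c_j)\,\frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}} \]

Equivalently, a correct response arises either because examinee i "knows" the item, with 2PL probability \( P^{*}_j(\theta_i) = 1 / (1 + \exp\{-a_j(\theta_i - b_j)\}) \), or because a non-knowing examinee guesses correctly with probability c_j:

\[ P_j(\theta_i) = P^{*}_j(\theta_i) + \bigl(1 - P^{*}_j(\theta_i)\bigr)\, c_j \]

It is this two-component reading that lets the BEMM split the M-step in two, as the first citation statement below describes.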

Cited by 5 publications (7 citation statements: 0 supporting, 7 mentioning, 0 contrasting). References 37 publications. Citing publications span 2020–2023.
“…In contrast to the BEMM algorithm for the 3PLM (Guo & Zheng, 2019), there is an obvious difference between the two algorithms, even though both belong to the mixture-modeling framework, share a similar E-step, and share the same name. In the 3PLM, the first M-step calculates the guessing parameter via a closed-form solution, and the second M-step then maximizes the difficulty and discrimination parameters.…”
Section: Discussion (mentioning)
confidence: 99%
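The two M-steps described above can be sketched as follows. This is a hypothetical single-item Python illustration under assumed E-step outputs, not the authors' implementation: the guessing update is a closed-form ratio of expected counts, while the discrimination and difficulty parameters are maximized numerically.

import numpy as np
from scipy.optimize import minimize

def emm_m_steps(nodes, n_q, r_q, w_q, a0, b0):
    """One EMM cycle's two M-steps for a single 3PLM item (sketch).

    nodes : quadrature points for theta
    n_q   : expected number of examinees at each node (E-step output)
    r_q   : expected number of correct responses at each node
    w_q   : expected number of correct responses produced by the
            2PL "p-process" (i.e., by knowing the item)
    """
    # M-step 1: closed-form update of the guessing parameter.
    # Correct responses not produced by the p-process are guesses, and
    # every examinee who did not "know" (n_q - w_q) had a chance to guess.
    c_new = np.sum(r_q - w_q) / np.sum(n_q - w_q)

    # M-step 2: maximize a weighted 2PL likelihood in (a, b).
    def neg_loglik(par):
        a, b = par
        p = 1.0 / (1.0 + np.exp(-a * (nodes - b)))
        eps = 1e-10  # guard against log(0)
        return -np.sum(w_q * np.log(p + eps)
                       + (n_q - w_q) * np.log(1.0 - p + eps))

    res = minimize(neg_loglik, x0=[a0, b0], method="L-BFGS-B",
                   bounds=[(0.05, 5.0), (-6.0, 6.0)])
    a_new, b_new = res.x
    return c_new, a_new, b_new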
“…Next, a latent indicator variable W_ij is defined to indicate whether examinee i knows the answer to item j using only the p-process (Béguin & Glas, 2001; Culpepper, 2015; Guo & Zheng, 2019). Specifically, let…”
Section: The Bayesian Modal Estimation for the 1PL-AG Model (mentioning)
confidence: 99%
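The quoted passage is truncated at the definition, but the standard data augmentation in this literature runs as follows (a sketch in our notation, not necessarily the quoted paper's exact equations):

\[ W_{ij} \sim \mathrm{Bernoulli}\bigl(P^{*}_j(\theta_i)\bigr), \qquad P(X_{ij} = 1 \mid W_{ij}) = \begin{cases} 1, & W_{ij} = 1 \ (\text{examinee } i \text{ knows item } j), \\ g_{ij}, & W_{ij} = 0 \ (\text{a correct response is a guess}), \end{cases} \]

where g_{ij} = c_j is an item-level constant in the 3PLM, while in the 1PL-AG model the guessing probability depends on the examinee's ability. Marginalizing over W_ij recovers \( P(X_{ij} = 1) = P^{*}_j(\theta_i) + (1 - P^{*}_j(\theta_i))\, g_{ij} \).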
“…Compared to MCMC methods, BME is a fast calibration method, but several issues need to be explored to evaluate the utility of BME and its implementation in the mirt package. For example, Guo and Zheng (2019) found that the item parameter estimates yielded by BME for the 3PLM were unstable when the priors on the guessing parameters were changed. Moreover, in version 1.30 of the mirt package, the standard errors of the item parameters cannot be calculated, owing to program errors in computing the Hessian matrix when a Beta distribution prior ("expbeta" in mirt) is imposed on the guessing or slipping parameters.…”
Section: Original Research (mentioning)
confidence: 99%
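To make the role of the guessing-parameter prior concrete, here is a hypothetical sketch (ours, not mirt internals) of how a Beta prior enters a Bayesian modal update of c: the log-prior is added to the expected log-likelihood, and under a Beta(α, β) prior the modal update even stays in closed form. Shifting α and β is the kind of prior change the quoted passage reports destabilizing 3PLM estimates.

import numpy as np
from scipy.stats import beta

def c_bme_update(r_guess, n_noknow, alpha, beta_par):
    """MAP update of a guessing parameter c under a Beta(alpha, beta) prior.

    r_guess  : expected number of correct-by-guessing responses (E-step)
    n_noknow : expected number of "don't know" attempts
    The prior adds (alpha-1)*log(c) + (beta-1)*log(1-c) to the expected
    log-likelihood, so the posterior mode remains closed form.
    """
    return (r_guess + alpha - 1.0) / (n_noknow + alpha + beta_par - 2.0)

# Illustrative check: the same mode found on a grid of the log-posterior.
r, n = 12.0, 80.0
grid = np.linspace(1e-4, 1 - 1e-4, 100_000)
logpost = (r * np.log(grid) + (n - r) * np.log(1 - grid)
           + beta.logpdf(grid, 2.0, 17.0))
assert abs(grid[np.argmax(logpost)] - c_bme_update(r, n, 2.0, 17.0)) < 1e-3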
“…The R package IRTBEMM (Guo et al., 2020) is designed to implement the family of Bayesian Expectation-Maximization-Maximization (BEMM) algorithms for estimating unidimensional dichotomous item response models with guessing or slipping parameters. The BEMM family includes (a) the BEMM algorithm (Guo & Zheng, 2019) for the three-parameter logistic (3PL) model and the one-parameter logistic guessing (1PL-G) model; (b) the Bayesian Expectation-Maximization-Maximization-Maximization (BE3M) algorithm (Zhang et al., 2018) for the four-parameter logistic (4PL) model and the one-parameter logistic ability-based guessing (1PL-AG) model; and (c) their maximum likelihood estimation versions developed by Zheng et al. (2018). Both Bayesian modal estimates and maximum likelihood estimates are available.…”
(mentioning)
confidence: 99%
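For orientation, the 4PL model handled by the BE3M branch extends the 3PLM with an upper asymptote d_j, so that 1 − d_j is the slipping probability (this is the standard form, in our notation):

\[ P_j(\theta_i) = c_j + (d_j - c_j)\,\frac{1}{1 + \exp\{-a_j(\theta_i - b_j)\}}, \qquad 0 \le c_j < d_j \le 1. \]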