2017
DOI: 10.1177/0146621617707511
An Improved Strategy for Bayesian Estimation of the Reduced Reparameterized Unified Model

Abstract: A Bayesian formulation for a popular conjunctive cognitive diagnosis model, the reduced reparameterized unified model (rRUM), is developed. The new Bayesian formulation of the rRUM employs a latent response data augmentation strategy that yields tractable full conditional distributions. A Gibbs sampling algorithm is described to approximate the posterior distribution of the rRUM parameters. A Monte Carlo study supports accurate parameter recovery and provides evidence that the Gibbs sampler tended to converge …
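The conjunctive rRUM item response function described in the abstract can be sketched numerically. In the standard rRUM parameterization, the probability of a correct response is pi*_j multiplied by a penalty r*_jk for each required attribute the examinee has not mastered. The function and parameter values below are an illustrative sketch, not the paper's implementation:

```python
import numpy as np

def rrum_prob(alpha, q, pi_star, r_star):
    """rRUM item response probability for one item j:
    P(X_j = 1 | alpha) = pi*_j * prod_k r*_jk ** (q_jk * (1 - alpha_k)).

    alpha   : binary attribute profile, shape (K,)
    q       : row j of the Q matrix, shape (K,)
    pi_star : probability of a correct response when all required
              attributes are mastered
    r_star  : penalty terms in (0, 1), one per attribute, shape (K,)
    """
    alpha = np.asarray(alpha)
    q = np.asarray(q)
    r_star = np.asarray(r_star)
    return pi_star * np.prod(r_star ** (q * (1 - alpha)))

# Mastering every required attribute leaves only pi*_j.
p_full = rrum_prob([1, 1], [1, 1], 0.9, [0.5, 0.6])   # -> 0.9
# Missing one required attribute multiplies in its r* penalty.
p_miss = rrum_prob([1, 0], [1, 1], 0.9, [0.5, 0.6])   # -> 0.9 * 0.6 = 0.54
```

The latent response augmentation in the paper makes the full conditionals of these parameters tractable for Gibbs sampling; the sketch above only evaluates the likelihood contribution of a single item.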

Cited by 24 publications (21 citation statements)
References 33 publications
“…Unlike Culpepper and Hudson (2017), the conjunctive condensation rule in Equation 3 is not a function of the Q matrix. Instead, we fix q_{jk} = 1 in Equation 3 to address the confound between r*_{jk} and q_{jk}.…”
Section: Bayesian Estimation of E-rRUM (mentioning)
confidence: 99%
“…Third, in our preliminary investigations, we found that estimating Q is more difficult whenever the associations among the attributes satisfy a more parsimonious structure. Accordingly, there may be some instances where a parsimonious model for attributes provides better fit than an unstructured model (e.g., see Culpepper & Hudson, 2017). We incorporate a higher-order model (de la Torre & Douglas, 2004; Maris, 1999) for attributes to provide a more parsimonious model for the 2^K latent classes and offer a new Gibbs sampling algorithm to estimate model parameters.…”
Section: Introduction (mentioning)
confidence: 99%
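The 2^K latent classes mentioned above arise because each of the K binary attributes can be mastered or not, so the attribute profiles are the vertices of the K-dimensional unit hypercube. A minimal enumeration (K = 3 chosen only for illustration):

```python
from itertools import product

K = 3  # number of attributes (illustrative)
profiles = list(product([0, 1], repeat=K))
# K binary attributes induce 2**K latent classes (attribute profiles).
print(len(profiles))  # prints 8
```

A higher-order model replaces the 2^K - 1 free class probabilities of an unstructured model with a handful of attribute-level parameters, which is the parsimony the quoted passage refers to.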
“…The priors are considered less informative because a large standard deviation (i.e., 20) produces a relatively flat normal distribution, and a conjugate Dirichlet distribution with all parameter values equal (e.g., 2, 2, 2, 2) is approximately a uniform distribution. Using less informative priors is recommended in similar DCM studies such as Chen et al. (2018), Culpepper and Hudson (2018), and Jiang and Carter (2018).…”
Section: Operational Study (mentioning)
confidence: 99%
“…However, the main motivation of CDMs is to identify examinees' latent attribute profiles, and Bayesian methods are often the more natural way to reach that goal. The second most commonly used method is the Markov chain Monte Carlo (MCMC) method (de la Torre and Douglas, 2004; Culpepper, 2015; Culpepper and Hudson, 2018; Zhan et al., 2018, 2019; Jiang and Carter, 2019). Usually, to use the MH algorithm, it is necessary to choose a proposal distribution that can lead to optimal sampling efficiency.…”
Section: Introduction (mentioning)
confidence: 99%
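The proposal-tuning burden mentioned in the quote, which Gibbs sampling avoids by drawing from exact full conditionals, can be seen in even the simplest random-walk Metropolis-Hastings sampler. The toy target below (the Beta(9, 5) posterior of a success probability after a Beta(2, 2) prior and 7 successes in 10 trials) is purely illustrative and unrelated to the paper's model; `step` is the proposal scale the user must choose:

```python
import numpy as np

rng = np.random.default_rng(1)

def log_post(p):
    """Log density of Beta(9, 5), up to an additive constant."""
    if not 0.0 < p < 1.0:
        return -np.inf
    return 8 * np.log(p) + 4 * np.log(1 - p)

def rw_mh(n_iter, step):
    """Random-walk Metropolis-Hastings; `step` must be tuned by hand."""
    p, out = 0.5, []
    for _ in range(n_iter):
        prop = p + rng.normal(0.0, step)
        # Accept with probability min(1, post(prop) / post(p)).
        if np.log(rng.uniform()) < log_post(prop) - log_post(p):
            p = prop
        out.append(p)
    return np.array(out)

samples = rw_mh(20_000, 0.2)
# After burn-in, the chain's mean should approach E[Beta(9, 5)] = 9/14.
```

Too small a `step` gives highly autocorrelated draws; too large a `step` gives few acceptances. A Gibbs sampler with tractable full conditionals, as developed in the paper, has no such tuning parameter.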
“…Culpepper (2015) first introduced Gibbs sampling to the DINA model, and Zhang et al. (2020) applied Pólya-Gamma Gibbs sampling based on auxiliary variables to the DINA model. Culpepper and Hudson (2018) introduced a Bayesian method for the Reduced Reparameterized Unified Model (rRUM; DiBello et al., 1995; Roussos et al., 2007).…”
Section: Introduction (mentioning)
confidence: 99%