“…The proposed MM-MH-RM is a natural extension of the mixture-modelling approach to the multidimensional scenario. It differs from the original MH-RM algorithm (Cai, 2008) mainly in that it introduces an extra latent variable to separate the probabilities of guessing, slipping, and ability-based responding, just as the BE3M (Zheng et al., 2021) modifies Bayesian EM (Mislevy, 1986) and the Gibbs-within-Gibbs sampler (Culpepper, 2015) revises the straightforward MCMC approach (Patz & Junker, 1999). In contrast to the BE3M of Zheng et al. (2021), the proposed MM-MH-RM is designed specifically for multidimensional tests: it adopts an MH sampler to approximate the high-dimensional integration rather than fixed one-dimensional Gaussian quadrature, and it updates the item parameters via the RM method rather than Newton-Raphson iterations. Furthermore, unlike time-consuming fully Bayesian MCMC methods, the MM-MH-RM samples only the ability parameters and uses the gradient, information, and priors of the item parameters to obtain MAP estimates, which ensures stable accuracy and fast convergence.…”
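The MH-within-RM idea described above can be illustrated with a highly simplified sketch: abilities are sampled by a Metropolis–Hastings step rather than integrated out by quadrature, and item parameters are then nudged by a Robbins–Monro stochastic-approximation step toward their MAP values under Gaussian priors. This toy example is not the authors' MM-MH-RM: it fits a plain two-dimensional 2PL model (no mixture of guessing/slipping/ability-based responding), replaces the information-matrix-weighted update with a simple gradient step, and all names, sample sizes, and the prior shrinkage constant `0.1` are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N examinees, J items, D latent dimensions (2PL M-IRT).
N, J, D = 200, 10, 2
true_a = rng.uniform(0.8, 1.5, size=(J, D))      # discriminations
true_b = rng.normal(0.0, 1.0, size=J)            # intercepts
theta_true = rng.normal(0.0, 1.0, size=(N, D))
p_true = 1.0 / (1.0 + np.exp(-(theta_true @ true_a.T + true_b)))
Y = (rng.uniform(size=(N, J)) < p_true).astype(float)

def logpost_theta(theta, a, b, y):
    """Complete-data log-posterior of one examinee's ability vector."""
    logits = a @ theta + b                        # shape (J,)
    ll = np.sum(y * logits - np.log1p(np.exp(logits)))
    return ll - 0.5 * theta @ theta               # standard-normal prior on theta

def mh_step(theta, a, b, Y, scale=0.5):
    """MH sampling of abilities, replacing high-dimensional quadrature."""
    for i in range(N):
        prop = theta[i] + scale * rng.normal(size=D)
        log_ratio = (logpost_theta(prop, a, b, Y[i])
                     - logpost_theta(theta[i], a, b, Y[i]))
        if np.log(rng.uniform()) < log_ratio:
            theta[i] = prop
    return theta

# RM step: decaying-gain stochastic approximation toward MAP item estimates.
a = np.ones((J, D))
b = np.zeros(J)
theta = np.zeros((N, D))
for k in range(1, 301):
    theta = mh_step(theta, a, b, Y)
    p = 1.0 / (1.0 + np.exp(-(theta @ a.T + b)))
    resid = Y - p                                 # (N, J) score residuals
    grad_a = resid.T @ theta - 0.1 * a            # Gaussian prior shrinkage (MAP)
    grad_b = resid.sum(axis=0) - 0.1 * b
    gain = 1.0 / k                                # Robbins-Monro gain sequence
    a += gain * grad_a / N
    b += gain * grad_b / N
```

Because only the ability draws are stochastic while the item parameters follow a deterministic gain-damped recursion, the trajectory of `a` and `b` settles down as the gain shrinks, which is the mechanism behind the stability and speed claimed for the RM step.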