Preprint | 2017 | DOI: 10.31234/osf.io/yxhfm
A Simple Method for Comparing Complex Models: Bayesian Model Comparison for Hierarchical Multinomial Processing Tree Models using Warp-III Bridge Sampling

Abstract: Multinomial processing trees (MPTs) are a popular class of cognitive models for categorical data. Typically, researchers compare several MPTs, each equipped with many parameters, especially when the models are implemented in a hierarchical framework. A Bayesian solution is to compute posterior model probabilities and Bayes factors. Both quantities, however, rely on the marginal likelihood, a high-dimensional integral that cannot be evaluated analytically. In this case study, we show how Warp-III bridge sampling…
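For reference, the two quantities named in the abstract have their standard definitions (textbook definitions, not taken from the preprint itself): the marginal likelihood of a model integrates the likelihood over the prior, and the Bayes factor is the ratio of two marginal likelihoods,

\[
p(y \mid \mathcal{M}_k) = \int p(y \mid \theta_k, \mathcal{M}_k)\, p(\theta_k \mid \mathcal{M}_k)\, \mathrm{d}\theta_k,
\qquad
\mathrm{BF}_{12} = \frac{p(y \mid \mathcal{M}_1)}{p(y \mid \mathcal{M}_2)}.
\]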

Cited by 23 publications (33 citation statements) | References 65 publications
“…The other part of it was used to obtain the Savage-Dickey density ratio (Bayes factor) for model comparison.

    # Y = X %*% B (fitted values)
    SSE <- t(y - Y) %*% (y - Y)              # e'e = (y - XB)'(y - XB)
    ssqr <- SSE / v                          # residual variance estimate
    h <- ssqrinv <- ssqr^-1                  # error precision
    varE <- diag(rep(h^-1, 5))               # 5 x 5 diagonal error variance matrix
    ### checking the OLS estimates of Beta
    ols <- lm(...)                           # formula truncated in the source
    ## deduced that prior means are (0, 27, 13.5, 1.4, 10.0)
    # B of prior: (5 x 1) vector
    Bpri <- as.matrix(Bpri, nrow = 5, ncol = 1)
    # prior variance-covariance matrix (diagonal)
    Vpri <- diag(c(29.30^2, 50.4^2, 900.60^2, 600.0^2, 50.0^2))
    Vinvpri <- solve(Vpri)
    # h ~ G(ssqrinvpri, vpri), where ssqrinvpri = s^-2
    # with sigma = 5000, ssqrinvpri = h = 1/sigma^2
    ssqrinvpri <- 1 / 5000^2
    ssqrpri <- 1 / ssqrinvpri
    vpri <- 5.46                             # 1% of N; noninformative prior
    # Initialize and run the Gibbs loop
    current.beta <- as.matrix(rbind(4, 15, 40, 50, 60))
    current.h <- 1
    sampled.beta0[1] <- current.beta[1, ]
    sampled.beta1[1] <- current.beta[2, ]
    sampled.beta2[1] <- current.beta[3, ]
    sampled.beta3[1] <- current.beta[4, ]
    sampled.beta4[1] <- current.beta[5, ]
    # histogram of the sampled chain with a fitted line
    hist(final.beta0, prob = TRUE, main = "normal curve over histogram")
    abline(lsfit(1:10000, sampled.beta0, intercept = FALSE), col = 3)
    # install.packages("coda"); codamenu(); help(package = "coda")
    library("coda")
    # summary of the Gibbs samples, trace plot, density curve,
    # autocorrelation, and effective sample size for final.beta0
    b0.mcmc <- mcmc(final.beta0)
    summary(b0.mcmc)
    plot(b0.mcmc, col = "blue")
    title("b0", xlab = "mcmc", ylab = "b0.mcmc")
    autocorr.plot(b0.mcmc, col = "blue")
    effectiveSize(b0.mcmc)
    # ...…”
Section: Discussion (mentioning; confidence: 99%)
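For a point null hypothesis delta = 0 nested within an encompassing model, the Savage-Dickey density ratio mentioned in this quote reduces the Bayes factor to the ratio of the posterior and prior densities evaluated at delta = 0. A minimal R sketch under assumed names (posterior.delta holds MCMC draws of delta; the Cauchy prior scale is illustrative, not the prior used in the citing paper):

    # Savage-Dickey density ratio: BF01 = p(delta = 0 | y) / p(delta = 0)
    d <- density(posterior.delta)                  # kernel estimate of the posterior
    post.at.zero  <- approx(d$x, d$y, xout = 0)$y  # posterior density at delta = 0
    prior.at.zero <- dcauchy(0, location = 0, scale = 0.707)  # illustrative prior
    BF01 <- post.at.zero / prior.at.zero           # BF01 > 1 favours the null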
“…This is an effective Bayesian inference, which normally requires choosing the best model for the specific situation under investigation [1]. Usually, in the Bayesian paradigm, models are compared using the Bayes factor [2,3,4]. Bayes factors are notoriously difficult to compute, and the Bayes factor is only defined when the marginal density of y (the dependent variable) under each model is proper.…”
Section: Introduction (mentioning; confidence: 99%)
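A standard illustration of why propriety matters (not taken from the citing paper): if a model is equipped with an improper prior \(p(\theta) \propto c\), the constant \(c > 0\) is arbitrary, so the marginal density

\[
p(y) = \int p(y \mid \theta)\, c\, \mathrm{d}\theta
\]

is determined only up to \(c\), and any Bayes factor involving that model inherits this arbitrary scale.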
“…However, this method becomes inefficient when the posterior distribution is skewed. To remedy this problem, Warp-III aims to maximize the overlap by fixing the proposal distribution to a standard multivariate normal distribution and then "warping" (i.e., manipulating) the posterior so that it matches not only the first two, but also the third moment of the proposal distribution (for details, see Meng & Schilling, 2002, and Gronau, Wagenmakers, Heck, & Matzke, 2019). Figure 1 illustrates the warping procedure for the univariate case using hypothetical posterior samples.…”
Section: Warp-III Bridge Sampling (mentioning; confidence: 99%)
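A minimal R sketch of the warping step described above, under assumed names (theta is a hypothetical n x p matrix of posterior draws): the draws are mean-centered, standardized with the Cholesky factor of their covariance, and symmetrized with a random sign, so the warped samples match the mean (zero), covariance (identity), and skewness (zero) of the standard multivariate normal proposal.

    # theta: hypothetical n x p matrix of posterior draws
    mu <- colMeans(theta)                                # match the first moment
    L  <- t(chol(cov(theta)))                            # lower-triangular Cholesky factor
    s  <- sample(c(-1, 1), nrow(theta), replace = TRUE)  # random sign for symmetrization
    # warp: xi = s * L^{-1} (theta - mu); zero mean, identity covariance,
    # and symmetry (hence zero skewness) by construction
    xi <- s * t(solve(L, t(sweep(theta, 2, mu, "-"))))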
“…Note that this method has not yet been implemented in a package and thus requires a customized implementation of the Monte Carlo estimate for the correction factor. As a fourth method, the marginal likelihoods in the equation were approximated directly using Warp-III bridge sampling (Gronau, Wagenmakers, Heck, & Matzke, in press; Meng & Schilling, 2002). This method is available via the R package bridgesampling (Gronau, Singmann, & Wagenmakers), which only requires the fitted Stan objects of the nested and full model to approximate the Bayes factor.…”
Section: Computing Bayes Factors for Regression Parameters (mentioning; confidence: 99%)
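A sketch of the workflow the citing authors describe, using the bridgesampling API (the names fit_full and fit_nested are placeholders for fitted rstan model objects):

    library(bridgesampling)
    # approximate each marginal likelihood with Warp-III bridge sampling
    ml_full   <- bridge_sampler(fit_full,   method = "warp3")
    ml_nested <- bridge_sampler(fit_nested, method = "warp3")
    bf(ml_full, ml_nested)  # Bayes factor of the full over the nested model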