2023
DOI: 10.1037/met0000554
Troubleshooting Bayesian cognitive models.

Abstract: Using Bayesian methods to apply computational models of cognitive processes, or Bayesian cognitive modeling, is an important new trend in psychological research. The rise of Bayesian cognitive modeling has been accelerated by the introduction of software that efficiently automates the Markov chain Monte Carlo sampling used for Bayesian model fitting—including the popular Stan and PyMC packages, which automate the dynamic Hamiltonian Monte Carlo and No-U-Turn Sampler (HMC/NUTS) algorithms that we spotlight here…

Cited by 16 publications (12 citation statements). References 114 publications.
“…For each of the models, we generated 4 independent chains of 2000 samples (with 1000 burn-in) from the joint posterior distribution. Following recent recommendations 75, we use R̂ as a diagnostic of proper chain mixing, reflecting a reliable parameter estimate. The R̂ statistic reflects the proportion of between-to-within chain variance.…”
Section: Methods (mentioning, confidence: 99%)
“…Following recent recommendations 75, we use R̂ as a diagnostic of proper chain mixing, reflecting a reliable parameter estimate. The R̂ statistic reflects the proportion of between-to-within chain variance.…”
Section: Methods (mentioning, confidence: 99%)
“…For each model, we used four chains with 10,000 iterations each (5,000 as warm-up), yielding a total of 20,000 samples contributing to the posteriors. We checked that Rhats for all parameters were below 1.01, effective sample sizes for all parameters were at least 400, that chains were stationary and well-mixing (using trace plots), that the Bayesian fraction of missing information (BFMI) for each chain was above 0.2, and that (if possible) no divergent transitions occurred (Baribault & Collins, 2023). To minimize the occurrence of divergent transitions, we increased the target average proposal acceptance probability (adapt_delta) to 0.99.…”
Section: Methods (mentioning, confidence: 99%)
“…We ran 5 parallel chains for 12000 iterations each, discarded the first 8000 warm-up samples, and kept the remaining 4000 samples as posterior estimates (20000 samples total). The Gelman-Rubin convergence diagnostic Rhat (Gelman & Rubin, 1992) was used to assess model convergence (Rhats close to 1 indicate convergence; we considered a model successfully converged with Rhats <= 1.01) (Baribault & Collins, 2023).…”
Section: Methods (mentioning, confidence: 99%)