Preprint, 2017. DOI: 10.31222/osf.io/v94h6

bridgesampling: An R Package for Estimating Normalizing Constants

Abstract: Statistical procedures such as Bayes factor model selection and Bayesian model averaging require the computation of normalizing constants (e.g., marginal likelihoods). These normalizing constants are notoriously difficult to obtain, as they usually involve high-dimensional integrals that cannot be solved analytically. Here we introduce an R package that uses bridge sampling (Meng and Wong 1996; Meng and Schilling 2002) to estimate normalizing constants in a generic and easy-to-use fashion. For models implemented…
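As a rough sketch of the generic interface the abstract alludes to, the following toy example estimates a log marginal likelihood from a matrix of posterior draws and a user-supplied unnormalized log posterior. The model, data, and object names are illustrative assumptions, not taken from the paper; only the bridgesampling calls (bridge_sampler, error_measures) are the package's own.

```r
library(bridgesampling)

## Hypothetical conjugate normal model: y ~ N(theta, 1), theta ~ N(0, 10^2)
set.seed(1)
y <- rnorm(50, mean = 0.8, sd = 1)
data <- list(y = y)

## Unnormalized log posterior evaluated at one (named) parameter vector
log_posterior <- function(pars, data) {
  sum(dnorm(data$y, mean = pars["theta"], sd = 1, log = TRUE)) +
    dnorm(pars["theta"], mean = 0, sd = 10, log = TRUE)
}

## Posterior draws; here drawn analytically because the model is conjugate,
## in practice they would come from an MCMC sampler
n <- length(y)
post_var  <- 1 / (n + 1 / 100)
post_mean <- post_var * sum(y)
samples <- matrix(rnorm(4000, post_mean, sqrt(post_var)),
                  ncol = 1, dimnames = list(NULL, "theta"))

## Bridge sampling estimate of the log marginal likelihood
bridge <- bridge_sampler(samples = samples,
                         log_posterior = log_posterior,
                         data = data,
                         lb = c(theta = -Inf), ub = c(theta = Inf))
print(bridge)
error_measures(bridge)   # approximate error of the estimate
```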

Cited by 108 publications (102 citation statements); references 39 publications. Representative citation statements follow.
“…We fit our data to an extended MWC-type model including an additional free parameter (C), representing negative binding cooperativity between subunits (Figure 2-Figure supplement 2). Not surprisingly this model improved the fit to our data as assessed by the Bayes factor, which represents the marginal likelihood of one model over another to explain our observations (Wagenmakers, 2007; Gronau et al., 2017). We also tested the cooperative model using approximate leave-one-out cross validation, which assesses the ability of a model to predict new or out-of-sample data using in-sample fits.…”
Section: Discussion
confidence: 89%
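For reference, the Bayes factor invoked here is the standard ratio of the two models' marginal likelihoods, i.e. of exactly the normalizing constants the package estimates (this definition is general, not specific to the cited paper):

$$ \mathrm{BF}_{10} \;=\; \frac{p(\text{data} \mid \mathcal{M}_1)}{p(\text{data} \mid \mathcal{M}_0)} \;=\; \frac{\int p(\text{data} \mid \theta_1, \mathcal{M}_1)\, p(\theta_1 \mid \mathcal{M}_1)\, d\theta_1}{\int p(\text{data} \mid \theta_0, \mathcal{M}_0)\, p(\theta_0 \mid \mathcal{M}_0)\, d\theta_0} $$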
“…Where applicable, the posterior probabilities of each parameter are reported as the median and the 95% equal-tailed interval. Bayes factors were calculated using bridgesampling (Gronau et al., 2017), and Leave-One-Out (LOO) cross-validation was performed using the loo package (Vehtari et al., 2017).…”
Section: Surface Expression Assays
confidence: 99%
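A minimal sketch of that kind of workflow, assuming two hypothetical Stan fits fit_full and fit_null that each store pointwise log-likelihoods in a generated quantity named log_lik (the object and parameter names are assumptions, not taken from the cited paper):

```r
library(bridgesampling)
library(loo)

## Bayes factor from two fitted Stan models (hypothetical fits)
ml_full <- bridge_sampler(fit_full)   # log marginal likelihood, full model
ml_null <- bridge_sampler(fit_null)   # log marginal likelihood, null model
bf(ml_full, ml_null)                  # Bayes factor in favour of the full model

## Approximate leave-one-out cross-validation via the loo package,
## assuming a "log_lik" generated quantity in each Stan program
loo_full <- loo(extract_log_lik(fit_full, parameter_name = "log_lik"))
loo_null <- loo(extract_log_lik(fit_null, parameter_name = "log_lik"))
loo_compare(loo_full, loo_null)
```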
“…In this specific case, the constrained EVSDT model was represented by the null hypothesis H0 stating that the group-level ratio σV/σI can take a small range of values, between .99 and 1.01, and an encompassing alternative hypothesis HA that imposed no such constraint. In typical settings, the use of Bayes Factors requires the computation of marginal likelihoods for (at least) two models, which can be quite challenging (but see Gronau et al., 2017). But in this specific case, in which the hypotheses considered consist of nested ranges of admissible parameter values (specifically, the range of σV/σI), Bayes Factors can be easily computed.…”
Section: Results
confidence: 99%
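The shortcut referred to here is the usual encompassing-prior ratio: the Bayes factor for the nested range hypothesis is the posterior mass inside the range (under the unconstrained model) divided by the prior mass inside the same range. A hedged sketch, assuming hypothetical vectors ratio_post and ratio_prior holding posterior and prior draws of σV/σI under the encompassing model:

```r
## Proportion of draws falling in the constrained range [0.99, 1.01]
in_range <- function(x) mean(x > 0.99 & x < 1.01)

## BF_0A = Pr(range | data, H_A) / Pr(range | H_A)
bf_0A <- in_range(ratio_post) / in_range(ratio_prior)
bf_0A
```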
“…As a fourth method, the marginal likelihoods in the equation were approximated directly using warp-III bridge sampling (Gronau, Wagenmakers, Heck, & Matzke, in press; Meng & Schilling, 2002). This method is available via the R package bridgesampling (Gronau, Singmann, & Wagenmakers, 2017), which only requires the fitted Stan objects of the nested and full model to approximate the Bayes factor. The R code to replicate all analyses is available in the supplementary material at the Open Science Framework (https://osf.io/5hpuc/).…”
Section: Computing Bayes Factors For Regression Parameters
confidence: 99%
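A minimal sketch of the workflow this statement describes, assuming two hypothetical rstan fits fit_nested and fit_full (the object names are assumptions; method = "warp3" is the bridgesampling option for warp-III sampling):

```r
library(bridgesampling)

## Warp-III bridge sampling for a nested and a full model,
## each already fitted with rstan
ml_nested <- bridge_sampler(fit_nested, method = "warp3")
ml_full   <- bridge_sampler(fit_full,   method = "warp3")

bf(ml_full, ml_nested)    # Bayes factor for the full over the nested model
error_measures(ml_full)   # approximate error of the estimate
```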