2018
DOI: 10.1146/annurev-statistics-031017-100307
On p-Values and Bayes Factors

Abstract: The p-value quantifies the discrepancy between the data and a null hypothesis of interest, usually the assumption of no difference or no effect. A Bayesian approach allows the calibration of p-values by transforming them to direct measures of the evidence against the null hypothesis, so-called Bayes factors. We review the available literature in this area and consider two-sided significance tests for a point null hypothesis in more detail. We distinguish simple from local alternative hypotheses and contrast tr…

Cited by 224 publications (222 citation statements)
References 43 publications
“…Another calibration, which is directly based on the two-sided p-value p and can be derived along similar lines as the ‘−e p log(p)’ calibration (Held & Ott, section 2.3), is

minBF(p) = −e (1 − p) log(1 − p)  for p < 1 − 1/e,  and 1 otherwise.

This bound is obtained by replacing p by q = 1 − p and will thus be called the ‘−e q log(q)’ calibration in the sequel. Note that this calibration is a much lower bound than the minimum test-based Bayes factors.…”
Section: Large-sample Minimum Bayes Factors
confidence: 99%
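The two calibrations quoted above can be sketched directly. This is a minimal illustration, not code from the paper; the function names are my own:

```python
import math

def min_bf_ep_log_p(p: float) -> float:
    """Minimum Bayes factor from the '-e p log(p)' calibration:
    -e * p * log(p) for p < 1/e, and 1 otherwise."""
    if 0 < p < 1 / math.e:
        return -math.e * p * math.log(p)
    return 1.0

def min_bf_eq_log_q(p: float) -> float:
    """Variant obtained by substituting q = 1 - p:
    -e * (1 - p) * log(1 - p) for p < 1 - 1/e, and 1 otherwise."""
    q = 1.0 - p
    if p < 1 - 1 / math.e:
        return -math.e * q * math.log(q)
    return 1.0

# The q-based bound is far smaller for small p, consistent with the
# statement that it is a much lower bound:
# min_bf_ep_log_p(0.05) ~ 0.407, min_bf_eq_log_q(0.05) ~ 0.132
```

Evaluating both at the same p-value makes the "much lower bound" claim concrete: at p = 0.05 the q-based calibration is roughly a third of the p-based one.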
“…Note that this calibration is a much lower bound than the minimum test-based Bayes factors. In the linear model, the calibration has been shown (Held & Ott) to be a lower bound on the sample-size-adjusted minBF based on the g-prior for any sample size n ≥ d + 3, where d is the number of covariates in the model (standard regularity conditions require n ≥ d + 2). In fact, for n = d + 3, these minBFs converge from above to the bound as d → ∞, hence the subscript ∞ in the notation above.…”
Section: Large-sample Minimum Bayes Factors
confidence: 99%
“…Such calibrations not only support the intuition of Fraser, Reid, and Wong and others that an extremely low p-value indicates strong evidence against the null hypothesis but also support arguments against considering p-values near 0.05 as indicative of strong evidence (e.g., Sellke, Bayarri, & Berger). See Held and Ott for a recent review.…”
Section: Introduction
confidence: 99%
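The argument that p-values near 0.05 are not strong evidence can be made concrete by converting the −e p log(p) bound into a best-case posterior probability for the null hypothesis. This is an illustrative sketch assuming equal prior odds; the function name is hypothetical, not from the paper:

```python
import math

def posterior_prob_h0(p: float, prior_odds: float = 1.0) -> float:
    """Smallest posterior probability of H0 implied by the
    -e p log(p) bound on the Bayes factor, given prior odds for H0."""
    min_bf = -math.e * p * math.log(p) if 0 < p < 1 / math.e else 1.0
    post_odds = min_bf * prior_odds  # posterior odds = BF * prior odds
    return post_odds / (1.0 + post_odds)

# With equal prior odds, p = 0.05 still leaves H0 with at least ~29%
# posterior probability, while p = 0.001 drops it below ~2%.
```

Even under this most favorable bound, a result at the conventional 0.05 threshold cannot push the null below roughly 29% posterior probability, whereas an extremely low p-value does deliver strong evidence.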