2011
DOI: 10.1002/cjs.10109
A predictive approach to measuring the strength of statistical evidence for single and multiple comparisons

Abstract: The normalized maximum likelihood (NML) is a recent penalized likelihood that has properties that justify defining the amount of discrimination information (DI) in the data supporting an alternative hypothesis over a null hypothesis as the logarithm of an NML ratio, namely, the alternative hypothesis NML divided by the null hypothesis NML. The resulting DI, like the Bayes factor but unlike the P‐value, measures the strength of evidence for an alternative hypothesis over a null hypothesis such that the probabil…
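The abstract defines DI as the log of a ratio of normalized maximum likelihoods. A minimal sketch of that idea, assuming a Bernoulli model with a point null (the choice of model, the null value θ₀ = 0.5, and all function names here are illustrative assumptions, not taken from the paper):

```python
import math

def bernoulli_ml(k, n):
    # Maximized likelihood of a length-n binary sequence with k successes,
    # evaluated at the MLE theta = k/n. (Python's 0**0 == 1 handles k=0, k=n.)
    p = k / n
    return (p ** k) * ((1 - p) ** (n - k))

def nml_normalizer(n):
    # NML normalizing constant: sum of the maximized likelihood over all
    # 2**n sequences, grouped by the sufficient statistic k.
    return sum(math.comb(n, k) * bernoulli_ml(k, n) for k in range(n + 1))

def discrimination_information(k, n, theta0=0.5):
    # DI = log( NML_alt / NML_null ), in nats. The point null has no free
    # parameter, so its "NML" is simply its likelihood (normalizer = 1).
    nml_alt = bernoulli_ml(k, n) / nml_normalizer(n)
    nml_null = (theta0 ** k) * ((1 - theta0) ** (n - k))
    return math.log(nml_alt / nml_null)
```

With 9 successes in 10 trials the DI is positive (evidence against θ₀ = 0.5), while with 5 in 10 it is negative: the NML normalizer penalizes the alternative for its fitted parameter, so merely matching the null is not enough.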

Cited by 16 publications (25 citation statements); references 38 publications (39 reference statements).
“…Thus, in our data set, the NML ratio tends to estimate the Bayes factor almost as accurately as methods that simultaneously use information across GO terms. While we do not expect the same for all data sets, we note that similar results have been found for an application of a modified NML [29] to a proteomics data set [30]. …”
Section: Discussion (citation type: supporting; confidence: 79%)
“…where $\hat{\theta}_i(t_i \mid s_i)$ is a Type I MLE with respect to $\mathcal{H}$ under observed values $T_i$ given $S_i$ [28,29]. …”
Section: Methods (citation type: mentioning; confidence: 99%)
“…Let $P^{+}$ denote the combination of the distributions in $\bar{P}$ with truth constrained by $\dot{P}$. If $\dot{P} \cap \bar{P} \neq \emptyset$, then $P^{+} = \operatorname{cent}(\dot{P} \cap \bar{P}) = P_{W_{\dot{P} \cap \bar{P}}}$, (9) where $\operatorname{cent}(\dot{P} \cap \bar{P})$ is the centroid of $\dot{P} \cap \bar{P}$, and $W_{\dot{P} \cap \bar{P}}$ is the weighting distribution induced by $\dot{P} \cap \bar{P}$, as defined by Eq. (4).…”
Section: Distribution-Combination Game (citation type: mentioning; confidence: 99%)
“…Given the ubiquity of recognizable subsets (Buehler and Feddersen, 1963; Bondar, 1977), this strategy uses pre-data confidence as an approximation to post-data confidence, in the sense in which expected Fisher information approximates observed Fisher information (Efron and Hinkley, 1978), aiming not at exact inference but at a pragmatic use of the limited resources available for any particular data analysis. Certain situations may instead call for careful applications of conditional inference (Goutis and Casella, 1995; Sundberg, 2003; Fraser, 2004) or of minimum description length (Bickel, 2011b) for basing decisions more directly on the data actually observed.…”
Section: Motivation (citation type: mentioning; confidence: 99%)