Marginal likelihood computation for model selection and hypothesis testing: an extensive review
Preprint, 2020 | DOI: 10.48550/arxiv.2005.08334

Abstract: This is an up-to-date introduction to, and overview of, marginal likelihood computation for model selection and hypothesis testing. Computing normalizing constants of probability models (or ratios of constants) is a fundamental issue in many applications in statistics, applied mathematics, signal processing and machine learning. This article provides a comprehensive study of the state of the art on the topic. We highlight limitations, benefits, connections and differences among the different techniques. Problem…
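
The central object throughout the review is the normalizing constant Z(y) = ∫ ℓ(y|x) g(x) dx. As a concrete reference point, here is a minimal sketch of the naive Monte Carlo estimator, one of the baseline schemes typically covered in surveys of this kind; the conjugate Gaussian model and all variable names are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative conjugate Gaussian model (an assumption, not from the paper):
# prior      g(x)   = N(x; 0, 1)
# likelihood ℓ(y|x) = N(y; x, 1)
y = 0.5

def likelihood(x, y):
    return np.exp(-0.5 * (y - x) ** 2) / np.sqrt(2.0 * np.pi)

# Naive Monte Carlo: Z(y) ≈ (1/N) Σ_i ℓ(y|x_i), with x_i drawn from the prior
N = 100_000
x = rng.standard_normal(N)
Z_hat = likelihood(x, y).mean()

# For this conjugate pair, Z(y) = N(y; 0, 2) in closed form, so the
# estimate can be checked exactly.
Z_true = np.exp(-0.25 * y ** 2) / np.sqrt(4.0 * np.pi)
print(Z_hat, Z_true)
```

In realistic models no closed-form check exists, and naive prior sampling degrades quickly as the posterior concentrates relative to the prior, which is what motivates the more elaborate families of estimators the review compares.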

Cited by 23 publications (53 citation statements) | References 60 publications

“…where ℓ(y|x) is the likelihood function, g(x) is the prior pdf, and Z(y) is the model evidence (a.k.a. marginal likelihood), which is a useful quantity in model selection problems [24]. For simplicity, in the following, we skip the dependence on y in p(x) = p(x|y) and Z = Z(y).…”
Section: Bayesian Inference
confidence: 99%

Llorente, Martino, Read et al. (2022). Optimality in Noisy Importance Sampling. Preprint. [Self-citation]
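
Since the citing work concerns importance sampling, a sketch of the standard unbiased importance sampling estimator of Z may help fix ideas. The proposal q(x) = N(y, 1) and the toy Gaussian model are assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
y = 0.5

# Same illustrative Gaussian model as above (an assumption):
# g(x) = N(0, 1), ℓ(y|x) = N(y; x, 1)
# Importance sampling with proposal q(x) = N(y, 1):
# Z ≈ (1/N) Σ_i ℓ(y|x_i) g(x_i) / q(x_i), with x_i drawn from q
N = 100_000
x = rng.normal(loc=y, scale=1.0, size=N)            # draws from the proposal q
w = stats.norm.pdf(y, loc=x) * stats.norm.pdf(x) / stats.norm.pdf(x, loc=y)
print(w.mean())                                     # unbiased estimate of Z(y)
```

Centering the proposal on the data y places samples where the integrand ℓ(y|x) g(x) is large, which typically lowers the variance relative to sampling from the prior; the noisy setting studied in the citing paper replaces the exact likelihood evaluations above with noisy ones.
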
“…Then, in the second part (Section 5), we also estimate the marginal posterior p(σ|y). Finally, using (12), we can obtain a final approximation of the complete posterior p(θ, σ|y). Estimations of Z(σ) and Z are also obtained.…”
Section: Problem Statement
confidence: 99%
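
The identity labelled (12) in the citing paper is not reproduced on this page. A plausible reading, consistent with the notation in the quote (this reconstruction is an assumption, not taken from the source), is the standard decomposition

p(θ, σ|y) = p(θ|σ, y) p(σ|y),  with  Z = ∫ Z(σ) g(σ) dσ,

where Z(σ) denotes the conditional marginal likelihood of the observation model at a fixed scale parameter σ.
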
“…However, the user should decide a temperature schedule, i.e., a decreasing rule for the scale parameter, which is usually chosen in a heuristic way. In the literature, the tempering procedure has gained particular attention for the estimation of the marginal likelihood (a.k.a. Bayesian model evidence) [9,11,12]. Furthermore, the joint inference of the parameters (denoted as θ) of observation models, f(θ), and the scale parameters of the likelihood function (which, in the scalar case, is usually denoted as σ) can be a hard task.…”
Section: Introduction
confidence: 99%
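
The tempering procedure mentioned in the quote is commonly implemented for evidence estimation via the power posterior (thermodynamic integration) identity, log Z = ∫₀¹ E_β[log ℓ(y|θ)] dβ, where E_β denotes expectation under p_β(θ|y) ∝ ℓ(y|θ)^β g(θ). Below is a minimal sketch with a heuristic power-law temperature schedule; the conjugate Gaussian model is an illustrative assumption, chosen so that each tempered distribution can be sampled exactly rather than by MCMC:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
y = 0.5

# Illustrative conjugate model (an assumption): g(θ) = N(0, 1), ℓ(y|θ) = N(y; θ, 1).
# The power posterior p_β(θ|y) ∝ ℓ(y|θ)^β g(θ) stays Gaussian here, so each
# tempered distribution can be sampled exactly (no MCMC needed in the sketch).
betas = np.linspace(0.0, 1.0, 21) ** 3        # heuristic schedule, denser near β = 0
E_loglik = np.empty_like(betas)
for i, beta in enumerate(betas):
    prec = beta + 1.0                          # tempered posterior precision
    theta = rng.normal(beta * y / prec, 1.0 / np.sqrt(prec), size=50_000)
    E_loglik[i] = stats.norm.logpdf(y, loc=theta).mean()

# Thermodynamic integration: log Z = ∫₀¹ E_β[log ℓ(y|θ)] dβ  (trapezoid rule)
logZ_hat = np.sum(np.diff(betas) * (E_loglik[1:] + E_loglik[:-1]) / 2.0)
print(logZ_hat, stats.norm.logpdf(y, scale=np.sqrt(2.0)))  # exact value for comparison
```

Placing more temperatures near β = 0, as the cubic schedule does, is a common heuristic because the integrand changes fastest there; the quote's point is precisely that this choice is rarely automatic.
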
“…Finally, in order to aid truly Bayesian model selection of priors, it would be useful to be able to estimate marginal likelihoods from our Markov chains. This is generally a challenging problem, but there are a few promising solutions [20,21]. With these estimates, it could even be possible to learn useful BNN priors entirely from scratch [22].…”
Section: Limitations
confidence: 99%
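
One classical way to recycle Markov chain output for evidence estimation, in the spirit of the solutions the quote points to, is the harmonic mean identity 1/Z = E_{p(θ|y)}[1/ℓ(y|θ)]. It is well known to be unstable (often infinite variance), so the sketch below is a baseline, not a recommendation; exact posterior draws stand in for MCMC samples, and the Gaussian model is an illustrative assumption:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
y = 0.5

# Posterior for the illustrative model g = N(0, 1), ℓ = N(y; θ, 1) is
# N(y/2, 1/2); exact draws stand in for MCMC output here (an assumption).
theta = rng.normal(y / 2.0, np.sqrt(0.5), size=100_000)

# Harmonic mean identity: 1/Z = E_{p(θ|y)}[ 1 / ℓ(y|θ) ].
# Shown only as the simplest "estimate Z from chain output" baseline;
# in many problems this estimator has infinite variance.
inv_lik = 1.0 / stats.norm.pdf(y, loc=theta)
Z_hat = 1.0 / inv_lik.mean()
print(Z_hat, np.exp(stats.norm.logpdf(y, scale=np.sqrt(2.0))))
```

More robust chain-based alternatives, such as bridge sampling or Chib-style estimators, are among the families compared in the review under discussion.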