2013
DOI: 10.1093/mnras/stt2190
Comparison of sampling techniques for Bayesian parameter estimation

Abstract: The posterior probability distribution for a set of model parameters encodes all that the data have to tell us in the context of a given model; it is the fundamental quantity for Bayesian parameter estimation. In order to infer the posterior probability distribution we have to decide how to explore parameter space. Here we compare three prescriptions for how parameter space is navigated, discussing their relative merits. We consider Metropolis-Hastings sampling, nested sampling and affine-invariant ensemble MCM…

Cited by 63 publications (41 citation statements)
References 29 publications
“…Convergence of the samples was ensured by letting the random walks proceed for multiple integrated auto-correlation times τ_int after removing all samples drawn during the initial burn-in period. This procedure was advocated by Akeret et al (2013) and Allison & Dunkley (2014), who provided a detailed discussion of convergence diagnostics with the auto-correlation times described above.…”
Section: Efficient Statistical Sampling
Citation type: mentioning (confidence: 99%)
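The convergence check described in the quotation above can be sketched in a few lines. The following is a minimal illustration, not the cited authors' implementation: it estimates the integrated auto-correlation time τ_int of a single chain with the standard FFT-plus-windowing approach (the window factor c = 5 and the AR(1) test chain are assumptions chosen for the example) and discards a burn-in before estimating.

```python
import numpy as np

def integrated_autocorr_time(chain, c=5.0):
    """Estimate the integrated auto-correlation time tau_int of a 1-D chain.

    Uses the FFT for the auto-correlation function and Sokal's windowing
    rule: truncate the sum at the smallest lag M with M >= c * tau(M).
    """
    x = np.asarray(chain, dtype=float) - np.mean(chain)
    n = len(x)
    # Auto-covariance via FFT; zero-padding to 2n avoids circular wrap-around.
    f = np.fft.rfft(x, n=2 * n)
    acf = np.fft.irfft(f * np.conjugate(f))[:n]
    acf /= acf[0]
    tau = 1.0 + 2.0 * np.cumsum(acf[1:])   # tau[j] sums lags 1..j+1
    window = np.arange(1, n)               # window[j] = j + 1
    ok = window >= c * tau
    m = np.argmax(ok) if np.any(ok) else n - 2
    return tau[m]

# Hypothetical test chain: AR(1) with rho = 0.9, whose exact
# integrated auto-correlation time is (1 + rho) / (1 - rho) = 19.
rng = np.random.default_rng(0)
rho, n = 0.9, 100_000
eps = rng.normal(size=n)
chain = np.empty(n)
chain[0] = eps[0]
for i in range(1, n):
    chain[i] = rho * chain[i - 1] + eps[i]

burn_in = 1_000                        # discard the initial transient
tau = integrated_autocorr_time(chain[burn_in:])
n_eff = (n - burn_in) / tau            # effective number of independent samples
```

Running the chain for "multiple" τ_int, as the quotation recommends, amounts to requiring n_eff to be comfortably large for every parameter.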
“…In future, we intend to extend the functionality to include nested sampling [27], which will improve the robustness of the evidence calculation (see, e.g., Ref. [28] for a comparison).…”
Section: Glitching vs Standard-CW Bayes Factor
Citation type: mentioning (confidence: 99%)
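For context, the evidence calculation that nested sampling provides can be sketched as follows. This is a deliberately naive toy version, not the cited implementation: the 2-D Gaussian likelihood, the box prior, and the rejection-from-the-prior replacement step are assumptions chosen for illustration, and production codes replace that rejection step with constrained sampling schemes.

```python
import numpy as np

def log_like(theta):
    # Hypothetical toy likelihood: 2-D unit Gaussian
    return -0.5 * theta @ theta - np.log(2.0 * np.pi)

def prior_sample(size, rng):
    # Uniform prior over the box [-5, 5]^2 (prior mass integrates to 1)
    return rng.uniform(-5.0, 5.0, size=(size, 2))

def nested_evidence(n_live=200, n_iter=1200, seed=1):
    """Estimate ln Z = ln of the integral of L(theta) pi(theta) d(theta).

    Uses the deterministic shrinkage X_i ~ exp(-i / n_live) and replaces the
    worst live point by plain rejection sampling from the prior, which is
    only feasible in low dimensions.
    """
    rng = np.random.default_rng(seed)
    live = prior_sample(n_live, rng)
    logL = np.array([log_like(p) for p in live])
    logZ, logX = -np.inf, 0.0
    for i in range(n_iter):
        worst = np.argmin(logL)
        logX_new = -(i + 1) / n_live
        log_w = logX + np.log1p(-np.exp(logX_new - logX))  # ln(X_i - X_{i+1})
        logZ = np.logaddexp(logZ, log_w + logL[worst])
        # Draw from the prior until the hard constraint L > L_worst is met.
        while True:
            cand = prior_sample(1, rng)[0]
            if log_like(cand) > logL[worst]:
                break
        live[worst], logL[worst] = cand, log_like(cand)
        logX = logX_new
    # Remaining live points share the leftover prior mass equally.
    logZ = np.logaddexp(logZ, logX - np.log(n_live) + np.logaddexp.reduce(logL))
    return logZ

logZ_est = nested_evidence()
# Analytic answer for this toy problem: ln Z = ln(1/100), about -4.61,
# since the Gaussian lies almost entirely inside the prior box.
```

The robustness advantage mentioned in the quotation comes from the fact that Z is accumulated directly as the sum of likelihood-weighted prior-mass shells, rather than recovered indirectly from posterior samples.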
“…Ellipsoidal sampling [3] replaces the iso-likelihood surface L = L_n by a hyper-ellipsoid given by the covariance matrix of the living samples and centred on their mean value, and L_new > L_n is sampled within the intersection of the domain of integration and this ellipsoidal boundary [7]. When the integrand in (3) is a multimodal function, a possible strategy is to partition the set of all living points into clusters, and then enclose each cluster in a "small" hyper-ellipsoid.…”
Section: Ellipsoidal Nested Sampling
Citation type: mentioning (confidence: 99%)
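The ellipsoidal construction described in the quotation (a hyper-ellipsoid from the covariance matrix of the live points, centred on their mean) can be sketched as follows. The enlargement factor and the unit-ball sampling map are standard choices assumed for the example, not details taken from the cited paper.

```python
import numpy as np

def bounding_ellipsoid(points, enlarge=1.1):
    """Ellipsoid {x : (x - mu)^T A^{-1} (x - mu) <= 1} built from the sample
    covariance of the live points, scaled so that every point is enclosed."""
    mu = points.mean(axis=0)
    cov = np.cov(points, rowvar=False)
    inv = np.linalg.inv(cov)
    d = points - mu
    # Largest squared Mahalanobis radius among the points sets the scale.
    r2 = np.max(np.einsum('ij,jk,ik->i', d, inv, d))
    return mu, cov * r2 * enlarge

def sample_ellipsoid(mu, A, n, rng):
    """Draw n points uniformly inside the ellipsoid: map uniform draws from
    the unit ball through the Cholesky factor of A."""
    dim = len(mu)
    L = np.linalg.cholesky(A)
    g = rng.normal(size=(n, dim))
    g /= np.linalg.norm(g, axis=1, keepdims=True)  # uniform on the unit sphere
    r = rng.uniform(size=(n, 1)) ** (1.0 / dim)    # radius law for a uniform ball
    return mu + (g * r) @ L.T

# Hypothetical live-point set from some correlated 2-D distribution.
rng = np.random.default_rng(3)
pts = rng.normal(size=(100, 2)) @ np.array([[2.0, 0.3], [0.0, 0.5]])
mu, A = bounding_ellipsoid(pts)
new = sample_ellipsoid(mu, A, 500, rng)
```

In an actual nested-sampling step one would keep from `new` only the draws satisfying L_new > L_n and lying inside the domain of integration, which is the intersection the quotation refers to.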