2016
DOI: 10.1007/s11222-016-9640-7

The use of a single pseudo-sample in approximate Bayesian computation

Abstract: We analyze the computational efficiency of approximate Bayesian computation (ABC), which approximates a likelihood function by drawing pseudo-samples from the associated model. For the rejection sampling version of ABC, it is known that multiple pseudo-samples cannot substantially increase (and can substantially decrease) the efficiency of the algorithm as compared to employing a high-variance estimate based on a single pseudo-sample. We show that this conclusion also holds for a Markov chain Monte Carlo version…
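The mechanism the abstract describes can be sketched as a toy ABC rejection sampler in which the likelihood is estimated by the fraction of K pseudo-samples falling within a tolerance of the observation. The Gaussian model, prior, and tolerance below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy model (an assumption, not the paper's example):
# prior theta ~ N(0, 10), data y | theta ~ N(theta, 1).
y_obs = 1.5
eps = 0.3  # ABC tolerance

def abc_rejection(n_accept, n_pseudo=1):
    """ABC rejection sampling: accept theta with probability equal to the
    fraction of its n_pseudo pseudo-samples landing within eps of y_obs."""
    accepted = []
    while len(accepted) < n_accept:
        theta = rng.normal(0.0, np.sqrt(10.0))          # draw from the prior
        pseudo = rng.normal(theta, 1.0, size=n_pseudo)  # simulate pseudo-data
        p_hat = np.mean(np.abs(pseudo - y_obs) < eps)   # likelihood estimate
        if rng.uniform() < p_hat:
            accepted.append(theta)
    return np.array(accepted)

# K = 1: the single-pseudo-sample variant analyzed in the paper.
samples = abc_rejection(200, n_pseudo=1)
```

Raising `n_pseudo` lowers the variance of `p_hat` but costs proportionally more model simulations per proposal; the paper's point is that this trade-off generally does not pay off relative to `n_pseudo = 1`.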

Cited by 32 publications (43 citation statements)
References 23 publications
“…McKinley et al (2009) demonstrated that repeating simulations of the data given θ does not seem to contribute (much) to an improved approximation of the posterior by the ABC sample. This observation appears to be consistent with the findings of Bornn et al (2016) who proved that K = 1 is usually very close to optimal. A review of ABC-MCMC algorithms can be found in Sisson and Fan (2011).…”
supporting
confidence: 92%
“…Del Moral et al (2012). Bornn et al (2015) showed that M = 1 usually represents the best variance vs CPU time trade-off when using Monte Carlo sampling, however we shall see that this result does not hold when using QMC. Later on in the paper, we shall consider an alternative unbiased estimator, based on properties of the negative binomial distribution.…”
Section: Pseudo-marginal Importance Sampling
mentioning
confidence: 75%
“…Mixing can generally be improved by increasing the tolerance(s), but at the cost of further information loss in the approximate posterior. Under the same assumptions as above, Bornn et al [2017] show that for a simple rejection sampling ABC algorithm, n = 1 is indeed optimal. In practice ABC-SMC samplers, such as described in Algorithm 1.1b seem to perform better for low n [see e.g.…”
Section: Increasing the Number of Replicates (n > 1)
mentioning
confidence: 82%
“…Pitt et al, 2012, Sherlock et al, 2015, Doucet et al, 2015. In the specific case of ABC-MCMC with uniform matching, Bornn et al [2017] show that setting n = 1 results in run times that are at most a factor of 2 away from the optimum choice (obtained for some n > 1). However, their results also make the assumption that simulation run times are approximately constant, which is often not true for epidemic systems, where run times for individual simulations can often vary greatly even for fixed parameter inputs.…”
Section: Increasing the Number of Replicates (n > 1)
mentioning
confidence: 96%
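The ABC-MCMC algorithm these citation statements discuss can be sketched as a pseudo-marginal Metropolis–Hastings chain: the likelihood estimate for the current state is recycled rather than redrawn, which keeps the chain targeting the ABC posterior even with a single pseudo-sample per proposal. The model, proposal, and tolerance below are illustrative assumptions, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative toy model (an assumption): prior theta ~ N(0, 10),
# data y | theta ~ N(theta, 1), uniform matching kernel of width eps.
y_obs = 1.5
eps = 0.5

def prior_density(theta):
    return np.exp(-0.5 * theta ** 2 / 10.0)  # N(0, 10), up to a constant

def abc_lik_hat(theta, n_pseudo):
    """Estimate the ABC likelihood as the fraction of n_pseudo
    pseudo-samples matching y_obs to within eps (uniform matching)."""
    pseudo = rng.normal(theta, 1.0, size=n_pseudo)
    return np.mean(np.abs(pseudo - y_obs) < eps)

def abc_mcmc(n_iter, n_pseudo):
    theta = 0.0
    lik = abc_lik_hat(theta, n_pseudo)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + rng.normal(0.0, 1.0)        # random-walk proposal
        lik_prop = abc_lik_hat(prop, n_pseudo)
        # Pseudo-marginal MH ratio: the current state keeps its old
        # likelihood estimate instead of drawing fresh pseudo-samples.
        num = lik_prop * prior_density(prop)
        den = lik * prior_density(theta)
        if den == 0.0 or rng.uniform() * den < num:
            theta, lik = prop, lik_prop
        chain[i] = theta
    return chain

chain = abc_mcmc(2000, n_pseudo=1)  # the n = 1 case discussed above
```

With `n_pseudo = 1` the estimate here is just 0 or 1, so acceptance is noisy, yet the chain still has the correct ABC target; larger `n_pseudo` smooths acceptance at a proportional simulation cost, which is the trade-off (at most a factor of 2 from optimal) that Bornn et al. quantify.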