2020
DOI: 10.1007/s11222-020-09933-x

Likelihood-free approximate Gibbs sampling

Abstract: Likelihood-free methods such as approximate Bayesian computation (ABC) have extended the reach of statistical inference to problems with computationally intractable likelihoods. Such approaches perform well for small-to-moderate dimensional problems, but suffer a curse of dimensionality in the number of model parameters. We introduce a likelihood-free approximate Gibbs sampler that naturally circumvents the dimensionality issue by focusing on lower-dimensional conditional distributions. These distributions are…
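To make the idea in the abstract concrete, here is a minimal sketch of a likelihood-free Gibbs sweep, assuming user-supplied `simulate`, `summary`, and `prior_draw` callables (all hypothetical names, not the authors' implementation). Each component update is a low-dimensional ABC problem; note that the paper approximates the conditional distributions with regression models (see the citation statements below), whereas this sketch uses a plain rejection step for simplicity.

```python
import numpy as np

def abc_conditional_draw(j, theta, s_obs, simulate, summary,
                         prior_draw, eps, rng, max_tries=10_000):
    """Approximate a draw from theta[j] | theta[-j], data via rejection ABC:
    propose theta[j] from its prior with the other components held fixed,
    and accept when the simulated summaries fall within eps of s_obs."""
    for _ in range(max_tries):
        prop = theta.copy()
        prop[j] = prior_draw(j, rng)
        if np.linalg.norm(summary(simulate(prop)) - s_obs) < eps:
            return prop[j]
    return theta[j]  # keep the current value if nothing is accepted

def abc_gibbs(y_obs, theta0, simulate, summary, prior_draw,
              eps=0.5, n_iter=1_000, seed=0):
    """Gibbs sweep over the components of theta; every update is a
    one-dimensional ABC problem, which is what sidesteps the curse of
    dimensionality in the joint parameter space."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    s_obs = summary(y_obs)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        for j in range(theta.size):
            theta[j] = abc_conditional_draw(j, theta, s_obs, simulate,
                                            summary, prior_draw, eps, rng)
        chain[t] = theta
    return chain
```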

Cited by 14 publications (9 citation statements)
References 59 publications
“…We focused on approaches using neural networks for density estimation and did not compare to alternatives using Gaussian Processes (e.g., Meeds and Welling, 2014; Wilkinson, 2014). There are many other algorithms which the benchmark is currently lacking (e.g., Nott et al., 2014; Ong et al., 2018; Clarté et al., 2020; Prangle, 2019; Picchini et al., 2020; Radev et al., 2020; Rodrigues et al., 2020). Keeping our initial selection small allowed us to carefully investigate hyperparameter choices.…”
Section: Limitations
confidence: 99%
“…In recent years, several new SBI algorithms have been developed (e.g., Prangle, 2019; Järvenpää et al., 2020; Picchini et al., 2020; Rodrigues et al., 2020; …), energized, in part, by advances in probabilistic machine learning (Rezende and Mohamed, 2015; …). Despite, or possibly because of, these rapid and exciting developments, it is currently difficult to assess how different approaches relate to each other theoretically and empirically: First, different studies often use different tasks and metrics for comparison, and comprehensive comparisons on multiple tasks and simulation budgets are rare.…”
Section: Introduction
confidence: 99%
“…MCMC procedures can also be used to generate samples from the approximate posterior distributions [Marjoram et al., 2003, Marjoram, 2013, Vihola and Franks, 2020], and improving their applicability to high-dimensional problems is an ongoing research problem [Rodrigues et al., 2020, Clarté et al., 2020].…”
Section: Recent Progress in Methods Development
confidence: 99%
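For context on the ABC-MCMC idea referenced in this statement (Marjoram et al., 2003), a minimal sketch, reusing the same hypothetical `simulate` and `summary` callables: the intractable likelihood ratio in Metropolis-Hastings is replaced by a check that data simulated at the proposal lands within a tolerance of the observations.

```python
import numpy as np

def abc_mcmc(y_obs, theta0, simulate, summary, log_prior,
             eps=0.5, step=0.1, n_iter=10_000, rng=None):
    """ABC-MCMC with a symmetric random-walk proposal: the likelihood
    ratio is replaced by a tolerance check on simulated summaries."""
    rng = rng or np.random.default_rng()
    s_obs = summary(y_obs)
    theta = np.asarray(theta0, dtype=float)
    chain = np.empty((n_iter, theta.size))
    for t in range(n_iter):
        prop = theta + step * rng.standard_normal(theta.size)
        # Symmetric proposal, so the MH ratio reduces to the prior ratio;
        # the move is then kept only if simulated data lands within eps
        # of the observed summaries (short-circuiting skips the expensive
        # simulation whenever the prior test already rejects).
        if np.log(rng.uniform()) < log_prior(prop) - log_prior(theta) and \
           np.linalg.norm(summary(simulate(prop)) - s_obs) < eps:
            theta = prop
        chain[t] = theta
    return chain
```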
“…Although the rejection ABC algorithm is still being used frequently as a comparison method in the ABC and LFI literature, there are few applications where it would not be beneficial to instead take advantage of more sophisticated versions of this basic algorithm [Beaumont et al., 2009, Blum, 2010, Csilléry et al., 2010, Marin et al., 2012, Moral et al., 2012, Clarté et al., 2020, Rodrigues et al., 2020]. The ABC-PMC approach by Beaumont et al. [2009] is an extension of the rejection ABC algorithm based on importance sampling, and aims to improve the efficiency of the procedure by retrieving a sequence of intermediate distributions.…”
Section: ABC-PMC Algorithm
confidence: 99%
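The ABC-PMC scheme described in this statement can be sketched as follows, again with hypothetical `simulate`, `summary`, `prior_draw`, and `prior_pdf` callables and a scalar parameter for brevity. The decreasing tolerance sequence and Gaussian perturbation kernel follow the general recipe of Beaumont et al. [2009]; the specific defaults are illustrative, not taken from any cited implementation.

```python
import numpy as np
from scipy.stats import norm

def abc_pmc(y_obs, simulate, summary, prior_draw, prior_pdf,
            eps_seq=(2.0, 1.0, 0.5), n_particles=500, rng=None):
    """ABC-PMC for a scalar parameter; returns weighted particles."""
    rng = rng or np.random.default_rng()
    s_obs = np.atleast_1d(summary(y_obs))

    def draw_until_accept(propose, eps):
        while True:
            th = propose()
            if prior_pdf(th) > 0 and \
               np.linalg.norm(np.atleast_1d(summary(simulate(th))) - s_obs) < eps:
                return th

    # Generation 0: plain rejection ABC from the prior.
    particles = np.array([draw_until_accept(prior_draw, eps_seq[0])
                          for _ in range(n_particles)])
    weights = np.full(n_particles, 1.0 / n_particles)

    for eps in eps_seq[1:]:
        # Kernel scale: twice the weighted variance of the current particles.
        m = weights @ particles
        tau = np.sqrt(2.0 * (weights @ (particles - m) ** 2))
        propose = lambda: particles[rng.choice(n_particles, p=weights)] \
                          + tau * rng.standard_normal()
        new = np.array([draw_until_accept(propose, eps)
                        for _ in range(n_particles)])
        # Importance weight: prior density over the mixture proposal density.
        kern = norm.pdf(new[:, None], loc=particles[None, :], scale=tau)
        w = np.array([prior_pdf(t) for t in new]) / (kern * weights).sum(axis=1)
        particles, weights = new, w / w.sum()

    return particles, weights
```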
“…Kousathanas et al. (2016) also run a Gibbs-like ABC algorithm that assumes the availability of conditionally sufficient statistics to preserve the coherence of the algorithm. Rodrigues et al. (2020) propose another Gibbs-like ABC algorithm in which the conditional distributions are approximated by regression models.…”
Section: Introduction
confidence: 99%
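A hedged sketch of the regression idea this statement attributes to Rodrigues et al. (2020): fit a regression of each parameter component on the summary statistics and the remaining components using pilot prior-predictive simulations, then draw the Gibbs updates from the fitted conditionals. The linear model and Gaussian residuals below are illustrative choices, not the paper's estimator.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def fit_conditionals(thetas, summaries):
    """From pilot prior-predictive runs (thetas[i] produced summaries[i]),
    fit one regression per component: theta_j ~ (summaries, theta_{-j})."""
    models = []
    for j in range(thetas.shape[1]):
        X = np.hstack([summaries, np.delete(thetas, j, axis=1)])
        model = LinearRegression().fit(X, thetas[:, j])
        resid_sd = np.std(thetas[:, j] - model.predict(X))
        models.append((model, resid_sd))
    return models

def regression_gibbs_sweep(theta, s_obs, models, rng):
    """One Gibbs sweep: each component is drawn from its fitted
    (Gaussian) approximate conditional given s_obs and the rest."""
    for j, (model, sd) in enumerate(models):
        x = np.hstack([s_obs, np.delete(theta, j)])[None, :]
        theta[j] = model.predict(x)[0] + sd * rng.standard_normal()
    return theta
```

Unlike the rejection-based sweep sketched after the abstract, these updates need no fresh simulations at sampling time; all forward-model cost is paid once when building the pilot training set.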