2019
DOI: 10.1111/rssb.12316

Scalable Importance Tempering and Bayesian Variable Selection

Abstract: We propose a Monte Carlo algorithm to sample from high dimensional probability distributions that combines Markov chain Monte Carlo and importance sampling. We provide a careful theoretical analysis, including guarantees on robustness to high dimensionality, explicit comparison with standard Markov chain Monte Carlo methods and illustrations of the potential improvements in efficiency. Simple and concrete intuition is provided for when the novel scheme is expected to outperform standard schemes. When a…
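
As a rough illustration of the combination the abstract describes, here is a minimal, generic importance-tempering sketch: run Metropolis on a flattened target $\pi^\beta$ and reweight the draws back to $\pi$. This is not the paper's tempered Gibbs scheme; the bimodal target, the temperature $\beta$, and the step size are illustrative assumptions.

```python
# Generic importance tempering: MCMC on a flattened target pi**beta,
# then self-normalised importance weights pi**(1-beta) to recover
# expectations under pi. Target and tuning constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Bimodal 1-d example target: mixture of N(-3, 1) and N(3, 1), up to a constant.
    return np.logaddexp(-0.5 * (x - 3.0) ** 2, -0.5 * (x + 3.0) ** 2)

beta = 0.3                    # temperature: pi**beta is flatter, so it mixes faster
n_iter, step = 50_000, 2.0
x = 0.0
xs = np.empty(n_iter)
for t in range(n_iter):
    prop = x + step * rng.normal()
    # Metropolis accept/reject on the tempered target pi**beta
    if np.log(rng.uniform()) < beta * (log_target(prop) - log_target(x)):
        x = prop
    xs[t] = x

# Importance weights pi(x) / pi(x)**beta = pi(x)**(1 - beta)
log_w = (1.0 - beta) * log_target(xs)
w = np.exp(log_w - log_w.max())
est = np.sum(w * xs**2) / np.sum(w)   # estimate E_pi[X^2] (true value is 10 here)
print(f"importance-tempering estimate of E[X^2]: {est:.2f}")
```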

Cited by 33 publications (47 citation statements); references 42 publications.

“…Unfortunately, both the Bayes factors and the marginal relevance assessment have difficulties that make them unsatisfactory in our opinion. Firstly, posterior inference via MCMC for the multimodal posteriors resulting from one of the sparsifying priors can be a challenge in high-dimensional feature spaces, although sophisticated sampling techniques can alleviate this problem (see, e.g., Zanella and Roberts, 2019). Secondly, for a large number of features p, the Bayes factors typically have high Monte Carlo errors because only a vanishingly small proportion of the $2^p$ models is visited during MCMC, and almost all models are not visited at all.…”
Section: Bayes Factors and Marginal Posterior Relevance Assessment
confidence: 99%
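
To make the $2^p$ point concrete: with $p = 50$ candidate features there are $2^{50} \approx 1.1 \times 10^{15}$ models, so an MCMC run of $10^6$ iterations visits at most a fraction $10^6 / 2^{50} \approx 9 \times 10^{-10}$ of the model space, and any single model's estimated posterior probability rests on very few visits.
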
“…If said integrals can be obtained quickly, one can often use relatively simple algorithms to explore the model space effectively. For example, one may rely on the fast convergence of Metropolis–Hastings moves when posterior model probabilities concentrate (Yang et al., 2016), on sequential Monte Carlo methods to lower the cost of model search (Schäfer & Chopin, 2013), on tempering strategies to explore model spaces with strong multi-modalities (Zanella & Roberts, 2019), or on adaptive Markov chain Monte Carlo to reduce the effort spent exploring low posterior probability models (Griffin et al., 2020). Unfortunately, except for very specific settings such as Gaussian regression under conjugate priors, the integrated likelihood has no closed form, which seriously hampers scaling computations to even moderate dimensions.…”
Section: Figure
confidence: 99%
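
As a hedged sketch of the "closed form under conjugate priors" caveat above, the following computes the log marginal likelihood of a Gaussian regression model under Zellner's g-prior with a Jeffreys prior on the noise variance. The prior choice, the value of g, and the function name are illustrative assumptions, not the cited papers' exact setup.

```python
# Closed-form log marginal likelihood for Gaussian regression under a
# g-prior (illustrative conjugate setup, up to a model-independent constant).
import numpy as np

def log_marginal_g_prior(y, X, gamma, g):
    """log p(y | gamma) up to a constant shared by all models.

    y: (n,) response; X: (n, p) design; gamma: (p,) boolean model indicator.
    Model: y ~ N(X_g b, s2 I), b | s2 ~ N(0, g s2 (X_g'X_g)^-1), p(s2) ~ 1/s2.
    """
    n = len(y)
    p_gamma = int(gamma.sum())
    yy = y @ y
    if p_gamma == 0:
        return -0.5 * n * np.log(yy)          # null model
    Xg = X[:, gamma]
    # y' H y, with H the hat matrix of X_gamma, obtained via least squares
    coef, *_ = np.linalg.lstsq(Xg, y, rcond=None)
    yHy = y @ (Xg @ coef)
    S = yy - (g / (1.0 + g)) * yHy            # shrunken residual sum of squares
    return -0.5 * p_gamma * np.log(1.0 + g) - 0.5 * n * np.log(S)
```

Differences of these values across models give log Bayes factors, which is what makes simple model-space MCMC feasible in this conjugate setting.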
“…We note that if there is strong correlation between any two covariates, the coordinate-wise updates, in Lines 2 to 5, might lead to slow mixing. In such situations, for example, the tempered Gibbs sampler as proposed in Zanella and Roberts (2019) can be applied. However, we note that it is not feasible to apply the tempered Gibbs sampler to all steps, because the density $p(\sigma_1^2 \mid \boldsymbol{\beta}, \mathbf{z}, \mathbf{y}, X)$, in Line 7, cannot be evaluated analytically (see Section 5.4).…”
Section: Estimation of Model Probabilities
confidence: 99%
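
To illustrate the kind of coordinate-wise tempering being referenced, here is a minimal sketch in the spirit of the tempered Gibbs sampler of Zanella and Roberts (2019), on a strongly correlated bivariate Gaussian where plain Gibbs mixes slowly: coordinates are selected with probability proportional to the ratio of a modified conditional to the true one, the update is drawn from the modified conditional, and importance weights correct the output. The mixture form of the modified conditional and all tuning constants are illustrative assumptions; see the paper for the exact scheme and its theory.

```python
# Sketch of a tempered Gibbs sampler on a bivariate N(0, [[1, rho], [rho, 1]]).
import numpy as np

rng = np.random.default_rng(1)
rho, beta = 0.99, 0.1                  # correlation; tempering exponent
cond_sd = np.sqrt(1.0 - rho**2)        # sd of the true conditional f(x_i | x_-i)
temp_sd = cond_sd / np.sqrt(beta)      # sd of the flattened conditional

def npdf(x, mean, sd):
    return np.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))

def ratio(xi, mean):
    """zeta_i = g(x_i | x_-i) / f(x_i | x_-i), with g = 0.5 f + 0.5 f_tempered."""
    f = npdf(xi, mean, cond_sd)
    g = 0.5 * f + 0.5 * npdf(xi, mean, temp_sd)
    return g / f

x = np.zeros(2)
states, weights = [], []
for _ in range(20_000):
    means = rho * x[::-1]                      # conditional means of x0 and x1
    z = np.array([ratio(x[0], means[0]), ratio(x[1], means[1])])
    weights.append(1.0 / z.mean())             # importance weight of current state
    states.append(x.copy())
    i = rng.choice(2, p=z / z.sum())           # informed coordinate selection
    # draw x_i from the mixture conditional g
    sd = cond_sd if rng.uniform() < 0.5 else temp_sd
    x[i] = means[i] + sd * rng.normal()

states, weights = np.array(states), np.array(weights)
# Weighted averages correct for sampling from the modified chain
print("E[x0] estimate:", np.sum(weights * states[:, 0]) / weights.sum())
```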