2021
DOI: 10.3390/e23010123

Variationally Inferred Sampling through a Refined Bound

Abstract: In this work, a framework to boost the efficiency of Bayesian inference in probabilistic models is introduced by embedding a Markov chain sampler within a variational posterior approximation. We call this framework “refined variational approximation”. Its strengths are its ease of implementation and the automatic tuning of sampler parameters, leading to a faster mixing time through automatic differentiation. Several strategies to approximate the evidence lower bound (ELBO) computation are also introduced. Its effi…
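As an illustration of the idea sketched in the abstract, below is a minimal, hypothetical PyTorch sketch, not the authors' implementation: a mean-field Gaussian variational approximation whose draws are refined by a few differentiable Langevin steps, with the sampler step size tuned by automatic differentiation. The toy target and the entropy approximation (reusing the entropy of the initial Gaussian in place of the intractable entropy of the refined distribution) are assumptions for illustration only, not necessarily one of the ELBO strategies the paper proposes.

```python
# Hypothetical sketch of a "refined variational approximation": sample from a
# Gaussian variational posterior, refine the sample with a few Langevin steps
# toward the target, and tune both the variational parameters and the sampler
# step size by gradient ascent on an approximate ELBO via autodiff.
import math
import torch

torch.manual_seed(0)

def log_posterior(z):
    # Toy unnormalised log target: a correlated 2-D Gaussian (assumption).
    prec = torch.tensor([[2.0, 0.6], [0.6, 1.0]])
    return -0.5 * ((z @ prec) * z).sum(-1)

# Mean-field Gaussian variational parameters and a learnable Langevin step size.
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
log_eps = torch.tensor(-3.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma, log_eps], lr=0.05)

for it in range(500):
    sigma = log_sigma.exp()
    z = mu + sigma * torch.randn(128, 2)          # reparameterised draws from q
    eps = log_eps.exp()
    for _ in range(5):                            # differentiable Langevin refinement
        grad = torch.autograd.grad(log_posterior(z).sum(), z, create_graph=True)[0]
        z = z + eps * grad + torch.sqrt(2 * eps) * torch.randn_like(z)
    # Crude lower-bound estimate: the entropy of the *initial* Gaussian stands in
    # for the intractable entropy of the refined distribution (assumption).
    entropy = log_sigma.sum() + 0.5 * 2 * (1 + math.log(2 * math.pi))
    elbo = log_posterior(z).mean() + entropy
    opt.zero_grad()
    (-elbo).backward()
    opt.step()
```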

Cited by 5 publications (5 citation statements)
References 25 publications
“…The first is stochastic-gradient Markov chain Monte Carlo (SG-MCMC) methods, which use an estimate of the gradient plus some adequately sampled noise to explore the posterior distribution [64][65][66]. On the other hand, variational Bayes approaches approximate the posterior distribution with a simpler, tractable distribution, such as a Gaussian, by solving an optimisation problem to get the best approximation [67][68][69].…”
Section: Computational Issues
confidence: 99%
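For concreteness, here is a minimal SGLD sketch of the SG-MCMC idea described in the excerpt above, for a toy posterior over the mean of a Gaussian with known unit variance. The synthetic data, prior, step size, and batch size are illustrative assumptions, not values taken from the cited works.

```python
# Minimal stochastic-gradient Langevin dynamics (SGLD) sketch: mini-batch
# gradients of the log posterior are perturbed with Gaussian noise scaled to
# the step size, so the iterates explore the posterior instead of collapsing
# to the MAP estimate.
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=1.5, scale=1.0, size=10_000)   # synthetic observations
N, batch, eps = data.size, 100, 1e-4

theta, samples = 0.0, []
for t in range(5_000):
    xb = rng.choice(data, size=batch, replace=False)
    # Unbiased gradient estimate: N(0, 10^2) prior plus rescaled likelihood term.
    grad = -theta / 10.0**2 + (N / batch) * np.sum(xb - theta)
    theta += 0.5 * eps * grad + rng.normal(scale=np.sqrt(eps))
    samples.append(theta)

print(np.mean(samples[1000:]), np.std(samples[1000:]))  # ≈ posterior mean / sd
```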
“…which is referred to as the Evidence Lower Bound Objective (ELBO) [17]. In the case of GMM, where the search model is a multivariate normal distribution, (6) becomes:…”
Section: Gaussian Mixture Models with Variational Inference
confidence: 99%
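For reference, the standard ELBO the excerpt refers to can be written in generic notation as follows (the GMM-specific form of their Eq. (6) is not reproduced here):

```latex
\log p(x) \;\ge\; \mathcal{L}(q)
\;=\; \mathbb{E}_{q(z)}\!\left[\log p(x, z)\right] - \mathbb{E}_{q(z)}\!\left[\log q(z)\right]
\;=\; \mathbb{E}_{q(z)}\!\left[\log p(x \mid z)\right] - \mathrm{KL}\!\left(q(z)\,\|\,p(z)\right).
```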
“…Due to its Bayesian rationale, VI needs more hyperparameters than EM, the most important of these being the concentration prior of the BGMM [17][18][19]. A common practice is to define the BGMM's prior structure according to a Dirichlet process mixture with concentration (or gamma) values equal to the inverse of the number of Gaussian components [17][18][19]. However, in that case, a small weight concentration value with many components would make the model put most of the weights close to zero [17][18][19].…”
Section: Proposed BGMM Approach
confidence: 99%
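The practice described in the excerpt above can be illustrated with scikit-learn's variational BayesianGaussianMixture, assumed here to be an acceptable stand-in for the cited BGMM: a Dirichlet-process weight prior whose concentration is set to the inverse of the number of components.

```python
# Illustrative BGMM fit with a Dirichlet-process weight prior and the
# "concentration = 1 / n_components" heuristic mentioned in the excerpt.
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([
    rng.normal(loc=-3.0, scale=0.5, size=(300, 2)),
    rng.normal(loc=+3.0, scale=0.8, size=(300, 2)),
])  # two true clusters

n_components = 10  # deliberately over-specified
bgmm = BayesianGaussianMixture(
    n_components=n_components,
    weight_concentration_prior_type="dirichlet_process",
    weight_concentration_prior=1.0 / n_components,  # inverse-of-K heuristic
    max_iter=500,
    random_state=0,
).fit(X)

print(np.round(bgmm.weights_, 3))  # most weights shrink toward zero
```

With the concentration set this low and ten components for only two true clusters, most of the fitted weights are driven close to zero, which is precisely the pruning behaviour (and the potential pitfall) the excerpt describes.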
“…Recent advances on VAEs focus both on theoretical aspects, such as the improvement of the stochastic inference approach [22], and on architectural aspects, such as the use of different types of latent variables to learn local and global structures [23], the definition of hierarchical schemes [24], or the use of a multi-agent generator [25]. Among the most recent VAE models, we focus on the quaternion-valued variational autoencoder (QVAE), which exploits the properties of quaternion algebra to improve performance on the one hand and to significantly reduce the overall number of network parameters [26] on the other.…”
Section: Introduction
confidence: 99%