2021
DOI: 10.1002/wics.1546
Improving the Gibbs sampler

Abstract: The Gibbs sampler is a simple but very powerful algorithm used to simulate from a complex high-dimensional distribution. It is particularly useful in Bayesian analysis when a complex Bayesian model involves a number of model parameters and the conditional posterior distribution of each component given the others can be derived as a standard distribution. In the presence of a strong correlation structure among components, however, the Gibbs sampler can be criticized for its slow convergence. Here we discuss sev…
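The abstract's setting — drawing each component in turn from its full conditional, with slow mixing under strong correlation — can be illustrated with a minimal sketch. The bivariate-normal target, function name, and parameters below are illustrative assumptions, not examples from the paper.

```python
import numpy as np

# Minimal Gibbs sampler for a bivariate standard normal with correlation
# rho (illustrative target, not from the paper). Each full conditional is
# a standard distribution: X | Y=y ~ N(rho*y, 1-rho^2), and symmetrically.
def gibbs_bivariate_normal(rho=0.9, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    x, y = 0.0, 0.0
    samples = np.empty((n_iter, 2))
    sd = np.sqrt(1.0 - rho**2)
    for t in range(n_iter):
        x = rng.normal(rho * y, sd)   # draw from p(x | y)
        y = rng.normal(rho * x, sd)   # draw from p(y | x)
        samples[t] = (x, y)
    return samples

samples = gibbs_bivariate_normal()
```

With rho close to 1 the successive draws change little per sweep, which is exactly the slow convergence the abstract refers to; the remedies discussed in the paper (e.g., reparameterization or blocking) target this regime.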

Cited by 7 publications (5 citation statements)
References 33 publications (41 reference statements)
“…One key algorithm within the MCMC method is the Gibbs Sampling technique, as explained by [24] [25]. Gibbs Sampling simplifies complex calculations by generating random variables from the marginal distribution without the need for density calculations [26] [27]. It focuses on identifying the univariate conditional distribution, involving only one variable to be determined [28][29].…”
Section: The Markov Chain Monte Carlo (mentioning confidence: 99%)
“…LDA is used to estimate the document-topic distribution P(θ|d) and the topic-term distribution P(t|θ) from an unlabeled corpus of documents. The Gibbs sampler [19] iterates many times over each word t_i in a document d_i, sampling a new topic j conditioned on the current counts. C_tθ represents the topic-term counts, C_Dθ the document-topic counts, T all topic assignments, and θ_-i all topic assignments other than the one for word i.…”
Section: Medium Level Event Detection (mentioning confidence: 99%)
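The quoted update can be sketched as one sweep of collapsed Gibbs sampling for LDA: each word's topic assignment is removed from the counts, resampled from its full conditional, and added back. The function name, count-matrix layout, toy hyperparameters, and corpus below are assumptions for illustration, not the cited implementation.

```python
import numpy as np

# One collapsed-Gibbs sweep for LDA (illustrative sketch). docs is a list
# of documents, each a list of word ids; z holds the current topic
# assignment of every word. alpha/beta are Dirichlet smoothing priors.
def gibbs_sweep(docs, z, n_topics, n_vocab, alpha=0.1, beta=0.01, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    # Rebuild smoothed count matrices from the current assignments.
    C_tt = np.zeros((n_topics, n_vocab)) + beta      # topic-term counts
    C_dt = np.zeros((len(docs), n_topics)) + alpha   # document-topic counts
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            C_tt[z[d][i], w] += 1
            C_dt[d, z[d][i]] += 1
    for d, doc in enumerate(docs):
        for i, w in enumerate(doc):
            j = z[d][i]
            C_tt[j, w] -= 1
            C_dt[d, j] -= 1                 # remove word i's assignment
            # Full conditional P(z_i = j | z_-i, w), up to a constant:
            # (doc-topic count) * (topic-term count) / (topic total).
            p = C_dt[d] * C_tt[:, w] / C_tt.sum(axis=1)
            j = rng.choice(n_topics, p=p / p.sum())
            C_tt[j, w] += 1
            C_dt[d, j] += 1                 # record the new assignment
            z[d][i] = j
    return z
```

Running several such sweeps and then normalizing C_dt and C_tt row-wise yields the estimates of P(θ|d) and P(t|θ) the quoted passage describes.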
“…This can mostly be done by constructing a Gibbs sampling algorithm that takes a blocking strategy into consideration [303]. A general rule is that the convergence of the Gibbs sampler can be improved by grouping correlated latent variables into a single parameter block that is sampled as a whole [304]. On the other hand, in the task of sampling from a non-closed-form distribution, a naive MH algorithm [180] requires the specification of a proposal density, which can be problematic in developing software.…”
Section: Software Development (mentioning confidence: 99%)
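The blocking rule quoted above — group correlated latent variables and sample the group as one block — can be sketched on a toy trivariate normal in which the strongly correlated pair (x1, x2) is drawn jointly given x3. The covariance matrix and all names are assumed for illustration; both conditionals are exact, so the chain still targets N(0, Sigma).

```python
import numpy as np

# Blocked Gibbs on a zero-mean trivariate normal (assumed toy Sigma):
# (x1, x2) are highly correlated, so they form one block sampled jointly,
# while x3 gets a scalar update.
rng = np.random.default_rng(1)
Sigma = np.array([[1.0, 0.95, 0.3],
                  [0.95, 1.0, 0.3],
                  [0.3, 0.3, 1.0]])
A = Sigma[:2, :2]          # Cov of the (x1, x2) block
b = Sigma[:2, 2:]          # cross-covariance with x3
c = Sigma[2, 2]            # Var(x3)
cond_cov = A - b @ b.T / c               # Schur complement: Cov(x12 | x3)
L = np.linalg.cholesky(cond_cov)

x12, x3 = np.zeros(2), 0.0
samples = []
for _ in range(3000):
    # Block update: draw (x1, x2) jointly from p(x1, x2 | x3).
    mean = (b[:, 0] / c) * x3
    x12 = mean + L @ rng.standard_normal(2)
    # Scalar update: x3 | x1, x2 is univariate normal.
    m = Sigma[2, :2] @ np.linalg.solve(A, x12)
    v = c - Sigma[2, :2] @ np.linalg.solve(A, b[:, 0])
    x3 = rng.normal(m, np.sqrt(v))
    samples.append([*x12, x3])
samples = np.array(samples)
```

A componentwise sampler would update x1 and x2 one at a time and crawl along their 0.95-correlation ridge; drawing the pair as a block removes that bottleneck, which is the rule of [304] in miniature.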