2016 IEEE Statistical Signal Processing Workshop (SSP)
DOI: 10.1109/ssp.2016.7551743

A partially collapsed Gibbs sampler with accelerated convergence for EEG source localization

Abstract: This paper addresses the problem of designing efficient sampling moves in order to accelerate the convergence of MCMC methods. The partially collapsed Gibbs sampler (PCGS) takes advantage of variable reordering, marginalization, and trimming to accelerate the convergence of the traditional Gibbs sampler. This work studies two specific moves that allow the convergence of the PCGS to be improved further. It considers a Bayesian model where structured sparsity is enforced using a multivariate Bernoulli-Laplacian …
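To make the marginalization idea in the abstract concrete, here is a minimal, hypothetical sketch comparing a plain Gibbs sweep with a partially collapsed move on a toy bivariate Gaussian. The model, the correlation `rho`, and the function names are illustrative assumptions, not the paper's Bernoulli-Laplacian EEG model.

```python
# A minimal sketch of plain Gibbs vs. a partially collapsed Gibbs (PCGS)
# move on a toy bivariate Gaussian with correlation rho. Illustrative
# assumption only; not the Bernoulli-Laplacian EEG model of the paper.
import numpy as np

rho = 0.99          # strong coupling: plain Gibbs mixes slowly
n_iter = 5000
rng = np.random.default_rng(0)

def plain_gibbs():
    x, y = 0.0, 0.0
    xs = np.empty(n_iter)
    for t in range(n_iter):
        # Full conditionals of a standard bivariate normal.
        x = rng.normal(rho * y, np.sqrt(1 - rho**2))
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        xs[t] = x
    return xs

def pcgs():
    xs = np.empty(n_iter)
    for t in range(n_iter):
        # Marginalization move: draw x from p(x), with y integrated out.
        x = rng.normal(0.0, 1.0)
        # y must be drawn AFTER x, from the full conditional p(y | x),
        # so that the joint stationary distribution is preserved.
        y = rng.normal(rho * x, np.sqrt(1 - rho**2))
        xs[t] = x
    return xs

# Lag-1 autocorrelation of x: close to rho**2 for plain Gibbs, ~0 for PCGS.
for name, xs in [("Gibbs", plain_gibbs()), ("PCGS", pcgs())]:
    ac1 = np.corrcoef(xs[:-1], xs[1:])[0, 1]
    print(f"{name}: lag-1 autocorrelation = {ac1:.3f}")
```

Collapsing y out of the x-update removes the coupling that slows the plain sweep, which is the basic mechanism the PCGS exploits.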

Cited by 2 publications (2 citation statements). References 12 publications.
“…Although this strategy can significantly decrease the complexity of the sampling process, it must be implemented with care to guarantee that the desired stationary distribution is preserved. Applications of PCGS algorithms can be found in [66][67][68].…”
Section: Alternative II: Eliminate the Coupling Induced by H D(σ^(T)) H
Mentioning, confidence: 99%
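The caveat in this statement can be seen in a small, hypothetical experiment on the same toy bivariate Gaussian as above (again an assumption, not the authors' model): if the collapsed variable y is drawn from its conditional on a stale value of x rather than the newly drawn one, the chain keeps the correct marginals but no longer targets the desired joint.

```python
# A sketch of why PCGS moves must be combined with care: drawing the
# collapsed variable y conditioned on a STALE x (before x's marginal
# draw) preserves the marginals but destroys the joint stationary
# distribution. Same illustrative bivariate Gaussian as above.
import numpy as np

rho, n_iter = 0.99, 100_000
rng = np.random.default_rng(1)

def run(valid):
    x, y = 0.0, 0.0
    xy = np.empty((n_iter, 2))
    for t in range(n_iter):
        if not valid:
            y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # conditions on stale x
        x = rng.normal(0.0, 1.0)                          # x ~ p(x), y collapsed
        if valid:
            y = rng.normal(rho * x, np.sqrt(1 - rho**2))  # y ~ p(y | new x)
        xy[t] = x, y
    return np.corrcoef(xy.T)[0, 1]

print(f"valid order:   corr(x, y) = {run(True):.3f}   (target {rho})")
print(f"invalid order: corr(x, y) = {run(False):.3f}   (joint is wrong)")
```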
“…Note that, to ensure the Markov chains faithfully target $I\{H[T^{-1}(\mathbf{Z}_{\mathrm{U}}), Z, u, \gamma] \geq 0\}\,P(Z)\,\phi(Z_{\mathrm{U}})$, the selected marginalised-out variable(s) must always be sampled after the variables that follow the reduced marginal distribution, and before any variables conditioned on them. Although this mechanism provably accelerates Markov chain mixing, performance decay is still observed in high-dimensional applications; convincing evidence can be found in [35]. In a nutshell, the limited variety of high-dimensional training vector samples is bound to lock the chain locally; equivalently, the obtained Markov chains behave as if they were not irreducible within the limited sample pool.…”
Section: Methodologies
Mentioning, confidence: 99%
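The scan-order rule quoted in this statement can be made runnable on a hypothetical three-variable Gaussian chain, theta ~ N(0,1), y | theta ~ N(theta, 1), z | y ~ N(y, 1), chosen purely so every reduced and full conditional has a closed form; it stands in for neither the model of [35] nor the paper's EEG posterior. The collapsed variable y is drawn after the reduced step for theta and before z, which is conditioned on it.

```python
# Runnable sketch of the PCGS scan-order rule on a hypothetical Gaussian
# chain: theta ~ N(0,1), y | theta ~ N(theta,1), z | y ~ N(y,1).
import numpy as np

rng = np.random.default_rng(2)
theta = y = z = 0.0
draws = np.empty((100_000, 3))
for t in range(len(draws)):
    # 1. Reduced step: y is integrated out of theta's update. Marginally
    #    z | theta ~ N(theta, 2), so theta | z ~ N(z/3, 2/3).
    theta = rng.normal(z / 3, np.sqrt(2 / 3))
    # 2. The marginalised variable y is sampled AFTER the reduced step:
    #    y | theta, z ~ N((theta + z)/2, 1/2).
    y = rng.normal((theta + z) / 2, np.sqrt(0.5))
    # 3. ... and BEFORE any variable conditioned on it: z | y ~ N(y, 1).
    z = rng.normal(y, 1.0)
    draws[t] = theta, y, z

# Sanity check against the exact joint: Var(theta)=1, Var(y)=2, Var(z)=3.
print(np.var(draws, axis=0))
```

Steps 1 and 2 together draw (theta, y) exactly from p(theta, y | z), so the scan is a valid blocked Gibbs sampler and the target joint is preserved; swapping steps 1 and 2 would violate the quoted rule.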