2011
DOI: 10.1016/j.sigpro.2010.08.009

Enhanced sampling schemes for MCMC based blind Bernoulli–Gaussian deconvolution

Cited by 35 publications (72 citation statements) · References 11 publications
“…Most Bayesian BD approaches exploiting sparsity use a Bernoulli-Gaussian prior for the sparse sequence a, i.e., the b_k are independent and Bernoulli distributed and the nonzero a_k are Gaussian distributed [7], [8], [23]-[26]. However, here we will use a modified Bernoulli-Gaussian prior incorporating a hard minimum distance constraint that requires the temporal distance between any two nonzero a_k (or two nonzero indicators b_k = 1) to be no smaller than some prescribed minimum distance.…”
Section: Introduction
confidence: 99%
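The constrained prior described in this excerpt can be sketched in a few lines. The snippet below draws a Bernoulli-Gaussian sequence while enforcing a minimum index distance between nonzero entries; note that the sequential left-to-right enforcement used here is only an illustrative assumption and changes the marginal activation probability relative to a jointly constrained prior (all names and parameter values are hypothetical, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_bg(K, p, sigma_a, d_min=0):
    """Draw a Bernoulli-Gaussian sequence of length K, enforcing a
    minimum distance d_min between nonzero indicators (sketch only)."""
    b = np.zeros(K, dtype=int)
    last = -np.inf  # index of the most recent nonzero indicator
    for k in range(K):
        if k - last >= d_min and rng.random() < p:
            b[k] = 1
            last = k
    # Nonzero amplitudes are Gaussian; zeros stay exactly zero.
    a = b * rng.normal(0.0, sigma_a, size=K)
    return b, a

b, a = sample_bg(K=50, p=0.2, sigma_a=1.0, d_min=5)
```

With `d_min=0` this reduces to the standard independent Bernoulli-Gaussian prior mentioned in the first sentence of the excerpt.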
“…The Gibbs sampler is a simple and widely used MCMC method with interesting properties for BD [9], [24], [26], [31]; however, it is computationally inefficient when there are strong dependencies among the parameters [23], [32]. Such dependencies are caused by our minimum distance constraint, since a nonzero indicator b_k forces all indicators within a certain neighborhood to be zero.…”
Section: Introduction
confidence: 99%
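For context, the plain single-site Gibbs sampler criticized here can be sketched for a toy unconstrained Bernoulli-Gaussian deconvolution model: each coordinate's indicator is sampled from its marginal posterior odds and, if active, the amplitude from its Gaussian conditional. This is a minimal sketch under assumed model sizes and hyperparameters, not the paper's enhanced scheme:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model y = H @ a + noise with a Bernoulli-Gaussian prior on a.
# All dimensions and hyperparameters below are illustrative assumptions.
K, N = 30, 40
H = rng.normal(size=(N, K))
lam, sa2, sn2 = 0.2, 1.0, 0.1          # activation prob., signal / noise variances
a_true = (rng.random(K) < lam) * rng.normal(0.0, np.sqrt(sa2), K)
y = H @ a_true + rng.normal(0.0, np.sqrt(sn2), N)

a = np.zeros(K)
for _ in range(200):                    # Gibbs sweeps
    for k in range(K):
        hk = H[:, k]
        r = y - H @ a + hk * a[k]       # residual with component k removed
        s2 = 1.0 / (hk @ hk / sn2 + 1.0 / sa2)   # posterior variance of a_k
        mu = s2 * (hk @ r) / sn2                  # posterior mean of a_k
        # Posterior log-odds of b_k = 1 vs. 0, with a_k marginalized out:
        log_odds = (np.log(lam / (1 - lam))
                    + 0.5 * np.log(s2 / sa2) + 0.5 * mu**2 / s2)
        if rng.random() < 1.0 / (1.0 + np.exp(-log_odds)):
            a[k] = rng.normal(mu, np.sqrt(s2))
        else:
            a[k] = 0.0
```

Because each `b_k` is updated conditionally on all the others, a hard minimum distance constraint would lock neighboring indicators together and make these one-at-a-time moves mix very slowly, which is exactly the inefficiency the excerpt describes.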
“…Regarding the choice of the prior, a popular approach consists in modeling z as a continuous random variable whose distribution has a sharp peak at zero and heavy tails (e.g., Cauchy [22], Laplace [23], [24], Student's t). Bernoulli-Gaussian (BG) models also exist. A first approach, as considered in [27], [30], [31], [34], consists in assuming that the elements of z are independently drawn from Gaussian distributions whose variances are controlled by Bernoulli variables: a small variance forces elements to be close to zero, whereas a large one defines a non-informative prior on the nonzero coefficients. Another model of z based on BG variables is as follows: the elements of the sparse vector are defined as the product of Gaussian and Bernoulli variables.…”
Section: A. Standard Sparse Representation Algorithms
confidence: 99%
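The two Bernoulli-Gaussian parameterizations contrasted in this excerpt can be put side by side in a short sketch (variable names and values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
K, p, s_big, s_small = 10_000, 0.3, 1.0, 1e-3

# Parameterization 1: variance mixture. Every z is Gaussian, but a Bernoulli
# variable switches its standard deviation between small and large.
b1 = rng.random(K) < p
z1 = rng.normal(0.0, np.where(b1, s_big, s_small))

# Parameterization 2: product form. z is the product of a Bernoulli
# indicator and a Gaussian amplitude, so inactive entries are exactly zero.
b2 = rng.random(K) < p
z2 = b2 * rng.normal(0.0, s_big, K)

# In the variance-mixture model inactive entries are merely near zero;
# as s_small -> 0 the two models coincide.
```

The practical difference is that the product form yields exact zeros (a fraction of about 1 - p of the entries), while the variance-mixture form yields small but nonzero values for inactive coefficients.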
“…After convergence of the procedure defined in (36)–(41), the probabilities q(x_i, s_i) correspond to a mean-field approximation of p(x_i, s_i | y) (see (34)). Coming back to problem (19), an approximation of p(s_i | y)…”
Section: Particularized to Model (5)-(6)-(8)
confidence: 99%
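The per-coordinate posterior that such a mean-field factorization approximates is exactly computable in the scalar case, which may help fix intuition. The sketch below computes P(s = 1 | y) for a single observation y = s·g + n with hypothetical hyperparameters; in the full deconvolution problem this quantity is intractable jointly, and the factorized q(x_i, s_i) of the excerpt approximates it coordinate by coordinate:

```python
import numpy as np

def post_s1(y, p=0.2, sa2=1.0, sn2=0.1):
    """Exact posterior P(s = 1 | y) in the scalar model
    y = s * g + n, with g ~ N(0, sa2), n ~ N(0, sn2), s ~ Bernoulli(p)."""
    def norm_pdf(x, v):
        return np.exp(-0.5 * x**2 / v) / np.sqrt(2.0 * np.pi * v)
    # Marginal likelihoods: y ~ N(0, sa2 + sn2) if s = 1, N(0, sn2) if s = 0.
    num = p * norm_pdf(y, sa2 + sn2)
    den = num + (1 - p) * norm_pdf(y, sn2)
    return num / den
```

A large observation strongly favors an active coefficient, while y near zero favors the spike, mirroring how the approximated p(s_i | y) drives the support estimate in problem (19).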