1998
DOI: 10.1111/1467-9868.00151

Wavelet Thresholding via a Bayesian Approach

Abstract: We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansion that is common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a re…
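The thresholding behaviour described in the abstract can be illustrated numerically. The sketch below is a minimal illustration, assuming a mixture-of-a-normal-and-a-point-mass prior of the kind the abstract describes; the hyperparameter names sigma, tau, and pi0 are placeholders, not the paper's notation. It computes the posterior median of a single coefficient observed with Gaussian noise:

```python
import numpy as np
from scipy.stats import norm

def posterior_median(x, sigma=1.0, tau=2.0, pi0=0.5):
    """Posterior median of theta under the mixture prior
    theta ~ pi0 * N(0, tau^2) + (1 - pi0) * delta_0,
    with observation x | theta ~ N(theta, sigma^2).
    """
    s2, t2 = sigma**2, tau**2
    # Posterior probability that theta came from the normal ("slab") component
    slab = norm.pdf(x, scale=np.sqrt(s2 + t2))
    spike = norm.pdf(x, scale=sigma)
    p = pi0 * slab / (pi0 * slab + (1 - pi0) * spike)
    m = x * t2 / (t2 + s2)            # posterior mean given the slab
    s = np.sqrt(s2 * t2 / (s2 + t2))  # posterior sd given the slab
    # Posterior CDF: F(t) = (1 - p) * 1{t >= 0} + p * Phi((t - m) / s).
    # The median is exactly 0 when the point mass straddles the 0.5 level.
    below_zero = p * norm.cdf(-m / s)  # posterior mass strictly below 0
    if below_zero <= 0.5 <= below_zero + (1 - p):
        return 0.0
    if below_zero > 0.5:               # median lies below zero
        return m + s * norm.ppf(0.5 / p)
    return m + s * norm.ppf((0.5 - (1 - p)) / p)  # median lies above zero

# Small observations are thresholded to exactly zero; large ones are shrunk.
for x in (0.5, 1.5, 4.0):
    print(x, posterior_median(x))
```

Because the posterior puts a point mass at zero, the median vanishes for all sufficiently small |x|, which is precisely why the posterior median acts as a thresholding rule rather than a smooth shrinker.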

Cited by 431 publications (435 citation statements); references 34 publications.

Citation statements (ordered by relevance):
“…Standard wavelet thresholding [10] treats coefficients with magnitudes below a certain threshold as "non-significant" and sets them to zero; the remaining "significant" coefficients are kept unmodified (hard thresholding) or reduced in magnitude (soft thresholding). Shrinkage estimators can also result from a Bayesian approach [11–29], which imposes a prior distribution on the noise-free data. Common priors for noise-free data include (generalized) Laplacians [11, 18, 21], alpha-stable models [20], doubly stochastic (Gaussian scale mixture) models [24, 25], and mixtures of two distributions [13–17], where one distribution models the statistics of "significant" coefficients and the other models the statistics of "insignificant" data.…”
Section: Introduction (mentioning; confidence 99%)
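A minimal sketch of the two standard rules mentioned in this excerpt (Python; the coefficient values are made up for illustration):

```python
import numpy as np

def hard_threshold(w, t):
    """Hard thresholding: keep coefficients with |w| > t unmodified, zero the rest."""
    return np.where(np.abs(w) > t, w, 0.0)

def soft_threshold(w, t):
    """Soft thresholding: zero small coefficients, reduce the rest in magnitude by t."""
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

w = np.array([-3.2, 0.4, 0.1, 2.7, -0.8])  # illustrative noisy wavelet coefficients
print(hard_threshold(w, 1.0))  # -3.2 and 2.7 survive unmodified; the rest become 0
print(soft_threshold(w, 1.0))  # -3.2 and 2.7 are shrunk to -2.2 and 1.7; the rest become 0
```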
“…This property makes the procedure useful for modeling functions with many local features like peaks. References on wavelet regression can be found in Chapters 6 and 8 of Vidakovic (1999), and in Donoho and Johnstone (1995), Chipman, Kolaczyk, and McCulloch (1997), Vidakovic (1998), Abramovich, Sapatinas, and Silverman (1998), Clyde, Parmigiani, and Vidakovic (1998), and Clyde and George (2000).…”
Section: Wavelets and Wavelet Regression (mentioning; confidence 99%)
“…Specifically, the prior for $B^*_{ijk}$, the wavelet coefficient at scale $j$ and location $k$ for fixed effect function $i$, was the spike-slab prior $B^*_{ijk} = \gamma_{ijk}\,N(0, \tau_{ij}) + (1 - \gamma_{ijk})\,\delta_0$, with $\gamma_{ijk} \sim \mathrm{Bernoulli}(\pi_{ij})$ and $\delta_0$ a point mass at zero. This prior is commonly used in Bayesian implementations of wavelet regression, including Clyde, Parmigiani, and Vidakovic (1998) and Abramovich, Sapatinas, and Silverman (1998). Use of this mixture prior causes the posterior mean estimates of the $B^*_{ijk}$ to be nonlinearly shrunk towards zero, which results in adaptively regularized estimates of the fixed effect functions.…”
Section: Wavelet-Based Modeling of Functional Mixed Model (mentioning; confidence 99%)
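As an illustration of how such a spike-slab prior produces the nonlinear shrinkage this excerpt describes, the following sketch computes the posterior inclusion probability and posterior mean for a single coefficient under the generic model $B \sim \pi_0\,N(0, \tau^2) + (1 - \pi_0)\,\delta_0$ with Gaussian noise; the hyperparameter values are placeholders, not those of the cited papers:

```python
import numpy as np

def spike_slab_posterior(x, sigma2=1.0, tau2=4.0, pi0=0.5):
    """Posterior under B ~ pi0*N(0, tau2) + (1 - pi0)*delta_0, x | B ~ N(B, sigma2).

    Returns the posterior probability that B came from the slab and the
    posterior mean, which shrinks x toward zero nonlinearly.
    """
    # Marginal likelihoods of x under the slab and spike components
    slab = np.exp(-x**2 / (2 * (sigma2 + tau2))) / np.sqrt(2 * np.pi * (sigma2 + tau2))
    spike = np.exp(-x**2 / (2 * sigma2)) / np.sqrt(2 * np.pi * sigma2)
    p = pi0 * slab / (pi0 * slab + (1 - pi0) * spike)  # P(slab | x)
    post_mean = p * (tau2 / (tau2 + sigma2)) * x       # E[B | x]
    return p, post_mean

# Small |x|: the posterior mean is pulled almost to zero; large |x|: nearly unshrunk.
for x in (0.3, 1.0, 4.0):
    p, m = spike_slab_posterior(x)
    print(f"x={x:4.1f}  P(slab|x)={p:.3f}  posterior mean={m:.3f}")
```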
“…For example, when applied to the problem of compression, the entropy of the distributions described above is significantly less than that of a Gaussian with the same variance, and this leads directly to gains in coding efficiency. In denoising, the use of this model as a prior density for images yields significant improvements over the Gaussian model [e.g., 48, 11, 2, 34, 47]. Consider again the problem of removing additive Gaussian white noise from an image.…”
Section: The Gaussian Model (mentioning; confidence 99%)
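To make the contrast with the Gaussian model concrete: under a Gaussian prior the Bayes rule is linear (Wiener) shrinkage, whereas a heavy-tailed Laplacian prior gives a MAP rule that is exactly soft thresholding. A minimal sketch under those standard textbook assumptions (parameter names are illustrative):

```python
import numpy as np

def wiener_shrink(x, sigma2, tau2):
    """Bayes rule under a Gaussian N(0, tau2) prior: the same linear
    shrinkage factor applies to every coefficient, large or small."""
    return (tau2 / (tau2 + sigma2)) * x

def laplacian_map(x, sigma2, b):
    """MAP rule under a Laplacian prior with scale b: soft thresholding
    at sigma2 / b, so small coefficients are set exactly to zero."""
    t = sigma2 / b
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

x = np.array([0.2, 0.8, 3.0])
print(wiener_shrink(x, sigma2=1.0, tau2=4.0))  # every entry scaled by 0.8
print(laplacian_map(x, sigma2=1.0, b=1.0))     # [0.  0.  2.]
```

The difference in behaviour on small coefficients, which a linear rule merely scales but a heavy-tailed prior zeros out, is the source of the denoising gains the excerpt refers to.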