2010
DOI: 10.1109/tsp.2010.2055562
A Hierarchical Bayesian Model for Frame Representation

Abstract: In many signal processing problems, it may be fruitful to represent the signal under study in a frame. If a probabilistic approach is adopted, it becomes then necessary to estimate the hyper-parameters characterizing the probability distribution of the frame coefficients. This problem is difficult since in general the frame synthesis operator is not bijective. Consequently, the frame coefficients are not directly …
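The non-bijectivity mentioned in the abstract can be illustrated numerically: a redundant frame has more vectors than the signal dimension, so its synthesis operator has a nontrivial null space and distinct coefficient vectors produce the same signal. A minimal sketch, using a random overcomplete matrix as a stand-in for a frame (not the paper's construction):

```python
# Minimal sketch: for a redundant frame the synthesis operator has a
# nontrivial null space, so the coefficients cannot be read off from the
# observed signal. The random matrix F below is an illustrative stand-in.
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(0)
N, K = 8, 12                      # signal dimension N, number of frame vectors K > N
F = rng.standard_normal((N, K))   # synthesis operator (columns = frame vectors)

z1 = rng.standard_normal(K)       # one set of frame coefficients
w = null_space(F)[:, 0]           # a direction in the null space of F
z2 = z1 + 3.0 * w                 # a different coefficient vector ...

x1, x2 = F @ z1, F @ z2           # ... that synthesizes the same signal
print(np.allclose(x1, x2))        # True: x does not determine z uniquely
```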

Cited by 30 publications (22 citation statements) · References 59 publications
“…Other common choices can be found for instance in [3,4]. Moreover, Ψ(V·) is related to some prior knowledge one can have about x, and V ∈ ℝ^{M×N} is a linear transform that can describe, for example, a frame analysis [5] or a discrete gradient operator [6]. Within a Bayesian framework, it is related to a prior distribution of density p(x) whose logarithm is given by log p(x) = −Ψ(Vx).…”
Section: Introduction
confidence: 99%
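To make the roles of Ψ and V concrete, the sketch below evaluates −log p(x) = Ψ(Vx) for one common choice mentioned in the excerpt, a 1-D discrete gradient operator combined with an ℓ1 penalty; both choices (and the weight `lam`) are illustrative assumptions, not the specific priors of the cited works.

```python
# Minimal sketch, assuming Psi(u) = lam * ||u||_1 and V = 1-D discrete gradient,
# so that -log p(x) = Psi(Vx) up to an additive normalization constant.
import numpy as np

def discrete_gradient(N):
    """Finite-difference operator V in R^{(N-1) x N}: (Vx)_i = x_{i+1} - x_i."""
    V = np.zeros((N - 1, N))
    idx = np.arange(N - 1)
    V[idx, idx] = -1.0
    V[idx, idx + 1] = 1.0
    return V

def neg_log_prior(x, V, lam=1.0):
    """-log p(x) = Psi(Vx) with Psi(u) = lam * ||u||_1 (Laplacian-type prior on Vx)."""
    return lam * np.sum(np.abs(V @ x))

x = np.array([0.0, 0.0, 1.0, 1.0, 1.0, 0.0])
V = discrete_gradient(x.size)
print(neg_log_prior(x, V))   # piecewise-constant signals receive a low penalty
```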
“…This also raises the question of the computation of the proximity operators associated with the different functions involved in the criterion. Various strategies were proposed in order to address the first question [26,27,28,29,30], but the computational cost of these methods is often high, especially when several regularization parameters have to be set. Alternatively, it has been recognized for a long time that incorporating constraints directly on the solutions [31,32,33,34,35] instead of considering regularized functions may often facilitate the choice of the involved parameters.…”
Section: Introduction
confidence: 99%
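As a concrete illustration of the two formulations contrasted in this excerpt, the sketch below gives the proximity operator of a weighted ℓ1 norm (regularized case, soft thresholding) and of the indicator function of an ℓ∞ ball (constrained case, a projection). These are standard textbook examples chosen for brevity, not the specific strategies of the works cited above.

```python
# Minimal sketch of two proximity operators: one for a regularization term,
# one for the indicator function of a constraint set.
import numpy as np

def prox_l1(u, gamma):
    """prox of gamma * ||.||_1 at u: soft thresholding (regularized formulation)."""
    return np.sign(u) * np.maximum(np.abs(u) - gamma, 0.0)

def prox_linf_ball(u, radius):
    """prox of the indicator of {z : ||z||_inf <= radius}, i.e. the projection
    onto that set (constrained formulation: a bound can be easier to set
    than a regularization weight)."""
    return np.clip(u, -radius, radius)

u = np.array([-2.0, 0.3, 1.5])
print(prox_l1(u, 0.5))         # [-1.5  0.   1. ]
print(prox_linf_ball(u, 1.0))  # [-1.   0.3  1. ]
```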
“…sparse and clustered. In this subsection, both sparsity and cluster prior are simultaneously modeled through a "spike-and-slab" prior model, also called Bernoulli-Gaussian process [23,24,25], which has been widely used as a sparse promoting prior [26,27,28,29].…”
Section: A Priori Model On Sparsity And Cluster
confidence: 99%
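A Bernoulli-Gaussian (spike-and-slab) coefficient can be drawn by multiplying a Bernoulli support indicator with a Gaussian slab value. The sketch below samples i.i.d. coefficients under assumed parameter names `p` and `sigma`; the clustered (correlated-support) extension discussed in the excerpt is not modeled here.

```python
# Minimal sketch of a Bernoulli-Gaussian ("spike-and-slab") prior: each
# coefficient is exactly zero with probability 1 - p (the spike) and drawn
# from a zero-mean Gaussian slab otherwise. Parameter names are illustrative.
import numpy as np

def sample_bernoulli_gaussian(K, p=0.1, sigma=1.0, rng=None):
    """Draw K i.i.d. spike-and-slab coefficients."""
    rng = np.random.default_rng() if rng is None else rng
    support = rng.random(K) < p              # Bernoulli(p) support indicators
    slab = rng.normal(0.0, sigma, size=K)    # Gaussian slab values
    return support * slab                    # exact zeros off the support

z = sample_bernoulli_gaussian(K=20, p=0.2, sigma=2.0, rng=np.random.default_rng(1))
print(np.count_nonzero(z), "nonzero coefficients out of", z.size)
```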