2016
DOI: 10.1016/j.acha.2016.03.006
Uniform recovery of fusion frame structured sparse signals

Abstract: We consider the problem of recovering fusion frame sparse signals from incomplete measurements. These signals are composed of a small number of nonzero blocks taken from a family of subspaces. First, we show that, by using a priori knowledge of a coherence parameter associated with the angles between the subspaces, one can uniformly recover fusion frame sparse signals with a significantly reduced number of vector-valued (sub-)Gaussian measurements via mixed ℓ1/ℓ2-minimization. We prove this by establishing…
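The mixed ℓ1/ℓ2-minimization mentioned in the abstract can be illustrated with a minimal sketch — not the paper's actual algorithm or fusion frame setup, but a standard group-lasso-style proximal gradient recovery of a block-sparse signal from (sub-)Gaussian measurements. All dimensions and the regularization weight below are hypothetical demo choices.

```python
# Illustrative sketch (not the paper's method): block-sparse recovery via
# mixed l1/l2 ("group lasso") minimization, solved with proximal gradient
# descent (ISTA). Parameters n, b, k, m, lam are arbitrary demo values.
import numpy as np

rng = np.random.default_rng(0)

n, b = 60, 5          # signal length, block size (n/b = 12 blocks)
k = 2                 # number of active blocks
m = 30                # number of measurements

# Build a k-block-sparse ground truth signal.
x_true = np.zeros(n)
active = rng.choice(n // b, size=k, replace=False)
for j in active:
    x_true[j * b:(j + 1) * b] = rng.standard_normal(b)

A = rng.standard_normal((m, n)) / np.sqrt(m)   # Gaussian measurement matrix
y = A @ x_true

def group_soft_threshold(v, t):
    """Prox of t * (sum of block l2 norms): shrink each block toward zero."""
    out = v.copy()
    for j in range(n // b):
        blk = out[j * b:(j + 1) * b]            # view into out, edited in place
        nrm = np.linalg.norm(blk)
        blk *= max(0.0, 1.0 - t / nrm) if nrm > 0 else 0.0
    return out

lam = 0.01
step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1/L for the quadratic term
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - y)
    x = group_soft_threshold(x - step * grad, step * lam)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```

With far fewer measurements than the ambient dimension (m = 30 vs. n = 60), the block structure is what makes recovery possible here: only k·b = 10 degrees of freedom are active.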

Cited by 27 publications (37 citation statements)
References 50 publications (121 reference statements)
“…Thus s ≲ log |Θ| = log binom(n/b, k/b) ≍ (k/b) log(n/k). Then as long as b = ω(log(n/k)), our results imply non-trivial column-sparsity s ≪ m. Ours is the first result yielding non-trivial sparsity in a model-RIP Φ for any model with a number of measurements qualitatively matching the optimal bound (which is of order m ≍ k + (k/b) log(n/k) [ADR14]). We remark that for model-based RIP₁, where one wants to approximately preserve ℓ₁-norms of k-block-sparse vectors, which is useful for ℓ₁/ℓ₁ recovery, [IR13] have shown a much better sparsity bound of O(log_b(n/k)) non-zeroes per column in their measurement matrix.…”
Section: Applications
mentioning
confidence: 61%
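The cardinality bound quoted in the excerpt above — log |Θ| = log binom(n/b, k/b) ≍ (k/b) log(n/k) — is easy to sanity-check numerically. The parameter values in this sketch are arbitrary illustrative choices.

```python
# Numeric sanity check: for b-block-sparse models with k/b active blocks out
# of n/b, log |Theta| = log C(n/b, k/b) grows like (k/b) * log(n/k).
# The values of n, k, b are arbitrary illustrative choices.
import math

n, k, b = 2**20, 2**10, 2**5          # n/b = 32768 blocks, k/b = 32 active
log_theta = math.log(math.comb(n // b, k // b))
estimate = (k / b) * math.log(n / k)

print(log_theta, estimate, log_theta / estimate)  # ratio close to 1
```

The ratio stays within a small constant factor of 1, consistent with the ≍ in the quoted bound (the slack comes from the lower-order terms of Stirling's approximation).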
“…We here assume that K := ∪_{i∈[K]} K_i is a union of K low-dimensional subspaces K_i ⊂ ℝⁿ, i.e., a ULS model. This model encompasses, e.g., sparse signals in an orthonormal basis or in a dictionary [64,52], co-sparse signal models [65], group-sparse signals [27] and model-based sparsity [26].…”
Section: Union Of Low-Dimensional Subspaces
mentioning
confidence: 99%
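As a concrete instance of the ULS model described in the excerpt above, plain k-sparse signals in ℝⁿ form a union of C(n, k) coordinate subspaces, one per support set. A minimal enumeration (with hypothetical n and k):

```python
# Hypothetical example: k-sparse vectors in R^n as a union of C(n, k)
# coordinate subspaces, one subspace per choice of support set.
import math
from itertools import combinations

n, k = 6, 2
supports = list(combinations(range(n), k))  # each support indexes one subspace
print(len(supports))                        # number of subspaces K = C(6, 2)
```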
“…or group-sparse models [27]), and in the case where ξ ∼ 𝒰^m([0, δ]) and (1/√m)Φ is generated from a random matrix distribution (see Def. 3.2) known to generate w.h.p.…” (Footnote 3: Up to the identification of these matrices with their vector representation; see Sec. 4.2.)
mentioning
confidence: 99%
“…This type of sparsity is often called block-sparsity in the signal processing literature (see e.g. [10,2]…”
Section: 2
mentioning
confidence: 99%