2014
DOI: 10.1109/tit.2014.2300092
On the Theorem of Uniform Recovery of Random Sampling Matrices

Abstract: We consider two theorems from the theory of compressive sensing, mainly a theorem concerning uniform recovery of random sampling matrices, where the number of samples needed in order to recover an s-sparse signal from linear measurements (with high probability) is known to be m ≳ s(ln s)³ ln N. We present new and improved constants together with what we consider to be a more explicit proof, one that also allows for a slightly larger class of m × N matrices, by considering what we call low entropy. We also…
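As an illustrative aside (not taken from the paper), the scaling of the bound m ≳ s(ln s)³ ln N can be evaluated numerically. The function name and the constant C below are hypothetical; the paper's actual constants differ:

```python
import math

def samples_needed(s, N, C=1.0):
    """Evaluate the sample-complexity bound m >= C * s * (ln s)^3 * ln N.
    The constant C is illustrative only; the paper derives explicit constants."""
    return math.ceil(C * s * math.log(s) ** 3 * math.log(N))

# How the bound scales for a signal of length N = 10**6:
for s in (10, 50, 100):
    print(f"s = {s:3d}  ->  m >= {samples_needed(s, 10**6)}")
```

The key point of such bounds is that m grows essentially linearly in the sparsity s and only logarithmically in the ambient dimension N.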

Cited by 57 publications (57 citation statements)
References 19 publications
“…To prove that a sufficiently small RIP in levels constant implies an equation of the form (3.2), it is natural to adapt the steps used in [5] to prove Theorem 1.3. This adaptation yields a sufficient condition for recovery even in the noisy case.…”
Section: Results
confidence: 99%
“…We show that the tail result based on H_P(s) provides a tighter upper bound on the largest eigenvalue of a matrix i.d. series than is possible with the Bernstein-type result when the matrix dimension is high. (In general, the tail inequality P{ξ > t} describes the probability characteristics of the event in which the value of a random variable ξ is greater than a given positive constant t. Consequently, the tail inequality provides more useful information in the case of Rt/(ρ(σ² + V)) > 0.8831 than in the case of Rt/(ρ(σ² + V)) ≤ 0.8831.) The results regarding Q(s) and H_P(s) are applicable for any Bennett-type concentration inequality that involves the function Q(s).…”
Section: A. Overview of the Main Results
confidence: 90%
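As background for the quoted statement (and not taken from the cited work): a tail inequality P{ξ > t} upper-bounds the probability that a random variable exceeds t, and such a bound can be sanity-checked by Monte Carlo. A minimal sketch using a Hoeffding-type bound for a sum of bounded variables; all names below are hypothetical:

```python
import math
import random

def empirical_tail(t, k=50, n=20_000, seed=0):
    """Monte Carlo estimate of the tail probability P{xi > t}, where
    xi is a sum of k i.i.d. Uniform(-1, 1) variables (bounded, mean 0)."""
    rng = random.Random(seed)
    hits = sum(
        1 for _ in range(n)
        if sum(rng.uniform(-1.0, 1.0) for _ in range(k)) > t
    )
    return hits / n

def hoeffding_bound(t, k=50, b=1.0):
    """Hoeffding tail bound for a sum of k i.i.d. mean-zero variables
    in [-b, b]: P{sum > t} <= exp(-t**2 / (2 * k * b**2))."""
    return math.exp(-t * t / (2 * k * b * b))

# The empirical tail should sit below the analytic bound:
print(empirical_tail(5.0), "<=", hoeffding_bound(5.0))
```

Tighter concentration inequalities (Bernstein- and Bennett-type, as discussed in the citing work) sharpen this bound by also using variance information.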
“…if ρ = 1 and α > 1/2). Note also that the classical condition on the RIP constant has been improved several times [9,1,7,8,22,23,33]; although a version of Theorem 1 and the main results of this paper can likely be extended to the theory of these works, we do not pursue such refinements here.…”
Section: Existing Results for ℓ1-Minimization and Weighted ℓ1-Minimization
confidence: 99%
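For context on ℓ1-minimization (an illustrative aside, not the method of the cited works): proximal algorithms for the ℓ1-regularized problem rely on the soft-thresholding operator, which solves the one-dimensional subproblem in closed form and is the mechanism by which ℓ1 penalties produce exact zeros. A minimal sketch:

```python
import math

def soft_threshold(y, lam):
    """Soft-thresholding: the proximal operator of lam * ||x||_1.
    Solves min_x 0.5 * (x - v)**2 + lam * |x| coordinate-wise."""
    return [math.copysign(max(abs(v) - lam, 0.0), v) for v in y]

# Each coordinate shrinks toward zero by lam; entries with |v| <= lam
# become exactly zero, which promotes sparsity.
print(soft_threshold([3.0, -0.5, 1.0], 1.0))
```

Weighted ℓ1-minimization, as in the quoted section title, corresponds to using a per-coordinate threshold instead of a single lam.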