2005
DOI: 10.1073/pnas.0502269102
Sparse nonnegative solution of underdetermined linear equations by linear programming

Abstract: Consider an underdetermined system of linear equations y = Ax with known y and d × n matrix A. We seek the nonnegative x with the fewest nonzeros satisfying y = Ax. In general, this problem is NP-hard. However, for many matrices A there is a threshold phenomenon: if the sparsest solution is sufficiently sparse, it can be found by linear programming. We explain this by the theory of convex polytopes. Let aj denote the jth column of A, 1 ≤ j ≤ n, let a0 = 0 and P denote the convex hull of the aj. We say th…
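The relaxation described in the abstract — replacing the NP-hard "fewest nonzeros" objective with a linear program over nonnegative solutions — can be sketched as follows. The matrix sizes, the Gaussian random construction of A, and the sparsity level are illustrative assumptions, not taken from the paper:

```python
import numpy as np
from scipy.optimize import linprog

# Among all nonnegative x solving the underdetermined system y = Ax,
# minimize sum(x) (the l1 norm, since x >= 0) via linear programming.
rng = np.random.default_rng(0)
d, n, k = 8, 20, 2                 # d x n system, k-sparse ground truth (assumed sizes)
A = rng.standard_normal((d, n))
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)
y = A @ x_true

# linprog solves: min c^T x  s.t.  A_eq x = b_eq,  with bounds x >= 0
res = linprog(c=np.ones(n), A_eq=A, b_eq=y, bounds=(0, None))
x_hat = res.x                      # candidate sparse nonnegative solution
```

When the sparsest solution is sufficiently sparse relative to d and n — the threshold phenomenon the paper analyzes via convex polytopes — the LP minimizer coincides with it; for other regimes the LP may return a denser feasible point.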

Cited by 464 publications (473 citation statements). References 16 publications.
“…We remark here, the authors of [9] proved that the following linear programming problem gives the sparsest representation available by dyadic intervals:…”
Section: Theoretical Analysis
confidence: 99%
“…Sparse estimation techniques in general and the ℓ 1 penalization approach we use here in particular have also received a lot of attention in various forms: variable selection using the LASSO (see Tibshirani (1996)), sparse signal representation using basis pursuit by Chen, Donoho & Saunders (2001), compressed sensing (see Donoho & Tanner (2005) and Candès & Tao (2005)) or covariance selection (see Banerjee, Ghaoui & d'Aspremont (2007)), to cite only a few examples. A recent stream of works on the asymptotic consistency of these procedures can be found in Meinshausen & Yu (2007), Candes & Tao (2007), Banerjee et al (2007), Yuan & Lin (2007) or Rothman, Bickel, Levina & Zhu (2007) among others.…”
Section: Introduction
confidence: 99%
“…Previous attempts at predicting the performance of SRU algorithms [3] have relied on theoretical results linking the likelihood of obtaining useful sparse representations of mixed signals to the low degree of coherence (i.e., the largest normalized correlation) between the columns of the dictionary matrix and the high degree of sparseness of the abundance vectors (e.g., [7,8]). While coherence can be easily computed, it provides pessimistic guarantees in most cases due to the fact that such guarantees consider all sparse signals.…”
Section: Introduction
confidence: 99%