2003
DOI: 10.1073/pnas.0437847100
Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ 1 minimization

Abstract: Given a dictionary D = {d_k} of vectors d_k, we seek to represent a signal S as a linear combination S = Σ_k γ(k) d_k, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and has shown that, under a condition of mutual incoherence of the two bases, and assuming that S has a suffi…

Cited by 2,557 publications (1,976 citation statements)
References 15 publications
“…The reason is that even when A is known, the solution of the linear system of equations (1) or (7) is not unique, because there are more unknowns (M) than equations (N). If the pure components are (M-N+1)-sparse, a unique solution is obtained at the minimum of the ℓ1 norm of s [24][25][26][27][30][31][32][33][34]. We could formulate a linear-programming-based solution in the time-scale basis (7).…”
Section: Linear Programming-based Solution of the Underdetermined System
confidence: 99%
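The ℓ1-minimization step quoted above can be posed as a linear program by splitting the unknowns into positive and negative parts. A minimal sketch, assuming a small random system (the matrix, signal, and dimensions below are illustrative, not values from the cited paper):

```python
import numpy as np
from scipy.optimize import linprog

# Basis pursuit: min ||s||_1  subject to  A s = x, recast as a linear
# program by writing s = u - v with u, v >= 0 (a standard construction).
rng = np.random.default_rng(0)
N, M = 8, 20                      # fewer equations (N) than unknowns (M)
A = rng.standard_normal((N, M))
s_true = np.zeros(M)
s_true[[3, 11]] = [1.5, -2.0]     # a 2-sparse ground truth (illustrative)
x = A @ s_true

c = np.ones(2 * M)                # objective: sum(u) + sum(v) = ||s||_1
A_eq = np.hstack([A, -A])         # equality constraint: A u - A v = x
res = linprog(c, A_eq=A_eq, b_eq=x, bounds=[(0, None)] * (2 * M))
s_hat = res.x[:M] - res.x[M:]
print(np.flatnonzero(np.abs(s_hat) > 1e-6))  # ideally the true support
```

With enough equations relative to the sparsity level, the minimum-ℓ1 solution typically coincides with the sparsest one, which is exactly the equivalence the cited references exploit.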
“…The SCA method in [8] solves the BSS problem by finding a de-mixing matrix W that minimizes a cost function measuring the sparseness of the sources; however, it still requires N=M. The SCA approach used here, and referenced in [24][25][26][27], instead breaks the BSS problem into two separate problems: estimation of the mixing or concentration matrix A using a geometric concept known as data clustering [24][25][26][27][28][29], and estimation of the magnitude spectra of the pure components (based on the estimated A) by solving the resulting underdetermined system of linear equations through linear programming [24,25,30,31], an ℓ1-regularized least-squares problem [32,33], or an ℓ2-regularized linear problem [34]. In the case of NMR spectroscopy it is customary to assume that the Fourier basis yields a sparse representation; however, a wavelet basis with a properly chosen wavelet function can yield an even sparser representation.…”
mentioning
confidence: 99%
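The ℓ1-regularized least-squares variant mentioned in this statement is commonly solved with proximal gradient iterations. A minimal ISTA sketch, assuming an illustrative random mixing matrix and sparse source (none of the values below come from the cited work):

```python
import numpy as np

# ISTA for the l1-regularized least-squares problem:
#   min_s  0.5 * ||A s - x||_2^2 + lam * ||s||_1
def ista(A, x, lam=0.1, n_iter=500):
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    s = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ s - x)              # gradient of the smooth term
        z = s - g / L                      # gradient step
        s = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return s

rng = np.random.default_rng(1)
A = rng.standard_normal((10, 30))          # illustrative mixing matrix
s0 = np.zeros(30)
s0[[2, 17]] = [1.0, -1.0]                  # illustrative 2-sparse source
s_est = ista(A, A @ s0, lam=0.05)
```

The soft-threshold step is what drives most coefficients exactly to zero, which is why this formulation is a practical substitute for the linear-programming route when noise is present.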
“…In practice, of course, questions about the required amount of sparsity and the uniqueness of the sparsest solution arise. Donoho and Elad (2003) showed that if some x with fewer than m/2 nonzero entries verifies y = Tx, then this is the unique sparsest solution. This means that we have a good chance of finding the correct and unique sparsest solution to (1) even for two-class problems or for configurations in which the number of training vectors per class is not the same over all classes, provided we have enough vectors in the training set.…”
Section: Sparse Classification
confidence: 99%
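The uniqueness claim can be checked exhaustively on a toy system: enumerate every support up to the sparsity level and verify only one of them solves y = Tx. A brute-force sketch, with an illustrative 4x6 matrix and a 1-sparse solution (not data from the cited paper):

```python
import itertools
import numpy as np

# Toy uniqueness check for y = T x: enumerate all supports of size <= 1
# and keep every exact solution found by least squares on that support.
rng = np.random.default_rng(2)
T = rng.standard_normal((4, 6))            # illustrative dictionary
x_true = np.zeros(6)
x_true[1] = 3.0                            # a 1-sparse solution
y = T @ x_true

candidates = []
for k in (0, 1):
    for S in itertools.combinations(range(6), k):
        z = np.zeros(6)
        if S:
            z[list(S)], *_ = np.linalg.lstsq(T[:, list(S)], y, rcond=None)
        if np.allclose(T @ z, y, atol=1e-8):
            candidates.append(z)
print(len(candidates))  # exactly one solution this sparse
```

For a generic random matrix no other single column can reproduce y, so the enumeration finds only the true support, matching the uniqueness statement above at this tiny scale.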
“…As there are many equivalent ways to characterize the property, we introduce the most common one: the spark [31].…”
Section: Null Space Condition
confidence: 99%
“…Theorem 2.1 [31]: For any vector y ∈ ℝ^m, there exists at most one solution x ∈ Σ_s such that y = Ax if and only if spark(A) > 2s.…”
Section: Null Space Condition
confidence: 99%
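The spark in this theorem is the size of the smallest linearly dependent set of columns, and for tiny matrices it can be computed by brute force. A minimal sketch with an illustrative 3x4 matrix (not from the cited work):

```python
import itertools
import numpy as np

# Brute-force spark: the smallest number k of columns that are linearly
# dependent. Exponential in the number of columns, so toy-sized only.
def spark(A, tol=1e-10):
    m, n = A.shape
    for k in range(1, n + 1):
        for cols in itertools.combinations(range(n), k):
            if np.linalg.matrix_rank(A[:, cols], tol=tol) < k:
                return k
    return n + 1  # no dependent subset found

A = np.array([[1., 0., 0., 1.],
              [0., 1., 0., 1.],
              [0., 0., 1., 1.]])
print(spark(A))  # 4: any 3 columns are independent, all 4 are dependent
```

Here spark(A) = 4 > 2s for s = 1, so by Theorem 2.1 any 1-sparse representation in this dictionary is unique.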