Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
DOI: 10.1109/icassp.1998.681826
Signal processing with the sparseness constraint

Abstract: An overview is given of the role of the sparseness constraint in signal processing problems. It is shown that this is a fundamental problem deserving of attention. This is illustrated by describing several applications where sparseness of solution is desired. Lastly, a review is given of the algorithms that are currently available for computing sparse solutions.

Cited by 78 publications (75 citation statements). References 25 publications.
“…In recent years, there has been a great deal of interest in obtaining sparse codings of y with this procedure for the overcomplete (n > m) case (Mallat & Zhang, 1993; Field, 1994). In our earlier work, we have shown that given an overcomplete dictionary, A (with the columns of A comprising the dictionary vectors), a maximum a posteriori (MAP) estimate of the source vector, x, will yield a sparse coding of y in the low-noise limit if the negative log prior, −log P(x), is concave/Schur-concave (CSC) (Rao, 1998), as discussed below. For P(x) factorizable into a product of marginal probabilities, the resulting code is also known to provide an independent component analysis (ICA) representation of y.…”
Section: Stochastic Models
confidence: 99%
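The CSC-prior claim above can be checked on a toy system. The snippet below is an illustrative sketch, not from the cited paper: for the underdetermined system with A = [1 1] and b = 2, a concave l_p penalty (p < 1, a Schur-concave function of |x|) is smaller for the sparse feasible solution, while the convex l2 cost prefers the dense minimum-norm one.

```python
import numpy as np

# Two solutions of the same underdetermined system A x = b,
# with A = [[1, 1]] and b = [2] (both satisfy x1 + x2 = 2):
x_sparse = np.array([2.0, 0.0])   # one nonzero entry
x_dense = np.array([1.0, 1.0])    # the minimum 2-norm solution

def lp_diversity(x, p):
    """sum_i |x_i|^p -- concave (hence Schur-concave) in |x| for p <= 1."""
    return float(np.sum(np.abs(x) ** p))

# A concave penalty (p = 0.5) is smaller for the sparse solution...
assert lp_diversity(x_sparse, 0.5) < lp_diversity(x_dense, 0.5)
# ...while the convex squared 2-norm is smaller for the dense one.
assert np.sum(x_sparse**2) > np.sum(x_dense**2)
```

This is exactly why a MAP estimate with a concave negative log prior concentrates mass on sparse solutions, whereas a Gaussian prior (quadratic penalty) does not.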
“…FOCUSS, which stands for FOCal Underdetermined System Solver, is an algorithm designed to obtain suboptimally (and, at times, maximally) sparse solutions to the m × n underdetermined linear inverse problem Ax = y (1.1) for known A (Gorodnitsky, George, & Rao, 1995; Adler, Rao, & Kreutz-Delgado, 1996; Rao, 1997, 1998). The sparsity of a vector is the number of zero-valued elements (Donoho, 1994) and is related to the diversity, the number of nonzero elements. Since our initial investigations into the properties of FOCUSS as an algorithm for providing sparse solutions to linear inverse problems in relatively noise-free environments (Gorodnitsky et al., 1995; Rao, 1997; Adler et al., 1996), we now better understand the behavior of FOCUSS in noisy environments and as an interior-point-like optimization algorithm for optimizing concave functionals subject to linear constraints (Kreutz-Delgado & Rao, 1998c, 1999; Kreutz-Delgado, Rao, Engan, Lee, & Sejnowski, 1999a; Engan, Rao, & Kreutz-Delgado, 2000; Rao, Engan, Cotter, & Kreutz-Delgado, 2002).…”
Section: Introduction
confidence: 99%
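A minimal sketch of the basic noiseless FOCUSS iteration, assuming the standard reweighted-minimum-norm form x_{k+1} = W_k (A W_k)^+ b with W_k = diag(|x_k|^(1 - p/2)); the function and parameter names here are illustrative, not from the paper.

```python
import numpy as np

def focuss(A, b, p=0.5, n_iter=50, eps=1e-8):
    """Basic noiseless FOCUSS: reweighted minimum-norm iterations
    x_{k+1} = W_k (A W_k)^+ b with W_k = diag(|x_k|^(1 - p/2)),
    which drive entries of x toward zero (sparse local minima of the
    l_p diversity measure sum_i |x_i|^p subject to A x = b)."""
    x = np.linalg.pinv(A) @ b                 # minimum 2-norm starting point
    for _ in range(n_iter):
        W = np.diag(np.abs(x) ** (1.0 - p / 2.0))
        x = W @ np.linalg.pinv(A @ W) @ b     # reweighted minimum-norm step
        x[np.abs(x) < eps] = 0.0              # prune entries driven to zero
    return x

# Toy underdetermined system (4 equations, 10 unknowns) with a 2-sparse generator
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.0, -2.0]
b = A @ x_true
x_hat = focuss(A, b)                          # sparse solution satisfying A x = b
```

Note the contrast with the minimum 2-norm solution pinv(A) @ b used as the starting point, which is generically fully dense.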
“…References [3,13,4] have proposed the use of the Shannon entropy function as a measure of diversity appropriate for sparse basis selection. Given a probability distribution x̃ obtained by normalizing x, the Shannon entropy is H_S(x̃) = −∑_i x̃_i log x̃_i; (10) the differences arise in how one defines x̃ as a function of x. These differences affect the properties of H_S as a function of x.…”
Section: Entropy Measures
confidence: 99%
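The point about different normalizations x̃ can be illustrated numerically. The helper below is a hedged sketch (names are illustrative): it computes the Shannon-entropy diversity under two common choices of x̃ — l1-normalized magnitudes and l2-normalized squared magnitudes — and a sparse vector scores lower than a maximally dense one under either.

```python
import numpy as np

def shannon_diversity(x, norm="l1"):
    """Shannon-entropy diversity H_S = -sum_i xt_i log xt_i, where xt is a
    probability vector derived from x; how xt is defined from x is a design
    choice, and different choices give different functions of x."""
    x = np.asarray(x, dtype=float)
    if norm == "l1":
        xt = np.abs(x) / np.sum(np.abs(x))     # l1-normalized magnitudes
    else:
        xt = x**2 / np.sum(x**2)               # l2-normalized energies
    xt = xt[xt > 0]                            # 0 log 0 := 0 convention
    return -np.sum(xt * np.log(xt))

dense = np.ones(8)                             # maximally non-sparse: H_S = log 8
sparse = np.zeros(8)
sparse[0] = 4.0                                # 1-sparse: H_S = 0
```

Both normalizations rank the 1-sparse vector below the dense one, but away from these extremes they are genuinely different functions of x, which is what affects the properties of H_S.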
“…There has been considerable recent interest in the issue of best basis selection for sparse signal representation, including approaches that select basis vectors by minimizing diversity measures subject to the constraint Ax = b, (1) where A is an m × n matrix formed using the vectors from an overcomplete dictionary of basis vectors, m < n, and it is assumed that rank(A) = m [3,13,1,10].…”
Section: Introduction
confidence: 99%
“…The matching pursuit algorithm is a method that has been used in many sparse applications [5], [6], [7]. Recently, a parametric method for selecting the structure of a sparse model was proposed by Stoica [8].…”
Section: Introduction
confidence: 99%
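For reference, a minimal greedy matching pursuit in the spirit of Mallat & Zhang (1993) — an illustrative sketch, not the implementation used in [5]–[7]: at each step it selects the dictionary atom most correlated with the current residual and subtracts that atom's contribution.

```python
import numpy as np

def matching_pursuit(A, b, max_atoms=10, tol=1e-6):
    """Greedy matching pursuit: at each step, pick the unit-norm dictionary
    column most correlated with the residual and subtract its projection."""
    D = A / np.linalg.norm(A, axis=0)          # unit-norm atoms
    r = np.array(b, dtype=float)               # residual, initially b
    x = np.zeros(D.shape[1])                   # coefficients w.r.t. D
    for _ in range(max_atoms):
        c = D.T @ r                            # correlations with residual
        i = int(np.argmax(np.abs(c)))
        if abs(c[i]) < tol:                    # nothing left to explain
            break
        x[i] += c[i]
        r -= c[i] * D[:, i]
    return x, r

# With an orthonormal dictionary, MP recovers b exactly in a few steps
x, r = matching_pursuit(np.eye(4), np.array([0.0, 2.0, 0.0, -1.0]))
```

Unlike the globally constrained diversity-minimization approaches above, matching pursuit is greedy: it never revisits earlier coefficient choices, which keeps it cheap but can leave it suboptimal on coherent dictionaries.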