Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP '98 (Cat. No.98CH36181)
DOI: 10.1109/icassp.1998.681831
Measures and algorithms for best basis selection

Abstract: A general framework based on majorization, Schur-concavity, and concavity is given that facilitates the analysis of algorithm performance and clarifies the relationships between existing proposed diversity measures useful for best basis selection. Admissible sparsity measures are given by the Schur-concave functions, which are the class of functions consistent with the partial ordering on vectors known as majorization. Concave functions form an important subclass of the Schur-concave functions which attain the…

Cited by 23 publications (20 citation statements). References 10 publications.
“…For example, m = k^{log log k}. In this case, log m = ω(log k), and hence Ω(k log m) ⊂ Ω(k log k). Thus, n = Ω(k log k) is a better sufficient condition than n = Ω(k log m).…”
Section: B. Growing Number of Nonzero Entries
confidence: 99%
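The set inclusion in the excerpt above follows from a one-line comparison. Reading the example as m = k^{log log k} (the scaling consistent with the stated log m = ω(log k)):

```latex
\log m = \log\!\left(k^{\log\log k}\right) = (\log\log k)\,\log k = \omega(\log k),
```

so eventually k log m ≥ k log k, and any n satisfying n = Ω(k log m) also satisfies n = Ω(k log k); as sets of sequences, Ω(k log m) ⊂ Ω(k log k), making n = Ω(k log k) the weaker (hence better) sufficient condition.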
“…No formal proof of the convergence of this algorithm to the true maximum likelihood estimate, Â_ML, has been given in the prior literature, but it appears to perform well in various test cases (Olshausen & Field, 1996). Below, we discuss the problem of dictionary learning within the framework of our recently developed log-prior model-based sparse source vector learning approach, which for a known overcomplete dictionary can be used to obtain sparse codes (Rao, 1998, 1998c, 1999). Such sparse codes can be found using FOCUSS, an affine scaling transformation (AST)-like iterative algorithm that finds a sparse locally optimal MAP estimate of the source vector x for an observation y.…”
Section: Stochastic Models
confidence: 99%
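The FOCUSS iteration mentioned in the excerpt is commonly realized as a reweighted minimum-norm update. The following is a minimal sketch, not the paper's implementation: the function name, dictionary sizes, choice of p, and iteration count are illustrative assumptions.

```python
import numpy as np

def focuss(A, y, p=0.5, n_iter=100):
    """FOCUSS-style reweighted minimum-norm iteration (sketch).

    Each step rescales the problem by W = diag(|x_i|^(1 - p/2))
    (the affine-scaling step) and takes the minimum-norm solution
    of (A W) z = y, pulling x toward a sparse solution of A x = y.
    """
    x = np.linalg.pinv(A) @ y  # minimum 2-norm initialization
    for _ in range(n_iter):
        w = np.abs(x) ** (1.0 - p / 2.0)
        x = w * (np.linalg.pinv(A * w) @ y)  # A * w scales the columns of A
    return x

# Toy overcomplete dictionary; y is generated by a 2-sparse source.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 20))
x_true = np.zeros(20)
x_true[[3, 11]] = [1.5, -2.0]
y = A @ x_true
x_hat = focuss(A, y)
```

Because each iterate solves the reweighted system exactly, the residual A x_hat − y stays near zero while the weights drive small coefficients toward zero, which is the interior-point (AST-like) behavior the excerpt describes.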
“…However, we proceed by assuming the existence of a gradient factorization (equation 2.10), where α(x) is a positive scalar function and Π(x) is symmetric, positive-definite, and diagonal. As discussed in Kreutz-Delgado and Rao (1997, 1998c) and Rao and Kreutz-Delgado (1999), this assumption is generally true for CSC sparsity functions d_p(·) and is key to understanding FOCUSS as a sparsity-inducing interior-point (AST-like) optimization algorithm. With the gradient factorization 2.10, the stationary points of equation 2.9 are readily shown to be solutions to the (equally nonlinear and implicit) system of equations 2.11 and 2.12, where β(x) = λα(x); the second equation follows from identity A.18.…”
Section: The FOCUSS Algorithm
confidence: 99%
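A concrete instance of the factorization the excerpt describes can be worked out for the standard p-norm-like diversity measure; the specific form below is my reconstruction from the stated requirements on α(x) and Π(x), not a quote from the paper. For d_p(x) = Σ_i |x_i|^p with 0 < p ≤ 1,

```latex
\nabla_x d_p(x) = p\,\mathrm{diag}\!\left(|x_i|^{p-2}\right)x
              = \alpha(x)\,\Pi^{-1}(x)\,x,
\qquad
\alpha(x) = p, \quad \Pi(x) = \mathrm{diag}\!\left(|x_i|^{2-p}\right),
```

since d|x_i|^p/dx_i = p|x_i|^{p-1} sign(x_i) = p|x_i|^{p-2} x_i. For x with no zero entries, Π(x) is diagonal and positive-definite, matching the conditions stated for equation 2.10.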