2014
DOI: 10.1007/s11263-014-0761-1
Structured Overcomplete Sparsifying Transform Learning with Convergence Guarantees and Applications

Cited by 158 publications (172 citation statements)
References 55 publications
“…The chosen k₀ is the one that provides the smallest individual sparsification error (at sparsity s), i.e., it solves (P1). Thus, the optimal sparse code(s) in (P1) equal the block(s) of the optimal x̂ in (P2) satisfying ‖x̂_{k₀}‖₀ ≤ s. The full proof of this result is presented elsewhere [21]. The preceding arguments establish the equivalence between the union-of-transforms model and the corresponding OCTOBOS model.…”
Section: OCTOBOS Model and Its Learning
confidence: 74%
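The clustering rule quoted above assigns a signal to the block transform that sparsifies it best. A minimal numpy sketch of that rule, under the assumption that each block Wₖ is a square transform and the sparse code keeps the s largest-magnitude transform coefficients (the function name and setup are illustrative, not from the paper):

```python
import numpy as np

def octobos_sparse_code(x, transforms, s):
    """Pick the block transform W_k with the smallest sparsification
    error at sparsity s, and return its sparse code (illustrative sketch).

    x          : (n,) signal
    transforms : list of (n, n) candidate transforms W_k
    s          : number of transform coefficients to keep
    """
    best = None
    for k, W in enumerate(transforms):
        y = W @ x
        z = np.zeros_like(y)
        idx = np.argsort(np.abs(y))[-s:]   # keep the s largest-magnitude coefficients
        z[idx] = y[idx]
        err = float(np.sum((y - z) ** 2))  # sparsification error ||W x - z||^2
        if best is None or err < best[0]:
            best = (err, k, z)
    err, k0, z = best
    return k0, z, err

# toy usage: two random transforms, cluster one signal
rng = np.random.default_rng(0)
Ws = [rng.standard_normal((8, 8)) for _ in range(2)]
k0, z, err = octobos_sparse_code(rng.standard_normal(8), Ws, s=3)
print(k0, np.count_nonzero(z))  # chosen block index, 3 nonzeros kept
```

The chosen index k₀ plays the role of the cluster label, and z is the corresponding block of the sparse code in (P2).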
See 1 more Smart Citation
“…The chosen k0 is the one that provides the smallest individual sparsification (at sparsity s) error (i.e., solves (P1)). Thus, the optimal sparse code(s) in (P1) is equal to the block(s) of the optimalx in (P2) satisfying x k 0 ≤ s. The full proof of this result is presented elsewhere [21]. The preceding arguments establish the equivalence between the union-of-transforms model and the corresponding OCTOBOS model.…”
Section: Octobos Model and Its Learningmentioning
confidence: 74%
“…Because the objective in our algorithm is monotone decreasing and lower bounded [21], it converges. The computational cost per iteration (sparse coding and clustering, and transform update) for learning an m × n (m = Kn) OCTOBOS transform using our algorithm scales as O(mnN).…”
Section: Transform Update Step
confidence: 89%
“…The proposed SOUP-DILLO (dictionary-blind) image reconstruction method outperformed standard benchmarks involving the K-SVD algorithm, as well as some other recent methods in the compressed sensing MRI application. Recent works have investigated the data-driven adaptation of alternative signal models such as the analysis dictionary [14] or transform model [4], [15], [16], [56]. While we focused on synthesis dictionary learning methodologies in this work, we plan to compare various kinds of data-driven models in future work.…”
Section: Discussion
confidence: 99%
“…This is different from the motivation for multi-class models such as in [16], [56] (or [11], [18]), where patches from different regions of an image are assumed to contain different “types” of features, textures, or edges, and thus common sub-dictionaries or sub-transforms are learned for groups of patches with similar features.…”
confidence: 95%
“…In particular, the sparse coding technique has received much attention in recent years. It has shown flexibility and capability in many applications, such as image denoising [7], [8], image super-resolution [9], and face recognition [10]. In these tasks, the input signal (e.g., an image or patch) is represented as a sparse linear combination of the bases in a dictionary.…”
Section: Introduction
confidence: 99%
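The quote above describes synthesis sparse coding: writing a signal as a sparse combination of dictionary atoms. A minimal greedy sketch in the style of orthogonal matching pursuit, with an illustrative dictionary and function name not taken from the cited works:

```python
import numpy as np

def omp(D, x, s):
    """Greedy orthogonal-matching-pursuit-style sketch: approximate x
    with at most s columns (atoms) of the dictionary D."""
    residual = x.copy()
    support = []
    coeffs = np.zeros(0)
    for _ in range(s):
        j = int(np.argmax(np.abs(D.T @ residual)))  # atom most correlated with residual
        if j not in support:
            support.append(j)
        # re-fit coefficients on the current support by least squares
        coeffs, *_ = np.linalg.lstsq(D[:, support], x, rcond=None)
        residual = x - D[:, support] @ coeffs
    alpha = np.zeros(D.shape[1])
    alpha[support] = coeffs
    return alpha

# toy usage: recover a 2-sparse combination of unit-norm random atoms
rng = np.random.default_rng(1)
D = rng.standard_normal((16, 32))
D /= np.linalg.norm(D, axis=0)                  # unit-norm atoms
x_true = D[:, [3, 7]] @ np.array([1.5, -2.0])   # 2-sparse ground truth
alpha = omp(D, x_true, s=2)
print(np.count_nonzero(alpha))
```

The returned vector alpha is the sparse code; x ≈ D·alpha with at most s nonzero coefficients, which is exactly the representation the quote refers to.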