2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854989

Overcomplete sparsifying transform learning algorithm using a constrained least squares approach

Abstract: Analysis sparsity and the accompanying analysis operator learning problem provide an important framework for signal modeling. Very recently, sparsifying transform learning has been put forward as an effective and new formulation for the analysis operator learning problem. In this study, we develop a new sparsifying transform learning algorithm by using the uniform normalized tight frame constraint. The new algorithm bypasses the computationally expensive analysis sparse coding step of the standard analysis ope…
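The abstract outlines an alternating transform-learning scheme under a uniform normalized tight frame (UNTF) constraint that avoids an explicit analysis sparse coding step. A minimal NumPy sketch of such a scheme follows; the hard-thresholding sparse-coding step, the least-squares fit, the heuristic UNTF projection, and all variable names and sizes are illustrative assumptions, not the paper's exact CLS-TL updates.

import numpy as np

def sparse_code(WX, s):
    """Transform-domain sparse coding: keep the s largest-magnitude entries
    in each column of W @ X and zero out the rest (cheap hard thresholding)."""
    Z = np.zeros_like(WX)
    rows = np.argsort(-np.abs(WX), axis=0)[:s, :]
    cols = np.arange(WX.shape[1])
    Z[rows, cols] = WX[rows, cols]
    return Z

def project_untf(W):
    """Heuristic projection toward a uniform normalized tight frame:
    replace W by its polar factor (making W^T W proportional to I),
    then rescale the rows to unit norm."""
    U, _, Vt = np.linalg.svd(W, full_matrices=False)
    W_tight = U @ Vt
    return W_tight / np.linalg.norm(W_tight, axis=1, keepdims=True)

def learn_transform(X, K, s, n_iter=30, seed=0):
    """Alternate between transform-domain thresholding and a constrained
    least-squares update of the overcomplete transform W (K x n, K >= n)."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    W = project_untf(rng.standard_normal((K, n)))
    gram_inv = np.linalg.pinv(X @ X.T)          # (X X^T)^+, reused every iteration
    for _ in range(n_iter):
        Z = sparse_code(W @ X, s)               # no analysis sparse coding step
        W = project_untf(Z @ X.T @ gram_inv)    # least-squares fit, then UNTF projection
    return W

The sparse-coding step here is a simple per-column thresholding of W @ X, which reflects the abstract's point that transform learning sidesteps the expensive analysis sparse coding required by standard analysis operator learning.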

Citations: Cited by 7 publications (5 citation statements)
References: 18 publications
“…In addition, we compare our algorithm MADL with some state-of-the-art algorithms, namely, AOL [27], transform K-SVD (TKSVD) [45], and constrained-least-squares sparsifying transform learning (CLS-TL) [46], all of which involve the orthogonality constraint on the analysis dictionary. The results are shown in Fig.…”
Section: Figure
mentioning confidence: 99%
“…The test images are obtained from the Yale face database 2 . We compare the denoising performance for images using a learned dictionary and a finite difference (FD) operator [47], and we compare the algorithm MADL with the advanced algorithms AOL [27], TKSVD [45], and CLS-TL [46] to assess the efficiency of the proposed algorithm. In our experiments, we set the parameter of the Douglas-Rachford algorithm to µ = 0.0002 (in Algorithm 2).…”
Section: B. Image Denoising With the Learned Dictionary
mentioning confidence: 99%
“…Both of the subproblems (9a) and (9b) have exact, closed form solutions [27]. The solution for (9a) can be given as (ΩX)_β, where (·)_β denotes the elementwise soft thresholding operation [29]. The soft thresholded result is calculated as follows.…”
Section: Patch-Wise Regularization: Updating Ω and X
mentioning confidence: 99%
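The quoted step computes a sparse code by elementwise soft thresholding of ΩX. A minimal NumPy sketch of that operator is given below; the threshold value, matrix sizes, and variable names are illustrative and not taken from the cited paper.

import numpy as np

def soft_threshold(M, beta):
    """Elementwise soft thresholding (M)_beta: shrink every entry toward zero
    by beta, setting entries with magnitude below beta to zero."""
    return np.sign(M) * np.maximum(np.abs(M) - beta, 0.0)

# Example with stand-in shapes: threshold the analysis coefficients Omega @ X.
Omega = np.random.randn(128, 64)   # analysis operator (illustrative size)
X = np.random.randn(64, 500)       # patch matrix (illustrative size)
Z = soft_threshold(Omega @ X, beta=0.1)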
“…Both (P2.1.1) and (P2.1.2) have closed form solutions [9]. (P2.1.1) is solved by simple soft thresholding [11]. The solution for (P2.1.2) involves an SVD over the matrix L⁻¹XAᴴ, where L is the solution to XXᴴ + λI = LLᴴ.…”
Section: The G-TLMRI Algorithm
mentioning confidence: 99%
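For context, a closed-form transform update built around an SVD of L⁻¹XAᴴ (with XXᴴ + λI = LLᴴ) typically takes the shape sketched below in NumPy; the singular-value reweighting shown follows the standard square sparsifying-transform update and is an assumption here, as are the variable names X, A, and lam.

import numpy as np

def transform_update(X, A, lam):
    """Closed-form transform update of the kind described above: factor
    X X^H + lam*I = L L^H, take an SVD of L^{-1} X A^H, reweight the singular
    values, and assemble W. The reweighting used here is the one from standard
    square sparsifying-transform learning; the cited paper's exact variant may differ."""
    n = X.shape[0]
    L = np.linalg.cholesky(X @ X.conj().T + lam * np.eye(n))
    Linv = np.linalg.inv(L)
    Q, sig, Rh = np.linalg.svd(Linv @ X @ A.conj().T)    # L^{-1} X A^H = Q diag(sig) R^H
    gain = 0.5 * (sig + np.sqrt(sig ** 2 + 2.0 * lam))   # reweighted singular values
    return (Rh.conj().T * gain) @ Q.conj().T @ Linv      # W = R diag(gain) Q^H L^{-1}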