2009 IEEE 12th International Conference on Computer Vision
DOI: 10.1109/iccv.2009.5459452

Non-local sparse models for image restoration

Cited by 1,429 publications (1,139 citation statements)
References 27 publications
“…Specifically, in traditional approaches to boundary detection, filters have been applied using linear filtering to generate dense, redundant representations. Other work has demonstrated that sparse coding can be beneficial for image compression (e.g., Fischer et al., 2006, 2007; Murray and Kreutz-Delgado, 2006; Pece and Petkov, 2000), image restoration (e.g., Elad and Aharon, 2006; Hyvarinen et al., 1998; Mairal et al., 2009), and classification (e.g., Bociu and Pitas, 2004; Mairal et al., 2008; Ramirez et al., 2010; Wright et al., 2009). Here, it has been shown that sparse coding can also be used for boundary detection.…”
Section: Discussion (mentioning)
confidence: 95%
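The "dense, redundant representations" contrast in this excerpt can be made concrete with a minimal sketch of traditional linear filtering: convolving an image with a filter bank produces one full-resolution response map per filter. The function name and the tiny Sobel filter bank below are illustrative assumptions, not taken from any of the cited works.

```python
import numpy as np
from scipy.signal import convolve2d

def dense_filter_responses(image, filter_bank):
    """Traditional linear filtering for boundary detection: convolving the
    image with every filter in a bank yields one full response map per
    filter -- a dense, redundant representation of the input."""
    return np.stack([convolve2d(image, f, mode="same") for f in filter_bank])

# Example: horizontal and vertical Sobel filters as a tiny filter bank.
sobel_x = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
filter_bank = [sobel_x, sobel_x.T]
```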
“…The two terms in (9) are widely employed in optimization-based dictionary learning (see, for example, [33]–[38] and the references therein). The first term in (9) imposes an ℓ₂ fit between the model and the observed data {y_it}, and the second term imposes regularization on the dictionary elements {d_k}, k = 1, …, K, which constitute the columns of D. For the special case in which Ω = I_m, the second term in (9) reduces…”
Section: Relationship To Previous Models (mentioning)
confidence: 99%
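The excerpt paraphrases the citing paper's equation (9) without reproducing it. A plausible generic form of such a two-term dictionary-learning objective is sketched below; the weight γ_d and the exact placement of Ω are assumptions, not the paper's notation.

```latex
\min_{D,\;\{c_{it}\}}\;
  \sum_{i,t} \bigl\| y_{it} - D\,c_{it} \bigr\|_2^2
  \;+\; \gamma_d \sum_{k=1}^{K} d_k^{\top}\Omega\,d_k,
\qquad D = [\,d_1,\dots,d_K\,]
```

Under this form, the special case Ω = I_m reduces the second term to γ_d Σ_k ‖d_k‖₂², a standard ℓ₂ penalty on the dictionary columns, which matches the excerpt's remark.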
“…The sparsity manifested via (11) is the most distinctive aspect of the proposed model, relative to previous optimization-based approaches [33]–[38]. In that work one often places shrinkage priors on the weights c_i via ℓ₁ regularization, γ_s Σ_i ‖c_i‖₁; in such an approach all the terms in (11) are essentially just replaced with γ_s Σ_i ‖c_i‖₁.…”
Section: Relationship To Previous Models (mentioning)
confidence: 99%
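The ℓ₁ penalty described here induces the familiar soft-thresholding (shrinkage) step. A minimal sketch of ℓ₁-regularized sparse coding via ISTA follows; this is the standard iterative shrinkage-thresholding algorithm, not the citing paper's specific method, and all names are illustrative.

```python
import numpy as np

def soft_threshold(c, gamma):
    """Proximal operator of gamma * ||c||_1: the shrinkage step
    induced by an l1 penalty on the sparse codes."""
    return np.sign(c) * np.maximum(np.abs(c) - gamma, 0.0)

def ista_sparse_code(D, y, gamma, n_iters=100):
    """Minimize 0.5 * ||y - D c||_2^2 + gamma * ||c||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ c - y)           # gradient of the quadratic fit term
        c = soft_threshold(c - grad / L, gamma / L)
    return c
```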
“…Given the excellent performance of non-local methods [2,5], learned sparse models [8,18], and the combination of both [6,17] for random Gaussian noise, they have been explored for impulse noise as well [20,22]. Non-local methods use redundant visual information within an image (i.e., self-similarity) to group similar image patches together, followed by collaborative filtering [2,5].…”
Section: Related Work (mentioning)
confidence: 99%
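The grouping step this excerpt describes can be sketched as simple block matching within a local search window. This is a generic illustration of self-similarity grouping, not the specific implementations of [2,5], and all names and default parameters are assumptions.

```python
import numpy as np

def group_similar_patches(image, ref_xy, patch=8, search=16, n_group=16):
    """Collect the patches most similar to a reference patch within a local
    search window -- the self-similarity grouping step of non-local methods.
    Collaborative filtering (e.g. joint thresholding or joint sparse coding)
    would then operate on the returned 3-D stack of patches."""
    h, w = image.shape
    x0, y0 = ref_xy
    ref = image[x0:x0 + patch, y0:y0 + patch].astype(float)
    candidates = []
    for x in range(max(0, x0 - search), min(h - patch, x0 + search) + 1):
        for y in range(max(0, y0 - search), min(w - patch, y0 + search) + 1):
            p = image[x:x + patch, y:y + patch].astype(float)
            candidates.append((np.sum((p - ref) ** 2), p))  # l2 patch distance
    candidates.sort(key=lambda t: t[0])
    return np.stack([p for _, p in candidates[:n_group]])
```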