2022
DOI: 10.1137/21m1459812
Compressive Learning for Patch-Based Image Denoising

Abstract: The Expected Patch Log-Likelihood algorithm (EPLL) and its extensions have shown good performance for image denoising. The prior model used by EPLL is usually a Gaussian Mixture Model (GMM) estimated from a database of image patches. Classical mixture model estimation methods face computational issues, as the high dimensionality of the problem requires training on large datasets. In this work, we adapt a compressive statistical learning framework to carry out the GMM estimation. With this method, called sketch…
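
The sketching idea from the abstract can be illustrated concretely. In compressive statistical learning, the patch dataset is compressed into a single "sketch" vector of averaged random Fourier features, and the mixture model is subsequently fitted to that sketch alone. The snippet below is a minimal sketch-computation example in NumPy, not the authors' implementation; the sketch size m, the Gaussian frequency sampling, and the scale parameter are illustrative assumptions:

import numpy as np

def compute_sketch(patches, m, scale=1.0, seed=0):
    # patches: (n, d) array of flattened image patches.
    # Returns the empirical sketch z in C^m: averaged random Fourier features.
    rng = np.random.default_rng(seed)
    n, d = patches.shape
    # Draw m random frequencies; a Gaussian law is one common choice.
    omega = rng.normal(scale=scale, size=(m, d))
    # z_j = (1/n) * sum_k exp(i <omega_j, x_k>)
    return np.exp(1j * (patches @ omega.T)).mean(axis=0)

# Toy usage: 1000 random 8x8 "patches" (d = 64), sketch of size m = 500.
patches = np.random.rand(1000, 64)
z = compute_sketch(patches, m=500)

The appeal is that z has a fixed size independent of the number of patches n, so the training database never needs to be held in memory during estimation.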

Cited by 8 publications (6 citation statements) · References 62 publications

Citation statements (ordered by relevance):
“…However, GMMs have limited expressiveness and can hardly approximate the complicated probability distributions induced by patches [23]. Further, the reconstruction process proposed in [93] is computationally very costly, even though a reduction of the computational effort was considered in several papers [61,72]. Indeed, we will show in our numerical examples that the patchNR clearly outperforms the reconstructions from EPLL.…”
Section: Reconstruction with PatchNRs
confidence: 85%
“…In particular, [93] proposed the negative log-likelihood of all patches of an image as a regularizer, where the underlying patch distribution was assumed to follow a Gaussian mixture model (GMM) whose parameters were learned from a few clean images. This method is still competitive with many approaches based on deep learning, and several extensions have been suggested recently [22,61,72]. However, even though GMMs can approximate any probability density function if the number of components is large enough, they suffer from limited flexibility in the case of a fixed number of components; see [23] and the references therein.…”
Section: Introduction
confidence: 99%
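
For reference, the regularizer described in this statement can be written out explicitly. With $P_i$ denoting the linear operator extracting the $i$-th patch from an image $x$, the EPLL prior of [93] penalizes the negative log-likelihood of all patches under a GMM (a standard formulation; the notation here is ours, not quoted from the citing paper):

$$\mathrm{EPLL}(x) = -\sum_{i} \log p(P_i x), \qquad p(z) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(z \mid \mu_k, \Sigma_k),$$

where $\pi_k$, $\mu_k$, and $\Sigma_k$ are the learned mixture weights, means, and covariances of the $K$ components.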
“…For example, it can mitigate GPU memory exhaustion and reduce training time. This approach has proven successful in various image tasks such as super-resolution and denoising [47,5].…”
Section: Kernel Classes
confidence: 99%
“…Often, a low-rank (‘flat-tail’) approximation is made to reduce the number of estimated parameters. For example, an unknown low-rank covariance model was used in [32,33] to learn an image patch model from a compressed database. This model is then used to perform patch-based image denoising.…”
Section: Open Questions for the Extension to Unknown Variable Covaria...
confidence: 99%
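
To illustrate the low-rank ("flat-tail") covariance model mentioned in this statement: each component covariance can be parameterized as Sigma = V V^T + sigma^2 I with V of shape d x r and r much smaller than d, reducing the per-component parameter count from d(d+1)/2 to roughly rd + 1. The snippet below is a hedged sketch of sampling from such a component without ever forming the d x d covariance; the names and dimensions are illustrative, not taken from [32,33]:

import numpy as np

def sample_flat_tail_gaussian(mu, V, sigma2, n, seed=0):
    # Draw n samples from N(mu, V V^T + sigma2 * I).
    # mu: (d,) mean; V: (d, r) low-rank factor; sigma2: isotropic tail variance.
    # Using x = mu + V a + sqrt(sigma2) e with a ~ N(0, I_r), e ~ N(0, I_d)
    # yields covariance V V^T + sigma2 * I without building the d x d matrix.
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, V.shape[1]))
    e = rng.standard_normal((n, V.shape[0]))
    return mu + a @ V.T + np.sqrt(sigma2) * e

# Toy usage: rank-4 model for 8x8 patches (d = 64, r = 4).
d, r = 64, 4
x = sample_flat_tail_gaussian(np.zeros(d), 0.1 * np.random.randn(d, r), 0.01, n=5)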