We propose a model to reconstruct wavelet coefficients using a total variation minimization algorithm. The approach is motivated by wavelet signal denoising methods, where thresholding small wavelet coefficients leads to pseudo-Gibbs artifacts. By replacing these thresholded coefficients with values that minimize the total variation, our method performs nearly artifact-free signal denoising. In this paper, we detail the algorithm, which is based on a subgradient descent combined with a projection onto a linear space. The convergence of the algorithm is established and numerical experiments are reported.
Recent years have seen the development of signal denoising algorithms based on the wavelet transform. It has been shown that thresholding the wavelet coefficients of a noisy signal restores the smoothness of the original signal. However, wavelet denoising suffers from a major drawback: around discontinuities the reconstructed signal is smoothed and exhibits pseudo-Gibbs phenomena. We consider the problem of denoising piecewise smooth signals with sharp discontinuities. We propose to apply a traditional wavelet denoising method and then to restore the denoised signal using a total variation minimization approach. This second step removes the Gibbs phenomena and therefore restores sharp discontinuities, while the other structures are preserved. The main innovation of our algorithm is to constrain the total variation minimization by the knowledge of the remaining wavelet coefficients. In this way, we make sure that the restoration process does not deteriorate the information that was considered significant in the denoising step. With this approach we substantially improve the performance of classical wavelet denoising algorithms, both in terms of SNR and in terms of visual artifacts.
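The two-step idea described above can be sketched in one dimension: hard-threshold a wavelet transform, then run a projected subgradient descent that decreases the total variation while re-imposing the coefficients kept as significant. This is a minimal illustration, not the authors' implementation; the one-level Haar transform, the function names, and the threshold and step-size values are all illustrative assumptions.

```python
import numpy as np

def haar(x):
    # One-level orthonormal Haar transform (len(x) must be even).
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # detail coefficients
    return a, d

def ihaar(a, d):
    # Inverse of haar(): exact reconstruction.
    x = np.empty(2 * len(a))
    x[0::2] = (a + d) / np.sqrt(2.0)
    x[1::2] = (a - d) / np.sqrt(2.0)
    return x

def tv_subgradient(x):
    # Subgradient of TV(x) = sum_i |x[i+1] - x[i]|.
    s = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= s
    g[1:] += s
    return g

def tv_constrained_denoise(y, thresh=0.5, step=0.02, n_iter=200):
    a, d = haar(y)
    keep = np.abs(d) >= thresh               # details deemed significant
    x = ihaar(a, np.where(keep, d, 0.0))     # classical hard-threshold denoising
    for _ in range(n_iter):
        x = x - step * tv_subgradient(x)     # subgradient descent on TV
        _, dx = haar(x)
        dx[keep] = d[keep]                   # projection: restore kept details
        x = ihaar(a, dx)                     # ...and the approximation part
    return x
```

Because the iteration ends with the projection, the significant coefficients of the output coincide exactly with those of the thresholded signal, which is the constraint emphasized in the abstract.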
This article proposes a fast and open-source implementation of the well-known Non-Local Means (NLM) denoising algorithm, in its original pixelwise formulation. The fast implementation is based on the computation of patch distances using sums along lines that are invariant under a patch shift. The optimal parameters of NLM (in the average peak signal-to-noise ratio (PSNR) sense) are computed from an image database, thereby leading to a parameter-free NLM implementation. A comparison is performed with the parameter-free blockwise NLM implementation already published in the IPOL journal by Buades, Coll and Morel. As expected, the blockwise implementation offers better PSNR, at least when the noise standard deviation is large enough, but there is no significant difference in quality under visual inspection. The highlight is that the proposed parameter-free pixelwise NLM implementation is faster than the blockwise one by a factor of 6 to 49.
Source Code

The reviewed source code and documentation for the parameter-free fast pixelwise NLM algorithm are available from the web page of this article. Compilation and usage instructions are included in the README.txt file of the archive.
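The shift-invariant-sum trick mentioned in the abstract can be illustrated with a summed-area table: for a fixed patch shift, the squared pixel differences are computed once, and every patch distance for that shift is then a box sum obtained in O(1) per pixel. The function names and parameter values below are illustrative assumptions, not the article's actual code.

```python
import numpy as np

def shifted_sq_diff(u, dy, dx):
    # Pointwise squared difference between u and u translated by (dy, dx).
    v = np.roll(np.roll(u, dy, axis=0), dx, axis=1)
    return (u - v) ** 2

def box_sums(d, r):
    # Sums of d over all (2r+1) x (2r+1) windows via a summed-area table:
    # O(1) per window instead of O(r^2).
    s = np.pad(d, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)
    k = 2 * r + 1
    return s[k:, k:] - s[:-k, k:] - s[k:, :-k] + s[:-k, :-k]

def nlm_weights_for_shift(u, dy, dx, r, h):
    # NLM weight between each patch and its (dy, dx)-shifted counterpart,
    # for patch radius r and filtering parameter h.
    dist = box_sums(shifted_sq_diff(u, dy, dx), r)
    return np.exp(-dist / (h * h))
```

Looping `nlm_weights_for_shift` over all shifts in the search window yields the full set of pixelwise NLM weights while sharing the distance computation across every pixel of the image.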
Coding systems based on the block DCT, such as the JPEG standard, are known to produce blocking and Gibbs artifacts. We propose a method to remove these artifacts without smoothing images and without losing their perceptual features. It consists of a weighted total variation minimization constrained by the knowledge of the quantization intervals. A fast algorithm is proposed, and experiments suggest better performance than state-of-the-art deblocking algorithms.
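The quantization-interval constraint described above admits a simple Euclidean projection: since the 8x8 DCT basis is orthonormal, clipping each coefficient to its interval projects the image onto the constraint set. The sketch below assumes, for simplicity, a single scalar quantization step for all frequencies; a real JPEG table would supply one step per coefficient. Function names are our own, not the paper's.

```python
import numpy as np

def dct_matrix(n=8):
    # Orthonormal DCT-II matrix: C @ C.T == identity.
    k = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C[0] /= np.sqrt(2.0)
    return C

def project_onto_quantization_set(x, q_idx, Q):
    """Project image x onto the set of images whose 8x8 block DCT
    coefficients lie in the quantization intervals [(q-1/2)Q, (q+1/2)Q],
    where q_idx holds the decoded quantization indices (same shape as x)."""
    C = dct_matrix(8)
    out = np.empty_like(x)
    H, W = x.shape
    for i in range(0, H, 8):
        for j in range(0, W, 8):
            c = C @ x[i:i+8, j:j+8] @ C.T            # block DCT
            q = q_idx[i:i+8, j:j+8]
            c = np.clip(c, (q - 0.5) * Q, (q + 0.5) * Q)  # box projection
            out[i:i+8, j:j+8] = C.T @ c @ C          # inverse block DCT
    return out
```

Alternating this projection with total variation descent steps keeps every iterate consistent with the decoded JPEG data, which is the constraint the abstract refers to.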
Patch-based sparse representation and low-rank approximation for image processing have attracted much attention in recent years. The minimization of the matrix rank coupled with a Frobenius-norm data fidelity can be solved by a hard thresholding filter with principal component analysis (PCA) or the singular value decomposition (SVD). Based on this idea, we propose a patch-based low-rank minimization method for image denoising, which learns compact dictionaries from similar patches with PCA or SVD and applies simple hard thresholding filters to shrink the representation coefficients. Compared to recent patch-based sparse representation methods, experiments demonstrate that the proposed method is not only rather fast, but also effective for a variety of natural images, especially for textured parts of images.
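The core operation described above, applied to one group of similar patches, can be sketched as follows: stack the vectorized patches as rows of a matrix, hard-threshold the singular values of the mean-subtracted matrix, and rebuild. This is an illustrative sketch under the stated assumptions (group already formed, a single threshold), not the paper's full pipeline.

```python
import numpy as np

def denoise_patch_group(patches, thresh):
    """patches: (n, p) array whose rows are vectorized similar patches.
    Subtract the group mean, hard-threshold the singular values, rebuild."""
    mean = patches.mean(axis=0)
    U, s, Vt = np.linalg.svd(patches - mean, full_matrices=False)
    s = np.where(s > thresh, s, 0.0)   # hard thresholding filter
    return (U * s) @ Vt + mean
```

Because similar patches form a nearly low-rank matrix while white noise spreads energy over all singular values, zeroing the small singular values suppresses the noise while retaining the shared patch structure.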
This paper describes a compact perceptual image model intended for a morphological representation of the visual information contained in natural images. We explain why the total variation can serve as a criterion to split the information between the two main visual structures, namely the sketch and the microtextures. We deduce a morphological decomposition scheme, based on a segmentation in which the borders of the regions correspond to the locations of the topological singularities of a topographic map. This leads us to propose a new, morphological definition of edges. The sketch is computed by approximating the image with a piecewise smooth non-oscillating function, using a Lipschitz interpolant given as the solution of a PDE. The data needed to reconstruct the sketch image are very compact, so that an immediate outcome of this image model is the design of a progressive, artifact-free image compression scheme.