Abstract—We address the image denoising problem, where zero-mean, white, homogeneous Gaussian additive noise is to be removed from a given image. The approach taken is based on sparse and redundant representations over trained dictionaries. Using the K-SVD algorithm, we obtain a dictionary that describes the image content effectively. Two training options are considered: using the corrupted image itself, or training on a corpus of high-quality images. Since the K-SVD is limited to handling small image patches, we extend its deployment to arbitrary image sizes by defining a global image prior that forces sparsity over patches at every location in the image. We show how such a Bayesian treatment leads to a simple and effective denoising algorithm with state-of-the-art performance, equivalent to and sometimes surpassing recently published leading alternative denoising methods.

Index Terms—Bayesian reconstruction, dictionary learning, discrete cosine transform (DCT), image denoising, K-SVD, matching pursuit, maximum a posteriori (MAP) estimation, redundancy, sparse representations.
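The patch-wise sparse coding that this denoising scheme relies on is typically carried out with a greedy pursuit such as orthogonal matching pursuit (OMP). A minimal numpy sketch of that step, under illustrative assumptions (the identity-plus-DCT dictionary and the 2-sparse signal below stand in for the paper's trained K-SVD dictionary and noisy patches):

```python
import numpy as np

def omp(D, y, n_nonzero):
    """Orthogonal matching pursuit: greedily sparse-code y over the columns of D."""
    residual = y.astype(float).copy()
    support = []
    coef = np.zeros(D.shape[1])
    for _ in range(n_nonzero):
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k not in support:
            support.append(k)
        # re-fit all selected coefficients jointly by least squares
        sol, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ sol
    coef[support] = sol
    return coef

# toy overcomplete dictionary: identity atoms plus normalized DCT atoms
n = 16
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
C = np.cos(np.pi * (2 * i + 1) * j / (2 * n))
C /= np.linalg.norm(C, axis=0)
D = np.hstack([np.eye(n), C])

x_true = np.zeros(2 * n)
x_true[n + 2], x_true[n + 7] = 1.2, -0.8   # 2-sparse ground truth
y = D @ x_true
x_hat = omp(D, y, n_nonzero=2)             # recovers the two active atoms
```

In the full denoising algorithm this sparse-coding stage alternates with K-SVD dictionary-update sweeps, and the coded patches are averaged back into the image according to the global prior.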
Given a dictionary D = {d_k} of vectors d_k, we seek to represent a signal S as a linear combination S = Σ_k γ(k) d_k, with scalar coefficients γ(k). In particular, we aim for the sparsest representation possible. In general, this requires a combinatorial optimization process. Previous work considered the special case where D is an overcomplete system consisting of exactly two orthobases and showed that, under a condition of mutual incoherence of the two bases, and assuming that S has a sufficiently sparse representation, this representation is unique and can be found by solving a convex optimization problem: specifically, minimizing the ℓ1 norm of the coefficients γ. In this article, we obtain parallel results in a more general setting, where the dictionary D can arise from two or several bases, frames, or even less structured systems. We sketch three applications: separating linear features from planar ones in 3-D data, noncooperative multiuser encoding, and identification of overcomplete independent component models.

Workers throughout engineering and the applied sciences frequently want to represent data (signals, images) in the most parsimonious terms. In signal analysis specifically, they often consider models proposing that the signal of interest is sparse in some transform domain, such as the wavelet or Fourier domain (1). However, there is a growing realization that many signals are mixtures of diverse phenomena, and no single transform can be expected to describe them well; instead, we should consider models making sparse combinations of generating elements from several different transforms (1-4). Unfortunately, as soon as we start considering general collections of generating elements, the attempt to find sparse solutions enters mostly uncharted territory, and one expects at best to use plausible heuristic methods (5-8) and certainly to give up hope of rigorous optimality.
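The convex program mentioned here, minimizing the ℓ1 norm of the coefficients subject to exact representation (basis pursuit), reduces to a linear program via the standard split γ = u − v with u, v ≥ 0. A small scipy sketch under illustrative assumptions (the identity-plus-Hadamard two-orthobasis dictionary and the 1-sparse signal are invented for the demo):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(D, s):
    """min ||gamma||_1  s.t.  D @ gamma = s, as an LP with gamma = u - v, u, v >= 0."""
    L = D.shape[1]
    res = linprog(c=np.ones(2 * L),
                  A_eq=np.hstack([D, -D]), b_eq=s,
                  bounds=[(0, None)] * (2 * L), method="highs")
    return res.x[:L] - res.x[L:]

# dictionary made of two mutually incoherent orthobases: identity and Hadamard
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(H2, H2) / 2.0            # 4x4 orthonormal Hadamard basis
D = np.hstack([np.eye(4), H])

s = 2.0 * H[:, 3]                    # signal that is 1-sparse in the Hadamard part
gamma = basis_pursuit(D, s)          # ℓ1 minimization picks the single atom
```

Representing s through the identity basis instead would cost ‖s‖₁ = 4 in ℓ1 norm, so the linear program correctly prefers the single Hadamard atom at cost 2.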
In this article, we will develop some rigorous results showing that it can be possible to find optimally sparse representations by efficient techniques in certain cases.

Suppose we are given a dictionary D of L generating elements, each one a vector in C^N, which we assume normalized. The dictionary D can be viewed as a matrix of size N × L, with the generating elements for columns. We do not suppose any fixed relationship between N and L. In particular, the dictionary can be overcomplete and contain linearly dependent subsets; in particular, it need not be a basis. As examples of such dictionaries, we can mention: the wavelet packet and cosine packet dictionaries of Coifman et al. (3), which contain L = N log(N) elements, representing transient harmonic phenomena with a variety of durations and locations; wavelet frames, such as the directional wavelet frames of Ron and Shen (9), which contain L = CN elements for various constants C > 1; and the combined ridgelet/wavelet systems of Starck, Candès, and Donoho (8, 10). Faced with such variety, we cannot call individual elements in the dictionary basis elements; we will use the term...
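A key quantity behind such uniqueness results is the mutual coherence of the dictionary: the largest absolute inner product between distinct unit-norm atoms. Representations with fewer than (1 + 1/μ)/2 nonzeros are provably unique. A quick sketch (the identity-plus-Hadamard dictionary is an illustrative choice, not one from the article):

```python
import numpy as np

def mutual_coherence(D):
    """Largest |inner product| between distinct columns of D (columns unit-norm)."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)   # ignore each atom's self-correlation
    return G.max()

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(H2, H2) / 2.0                 # orthonormal Hadamard basis for R^4
D = np.hstack([np.eye(4), H])             # two-orthobasis dictionary: N=4, L=8

mu = mutual_coherence(D)                  # 1/sqrt(N) = 0.5 for this pair of bases
bound = 0.5 * (1.0 + 1.0 / mu)           # representations sparser than this are unique
```

Two orthobases achieve coherence as low as 1/√N (the identity/Fourier pair is the classical extremal example), which is what makes such dictionaries favorable for sparse recovery.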
Abstract—Overcomplete representations are attracting interest in signal processing theory, particularly due to their potential to generate sparse representations of signals. However, in general, the problem of finding sparse representations must be unstable in the presence of noise. This paper establishes the possibility of stable recovery under a combination of sufficient sparsity and favorable structure of the overcomplete system. Considering an ideal underlying signal that has a sufficiently sparse representation, it is assumed that only a noisy version of it can be observed. Assuming further that the overcomplete system is incoherent, it is shown that the optimally sparse approximation to the noisy data differs from the optimally sparse decomposition of the ideal noiseless signal by at most a constant multiple of the noise level. As this optimal-sparsity method requires heavy (combinatorial) computational effort, approximation algorithms are considered. It is shown that similar stability is also available using the basis pursuit and matching pursuit algorithms. Furthermore, it is shown that these methods result in sparse approximations of the noisy data that contain only terms also appearing in the unique sparsest representation of the ideal noiseless sparse signal.
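Of the two pursuit algorithms mentioned, matching pursuit is the simpler: it repeatedly subtracts the atom best correlated with the current residual. A minimal sketch under illustrative assumptions (orthonormal Hadamard atoms and a noiseless 2-sparse signal; on a noisy input the residual norm would settle near the noise level instead of reaching zero):

```python
import numpy as np

def matching_pursuit(D, y, n_iter):
    """Plain matching pursuit: peel off one best-correlated atom per iteration."""
    residual = y.astype(float).copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_iter):
        corr = D.T @ residual
        k = int(np.argmax(np.abs(corr)))   # best-matching atom
        coef[k] += corr[k]                 # accumulate its coefficient
        residual -= corr[k] * D[:, k]      # remove its contribution
    return coef, residual

H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H = np.kron(H2, H2) / 2.0                  # orthonormal atoms
y = 1.5 * H[:, 1] - 0.5 * H[:, 3]          # 2-sparse ideal signal
coef, residual = matching_pursuit(H, y, n_iter=2)
```

Because the atoms here are orthonormal, two iterations recover the representation exactly; the stability result quoted above says that with incoherent atoms and bounded noise, the selected terms still come from the true sparse support.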
Abstract. A full-rank matrix A ∈ R n×m with n < m generates an underdetermined system of linear equations Ax = b having infinitely many solutions. Suppose we seek the sparsest solution, i.e., the one with the fewest nonzero entries. Can it ever be unique? If so, when? As optimization of sparsity is combinatorial in nature, are there efficient methods for finding the sparsest solution? These questions have been answered positively and constructively in recent years, exposing a wide variety of surprising phenomena, in particular the existence of easily verifiable conditions under which optimally sparse solutions can be found by concrete, effective computational methods. Such theoretical results inspire a bold perspective on some important practical problems in signal and image processing. Several well-known signal and image processing problems can be cast as demanding solutions of underdetermined systems of equations. Such problems have previously seemed, to many, intractable, but there is considerable evidence that these problems often have sparse solutions. Hence, advances in finding sparse solutions to underdetermined systems have energized research on such signal and image processing problems, to striking effect. In this paper we review the theoretical results on sparse solutions of linear systems, empirical results on sparse modeling of signals and images, and recent applications in inverse problems and compression in image processing. This work lies at the intersection of signal processing and applied mathematics, and arose initially from the wavelets and harmonic analysis research communities. The aim of this paper is to introduce a few key notions and applications connected to sparsity, targeting newcomers interested in either the mathematical aspects of this area or its applications.
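The contrast the abstract draws can be seen on a tiny underdetermined system: the familiar minimum-ℓ2 (pseudoinverse) solution spreads energy over many entries, while a far sparser solution also exists. A hand-picked illustration (the matrix and right-hand side are invented for the demo):

```python
import numpy as np

# underdetermined system: 2 equations, 4 unknowns, infinitely many solutions
A = np.array([[1.0, 0.0, 1.0,  1.0],
              [0.0, 1.0, 1.0, -1.0]])
b = np.array([2.0, 0.0])

x_l2 = np.linalg.pinv(A) @ b                 # minimum-energy solution: [2/3, 0, 2/3, 2/3]
x_sparse = np.array([2.0, 0.0, 0.0, 0.0])    # a 1-sparse solution, found by inspection

# both satisfy Ax = b, but they differ drastically in sparsity
```

The ℓ2 solution uses three nonzero entries where one suffices; the paper's subject is exactly when and how the 1-nonzero answer can be found efficiently and certified as the sparsest.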
Super-resolution reconstruction produces one or a set of high-resolution images from a set of low-resolution images. In the last two decades, a variety of super-resolution methods have been proposed. These methods are usually very sensitive to their assumed model of data and noise, which limits their utility. This paper reviews some of these methods and addresses their shortcomings. We propose an alternate approach using L1 norm minimization and robust regularization based on a bilateral prior to deal with different data and noise models. This computationally inexpensive method is robust to errors in motion and blur estimation and results in images with sharp edges. Simulation results confirm the effectiveness of our method and demonstrate its superiority to other super-resolution methods.
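One consequence of the L1 data-fidelity term advocated here: for perfectly registered frames with a common blur, its pixelwise minimizer is the median of the measurements, which is what makes the fusion robust to outlier frames. A toy numpy illustration under simplifying assumptions (no motion or blur operators; frame values are made up):

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.full((8, 8), 1.0)                         # flat "scene"
frames = truth + 0.01 * rng.normal(size=(5, 8, 8))   # five mildly noisy frames
frames[0] += 9.0                                     # one grossly corrupted frame

l2_fusion = frames.mean(axis=0)        # L2 data term -> mean: dragged by the outlier
l1_fusion = np.median(frames, axis=0)  # L1 data term -> median: ignores the outlier
```

The mean lands near 2.8 at every pixel while the median stays near the true value of 1.0, mirroring the robustness to motion and blur estimation errors claimed in the abstract.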
Purpose: Parallel imaging allows the reconstruction of images from undersampled multi-coil data. The two main approaches are SENSE, which explicitly uses coil sensitivities, and GRAPPA, which makes use of learned correlations in k-space. The purpose of this work is to clarify their relationship and to develop and evaluate an improved algorithm.

Theory and Methods: A theoretical analysis shows: 1. The correlations in k-space are encoded in the null space of a calibration matrix. 2. Both approaches restrict the solution to a subspace spanned by the sensitivities. 3. The sensitivities appear as the main eigenvector of a reconstruction operator computed from the null space. The basic assumptions and the quality of the sensitivity maps are evaluated in experimental examples. The appearance of additional eigenvectors motivates an extended SENSE reconstruction with multiple maps, which is compared to existing methods.

Results: The existence of a null space and the high quality of the extracted sensitivities are confirmed. The extended reconstruction combines all advantages of SENSE with robustness to certain errors similar to GRAPPA.

Conclusion: In this paper the gap between both approaches is finally bridged. A new autocalibration technique combines the benefits of both.
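The SENSE side of the comparison reduces, for a regular undersampling factor R, to solving a small linear system per aliased pixel using the known coil sensitivities. A 1-D toy sketch with R = 2 and two coils (the sensitivity profiles are invented and assumed known; no null-space calibration of the kind this paper develops is performed):

```python
import numpy as np

rng = np.random.default_rng(0)
N, R = 8, 2
m = rng.normal(size=N)                      # true 1-D "image"
x = np.arange(N)
# two smooth, linearly independent coil sensitivity profiles (assumed known)
S = np.stack([1.0 + 0.3 * np.cos(np.pi * x / N),
              1.0 + 0.3 * np.sin(np.pi * x / N)])

# R=2 undersampling folds pixel p onto pixel p + N/R in every coil image
folded = np.stack([(S[c] * m)[: N // R] + (S[c] * m)[N // R :] for c in range(2)])

recon = np.zeros(N)
for p in range(N // R):
    # 2x2 SENSE system: two coils observing two superimposed pixels
    E = np.array([[S[0, p], S[0, p + N // R]],
                  [S[1, p], S[1, p + N // R]]])
    recon[[p, p + N // R]] = np.linalg.solve(E, folded[:, p])
```

The paper's contribution is to obtain the maps S autocalibrated from the data, as eigenvectors of an operator built from the calibration matrix's null space, and to extend this per-pixel solve to multiple sensitivity maps.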