Even after over two decades, the total variation (TV) remains one of the most popular regularizations for image processing problems and has sparked a tremendous amount of research, particularly on moving from scalar to vector-valued functions. In this paper, we consider the gradient of a color image as a three-dimensional matrix or tensor with dimensions corresponding to the spatial extent, the differences to other pixels, and the spectral channels. The smoothness of this tensor is then measured by taking different norms along the different dimensions. Depending on the type of these norms, one obtains very different properties of the regularization, leading to novel models for color images. We call this class of regularizations collaborative total variation (CTV). On the theoretical side, we characterize the dual norm, the subdifferential, and the proximal mapping of the proposed regularizers. We further prove, with the help of the generalized concept of singular vectors, that an ℓ∞ channel coupling makes the most prior assumptions and has the greatest potential to reduce color artifacts. Our practical contributions consist of an extensive experimental section where we compare the performance of a large number of collaborative TV methods for inverse problems like denoising, deblurring, and inpainting.
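To make the norm-layering concrete, the sketch below evaluates two members of the CTV family on a color image stored as an (H, W, C) array: the classical ℓ2 coupling of derivatives and channels with an ℓ1 sum over pixels (the usual vectorial TV), and a variant that couples the channels by ℓ∞, the kind of strong coupling the abstract singles out. The discretization (forward differences) and the function names are illustrative choices, not the paper's notation.

```python
import numpy as np

def forward_diff(u):
    """Forward-difference gradient of a float color image u of shape (H, W, C).
    Returns an array of shape (H, W, 2, C): spatial position x derivative
    direction x color channel -- the 3-D tensor structure the abstract describes."""
    dx = np.zeros_like(u)
    dy = np.zeros_like(u)
    dx[:, :-1, :] = u[:, 1:, :] - u[:, :-1, :]
    dy[:-1, :, :] = u[1:, :, :] - u[:-1, :, :]
    return np.stack([dx, dy], axis=2)

def ctv_l2_l1(u):
    """l2 coupling over derivative directions and channels, l1 over pixels:
    the classical vectorial TV (Frobenius norm of the Jacobian per pixel)."""
    g = forward_diff(u)
    return np.sqrt((g ** 2).sum(axis=(2, 3))).sum()

def ctv_linf_l1(u):
    """l2 over derivative directions, l-infinity coupling of the channels,
    l1 over pixels -- the strong channel coupling argued to reduce color
    artifacts most."""
    g = forward_diff(u)
    per_channel = np.sqrt((g ** 2).sum(axis=2))   # shape (H, W, C)
    return per_channel.max(axis=2).sum()
```

Since the maximum over channels never exceeds their Euclidean norm, the ℓ∞-coupled value is always bounded by the ℓ2-coupled one on the same image.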
Most common cameras use a CCD sensor device measuring a single color per pixel. The other two color values of each pixel must be interpolated from the neighboring pixels in the so-called demosaicking process. State-of-the-art demosaicking algorithms take advantage of inter-channel correlation by locally selecting the best interpolation direction. These methods give impressive results except when the local geometry cannot be inferred from neighboring pixels or the channel correlation is low; in these cases, they create interpolation artifacts. We introduce a new algorithm involving nonlocal image self-similarity in order to reduce interpolation artifacts when the local geometry is ambiguous. The proposed algorithm introduces a clear and intuitive way of balancing how much channel correlation should be taken advantage of. Comparisons show that the proposed algorithm compares favorably with state-of-the-art methods on several image databases.
Pansharpening refers to the fusion process of inferring a high-resolution multispectral image from a high-resolution panchromatic image and a low-resolution multispectral one. In this paper we propose a new variational method for pansharpening which incorporates a nonlocal regularization term and two fidelity terms, one describing the relation between the panchromatic image and the high-resolution spectral channels and the other one preserving the colors from the low-resolution modality. The nonlocal term is based on the image self-similarity principle applied to the panchromatic image. The existence and uniqueness of a minimizer for the described functional is proved in a suitable space of weighted integrable functions. Although quite successful in terms of relative error, state-of-the-art pansharpening methods introduce relevant color artifacts. These spectral distortions can be significantly reduced by involving the image self-similarity. Extensive comparisons with state-of-the-art algorithms are performed.

1. Introduction. Many earth resource satellites, such as IKONOS, Landsat, QuickBird, and SPOT, provide continuously growing quantities of remote sensing images useful for a wide range of both scientific and everyday tasks. For example, satellite images are used to improve visual photointerpretation [54], digital-surface model extraction [45], and texture analysis [48]. Further applications are feature detection [24], land cover classification [33], estimating water depth [38], soil moisture content [43], vegetation mapping [21], and many military tasks such as mission planning, navigation, and targeting. Digital color images are usually represented by three color values at each pixel. Nevertheless, most common cameras use a CCD sensor device measuring a single color per pixel; the other two color values must be interpolated from the neighboring pixels in the so-called demosaicking process. The configuration of the CCD sensor usually follows the Bayer color filter array (CFA), where, out of a group of four pixels, two are green, one is red, and one is blue.
Most satellites decouple the acquisition of a panchromatic image at high spatial resolution from the acquisition of a multispectral image at lower spatial resolution. Pansharpening is a fusion technique used to increase the spatial resolution of the multispectral data while simultaneously preserving its spectral information. In this paper, we consider pansharpening as an optimization problem minimizing a cost function with a nonlocal regularization term. The energy functional to be minimized decouples for each band, thus permitting the application to misregistered spectral components. This is achieved by dropping the commonly used assumption that relates the spectral and panchromatic modalities by a linear transformation; instead, a new constraint that preserves the radiometric ratio between the panchromatic image and each spectral component is introduced. An exhaustive performance comparison of the proposed fusion method with several classical and state-of-the-art pansharpening techniques illustrates its superiority in preserving spatial details, reducing color distortions, and avoiding the creation of aliasing artifacts.
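The radiometric-ratio constraint can be pictured with a toy penalty. The sketch below is one plausible discretization of "keep the band-to-panchromatic ratio that the (upsampled) low-resolution data exhibits", written for illustration only; the paper's actual fidelity term, its weighting, and its treatment of the upsampling may differ.

```python
import numpy as np

def ratio_fidelity(u_band, pan, u_low_up, pan_low_up, eps=1e-6):
    """Quadratic penalty encouraging the fused band u_band to keep the same
    radiometric ratio with the panchromatic image pan as the upsampled
    low-resolution data does.  All inputs are 2-D float arrays of equal
    shape; eps guards against division by zero.  Illustrative only."""
    target = u_low_up / (pan_low_up + eps)      # low-res band / pan ratio
    return 0.5 * np.sum((u_band / (pan + eps) - target) ** 2)
```

A band whose ratio to the panchromatic image exactly matches the low-resolution ratio incurs zero penalty; any deviation is penalized quadratically, band by band, which is what lets the functional decouple across (possibly misregistered) spectral components.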
Denoising is the problem of removing the inherent noise from an image. The standard noise model is additive white Gaussian noise, where the observed image f is related to the underlying true image u by the degradation model f = u + η, and η is supposed to be at each pixel independently and identically distributed as a zero-mean Gaussian random variable. Since this is an ill-posed problem, Rudin, Osher, and Fatemi introduced the total variation as a regularizing term. It has proved to be quite efficient for regularizing images without smoothing the boundaries of the objects. This paper focuses on a simple description of the theory and on the implementation of Chambolle's projection algorithm for minimizing the total variation of a grayscale image. Furthermore, we adapt the algorithm to the vectorial total variation for color images. The implementation is described in detail, and its parameters are analyzed and varied to come up with a reliable implementation. Source Code: ANSI C source code to produce the same results as the demo is accessible at the IPOL web page of this article.
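As a companion to the description above, here is a minimal grayscale sketch of Chambolle's projection algorithm: a fixed-point iteration on the dual variable p with step size τ ≤ 1/8, assuming the standard forward-difference gradient and its adjoint divergence. It is a didactic sketch, not the ANSI C reference implementation published with the article.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Discrete divergence, the negative adjoint of grad (sums to zero)."""
    dx = np.zeros_like(px); dy = np.zeros_like(py)
    dx[:, 0] = px[:, 0]; dx[:, 1:-1] = px[:, 1:-1] - px[:, :-2]; dx[:, -1] = -px[:, -2]
    dy[0, :] = py[0, :]; dy[1:-1, :] = py[1:-1, :] - py[:-2, :]; dy[-1, :] = -py[-2, :]
    return dx + dy

def chambolle_denoise(f, lam=0.5, tau=0.125, n_iter=100):
    """Grayscale ROF denoising, min_u TV(u) + ||u - f||^2 / (2*lam),
    via Chambolle's fixed-point projection on the dual variable p."""
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iter):
        gx, gy = grad(div(px, py) - f / lam)
        norm = 1.0 + tau * np.sqrt(gx ** 2 + gy ** 2)
        px = (px + tau * gx) / norm
        py = (py + tau * gy) / norm
    return f - lam * div(px, py)
```

Because the discrete divergence sums to zero, the iteration preserves the mean of f exactly while smoothing away oscillations; extending it to the vectorial TV amounts to coupling the dual constraint across the color channels.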
Most common cameras use a CCD sensor device measuring a single color per pixel. Demosaicking is the interpolation process by which one can infer a full color image from such a matrix of values, interpolating the two missing components per pixel. Most demosaicking methods take advantage of inter-channel correlation by locally selecting the best interpolation direction. The obtained results look convincing except when the local geometry cannot be inferred from neighboring pixels or the channel correlation is low; in these cases, these algorithms create interpolation artifacts such as zipper effect or color aliasing. This paper discusses the implementation details of the algorithm proposed in [J. Duran, A. Buades, "Self-Similarity and Spectral Correlation Adaptive Algorithm for Color Demosaicking", IEEE Transactions on Image Processing, 23(9), pp. 4031-4040, 2014]. The proposed method involves nonlocal image self-similarity in order to reduce interpolation artifacts when the local geometry is ambiguous. It further introduces a clear and intuitive way of balancing how much channel correlation should be taken advantage of. Source Code: An ANSI C source code implementation of the described algorithms is accessible at the IPOL web page of this article, together with an on-line demo.
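For contrast with the method described above, the following sketch implements only the naive channel-by-channel bilinear baseline for an RGGB Bayer mosaic, i.e., the kind of direction-blind interpolation whose zipper and aliasing artifacts the self-similarity algorithm is designed to avoid. The RGGB mask layout and the zero-padded borders are simplifying assumptions made here, not details of the paper.

```python
import numpy as np

def conv2(x, k):
    """Tiny 'same' 2-D filtering with zero padding (the kernels used below
    are symmetric, so convolution and correlation coincide)."""
    H, W = x.shape; kh, kw = k.shape
    ph, pw = kh // 2, kw // 2
    xp = np.pad(x, ((ph, ph), (pw, pw)))
    out = np.zeros_like(x)
    for i in range(kh):
        for j in range(kw):
            out += k[i, j] * xp[i:i + H, j:j + W]
    return out

def bilinear_demosaick(cfa):
    """Bilinear interpolation of an RGGB Bayer mosaic (float array, H x W).
    NOT the paper's algorithm -- just the per-channel baseline.  Borders are
    darkened by the zero padding; only the interior is exact."""
    H, W = cfa.shape
    r_mask = np.zeros((H, W)); r_mask[0::2, 0::2] = 1.0
    b_mask = np.zeros((H, W)); b_mask[1::2, 1::2] = 1.0
    g_mask = 1.0 - r_mask - b_mask
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0
    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0
    r = conv2(cfa * r_mask, k_rb)
    g = conv2(cfa * g_mask, k_g)
    b = conv2(cfa * b_mask, k_rb)
    return np.stack([r, g, b], axis=2)
```

Each channel is interpolated in isolation, which is precisely why such baselines break down near edges where the interpolation direction matters and the channels disagree.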
We propose a patch-based method for the simultaneous denoising and fusion of a sequence of RAW multi-exposed images. A spatio-temporal criterion is used to select similar patches along the sequence, and a weighted principal component analysis both denoises and fuses the multi-exposed data. The overall strategy denoises and fuses the set of images without the need to recover each denoised image in the multi-exposure set, leading to a very efficient procedure. Several experiments show that the proposed method obtains state-of-the-art fusion results with real RAW data.
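The PCA step can be pictured with a generic sketch: given a stack of similar patches flattened to rows, project onto the leading principal components and discard the rest. This is plain (unweighted) PCA shrinkage for illustration only; the paper's method additionally weights the patches spatio-temporally across the exposure sequence.

```python
import numpy as np

def pca_denoise_patches(patches, keep=0.9):
    """Denoise a stack of similar patches, given as an (N, d) float array,
    by projecting onto the principal components that carry the fraction
    `keep` of the variance.  Generic PCA shrinkage, not the authors'
    weighted multi-exposure scheme."""
    mean = patches.mean(axis=0)
    X = patches - mean
    cov = X.T @ X / max(len(patches) - 1, 1)   # d x d patch covariance
    w, V = np.linalg.eigh(cov)                 # eigenvalues in ascending order
    energy = np.cumsum(w[::-1]) / w.sum()      # descending cumulative energy
    k = np.searchsorted(energy, keep) + 1      # number of components kept
    Vk = V[:, ::-1][:, :k]
    return (X @ Vk) @ Vk.T + mean              # project and reconstruct
```

Noise spreads its energy over all components while the underlying signal concentrates in a few, so truncating the expansion suppresses noise; fusing reduces to combining the retained coefficients across the sequence instead of reconstructing every exposure separately.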