Abstract. In this paper, we address the problem of recovering a color image from a grayscale one. The input color data come from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is viewed here as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach in which a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold: first, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme that computes a local minimum of the defined nonconvex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data, and our energy minimization automatically selects the best color to transfer for each pixel of the grayscale image. Finally, experiments illustrate the potential of our simple methodology and show that our results are very competitive with state-of-the-art methods.
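The idea of selecting one color per pixel from a candidate set while enforcing spatial coherence can be illustrated with a toy greedy (ICM-style) minimizer. This is a hedged sketch, not the paper's actual minimization scheme: the function name `select_colors`, the squared-difference neighbor coupling, and the 4-neighborhood are all assumptions made here for illustration.

```python
import numpy as np

def select_colors(candidates, cost, lam=1.0, n_iters=10):
    """Greedy (ICM-style) minimization of a toy selection energy.

    candidates: (H, W, K, 3) array of K color candidates per pixel.
    cost:       (H, W, K) data cost of choosing each candidate.
    The energy adds lam * squared color difference between 4-neighbors.
    Returns the (H, W) array of selected candidate indices.
    """
    H, W, K, _ = candidates.shape
    labels = cost.argmin(axis=2)  # initialization: best data cost only
    for _ in range(n_iters):
        # Colors currently assigned to every pixel.
        colors = np.take_along_axis(
            candidates, labels[..., None, None], axis=2)[:, :, 0, :]
        for y in range(H):
            for x in range(W):
                best_k, best_e = labels[y, x], np.inf
                for k in range(K):
                    e = cost[y, x, k]
                    c = candidates[y, x, k]
                    # Spatial coherence term over the 4-neighborhood.
                    for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < H and 0 <= nx < W:
                            e += lam * np.sum((c - colors[ny, nx]) ** 2)
                    if e < best_e:
                        best_e, best_k = e, k
                labels[y, x] = best_k
    return labels
```

With `lam=0` the spatial term vanishes and the selection reduces to the per-pixel minimum of the data cost, which makes the role of the coupling term easy to see.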
Inpainting is the art of modifying an image in a way that is not detectable by an ordinary observer. There are numerous and very different approaches to the inpainting problem, though, as explained in this paper, the most successful algorithms are based upon one or two of the following three basic techniques: copy-and-paste texture synthesis, geometric partial differential equations (PDEs), and coherence among neighboring pixels. We combine these three building blocks in a variational model and provide a working algorithm for image inpainting that approximates the minimum of the proposed energy functional. Our experiments show that the combination of all three terms of the proposed energy works better than taking each term separately, and the results obtained are within the state of the art.
This paper provides a new method to colorize gray-scale images. While the computation of the luminance channel is directly performed by a linear transformation, the colorization process is an ill-posed problem that requires some priors. In the literature, two classes of approaches exist. The first class includes manual methods that require the user to manually add colors to the image to colorize. The second class includes exemplar-based approaches where a color image with similar semantic content is provided as input to the method. These two types of priors have their own advantages and drawbacks. In this paper, a new variational framework for exemplar-based colorization is proposed. A nonlocal approach is used to find relevant colors in the source image in order to suggest colors for the gray-scale image. The spatial coherency of the result, as well as the final color selection, is provided by a nonconvex variational framework based on total variation. An efficient primal-dual algorithm is provided, and a proof of its convergence is proposed. In this work, we also extend the proposed exemplar-based approach to combine both exemplar-based and manual methods, providing a single framework that unifies the advantages of both approaches. Finally, experiments and comparisons with state-of-the-art methods illustrate the efficiency of our proposal.

1. Introduction. The colorization of a gray-scale image consists of adding color information to it. It is useful in the entertainment industry to make old productions more attractive. The reverse operation is based on perceptual assumptions and is today an active research area [28], [13], [37]. Colorization can also be used to add information in order to help further analysis of the image by a user (e.g., sensor fusion [43]). It can also be used for art restoration; see, e.g., [17] or [41]. It is an old subject that began with the ability of screens and devices to display colors.
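The total-variation and primal-dual ingredients mentioned above can be illustrated on the classic convex ROF model, min_u TV(u) + (lam/2)||u - f||^2, solved with the Chambolle-Pock primal-dual iteration. This is only a sketch of the standard building block, not the paper's nonconvex exemplar-based functional; the names and the choice lam of fidelity weight are assumptions made here.

```python
import numpy as np

def grad(u):
    """Forward differences with Neumann boundary conditions."""
    gx = np.zeros_like(u); gy = np.zeros_like(u)
    gx[:, :-1] = u[:, 1:] - u[:, :-1]
    gy[:-1, :] = u[1:, :] - u[:-1, :]
    return gx, gy

def div(px, py):
    """Negative adjoint of grad."""
    d = np.zeros_like(px)
    d[:, :-1] += px[:, :-1]; d[:, 1:] -= px[:, :-1]
    d[:-1, :] += py[:-1, :]; d[1:, :] -= py[:-1, :]
    return d

def tv_denoise(f, lam=4.0, n_iters=200):
    """Chambolle-Pock primal-dual iteration for the ROF model
    min_u TV(u) + (lam/2) * ||u - f||^2  (isotropic TV)."""
    tau = sigma = 1.0 / np.sqrt(8.0)  # tau * sigma * L^2 <= 1, L^2 = 8
    u = f.copy(); ubar = f.copy()
    px = np.zeros_like(f); py = np.zeros_like(f)
    for _ in range(n_iters):
        # Dual ascent + projection onto the unit ball.
        gx, gy = grad(ubar)
        px += sigma * gx; py += sigma * gy
        norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
        px /= norm; py /= norm
        # Primal descent (closed-form proximal step) + over-relaxation.
        u_old = u
        u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
        ubar = 2 * u - u_old
    return u
```

On a constant image the iteration is a fixed point, and on a noisy image it smooths while preserving edges, which is the behavior the TV regularizer contributes to the colorization framework.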
A seminal approach consists of mapping each gray level into a color space [18]. Nevertheless, not all colors can be recovered without an additional prior. In the existing approaches, priors can be added in two ways: with a direct addition of color o
Superpixels have become very popular in many computer vision applications. Nevertheless, they remain under-exploited, since superpixel decomposition may produce irregular and unstable segmentation results due to its dependence on the image content. In this paper, we first introduce a novel structure, a superpixel-based patch called a SuperPatch. The proposed structure, based on the superpixel neighborhood, leads to a robust descriptor, since spatial information is naturally included. We then introduce SuperPatchMatch, a generalization of the PatchMatch method to SuperPatches. Finally, we propose a framework to perform fast segmentation and labeling from an image database, and demonstrate the potential of our approach by outperforming state-of-the-art methods in terms of both computational cost and accuracy on face labeling and medical image segmentation.
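A superpixel-neighborhood descriptor of the kind described above can be sketched very simply: describe a superpixel by its own mean feature together with the mean feature of its neighbors, then match descriptors between images. This is a hypothetical simplification for illustration only (the function names, the mean-pooling choice, and the brute-force matcher are all assumptions; PatchMatch-style methods replace brute-force search with randomized search for speed).

```python
import numpy as np

def superpatch_descriptor(features, labels, sp_id, adjacency):
    """Toy SuperPatch-like descriptor: mean feature of superpixel
    sp_id concatenated with the mean feature of its neighbors.

    features:  (N, D) per-pixel feature vectors.
    labels:    (N,) superpixel index of each pixel.
    adjacency: dict mapping superpixel id -> list of neighbor ids.
    """
    own = features[labels == sp_id].mean(axis=0)
    neigh = [features[labels == n].mean(axis=0) for n in adjacency[sp_id]]
    ctx = np.mean(neigh, axis=0) if neigh else np.zeros_like(own)
    return np.concatenate([own, ctx])

def match(desc, target_descs):
    """Brute-force nearest neighbor over target descriptors."""
    d2 = ((target_descs - desc) ** 2).sum(axis=1)
    return int(d2.argmin())
```

Because the neighborhood context is baked into the descriptor, two superpixels with similar appearance but different surroundings produce different descriptors, which is the robustness argument made in the abstract.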
In this paper, we address the difficult task of detecting and segmenting foreground moving objects in complex scenes. The sequences we consider exhibit highly dynamic backgrounds, illumination changes, and low contrast, and may have been shot by a moving camera. The proposed method comprises three main steps. First, a set of moving points is selected within a sub-grid of image pixels, and a multi-cue descriptor is associated with each of these points. Clusters of points are then formed using a variable-bandwidth mean shift technique with automatic bandwidth selection. Finally, segmentation of the object associated with a given cluster is performed using graph cuts. Experiments and comparisons with other motion detection methods on challenging sequences demonstrate the performance of the proposed method for video analysis in complex scenes.
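The clustering step can be illustrated with plain fixed-bandwidth mean shift; the paper uses a variable-bandwidth variant with automatic bandwidth selection, so the following is a simplified sketch under that assumption (the Gaussian kernel and the fixed `bandwidth` parameter are choices made here, not the authors').

```python
import numpy as np

def mean_shift(points, bandwidth=1.0, n_iters=30):
    """Fixed-bandwidth mean shift with a Gaussian kernel.

    Each point iteratively moves to the kernel-weighted mean of the
    original samples; points that converge to the same mode belong
    to the same cluster.
    """
    modes = points.astype(float).copy()
    for _ in range(n_iters):
        # Squared distances from every current mode to every sample.
        d2 = ((modes[:, None, :] - points[None, :, :]) ** 2).sum(-1)
        w = np.exp(-d2 / (2.0 * bandwidth ** 2))
        modes = (w[:, :, None] * points[None, :, :]).sum(1) \
            / w.sum(1, keepdims=True)
    return modes
```

After convergence, grouping points whose modes coincide (up to a tolerance) yields the clusters that, in the pipeline above, each seed a graph-cut segmentation.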