Abstract: In this paper, we address the problem of recovering a color image from a grayscale one. The input color data comes from a source image considered as a reference image. Reconstructing the missing color of a grayscale pixel is viewed here as the problem of automatically selecting the best color among a set of color candidates while simultaneously ensuring the local spatial coherency of the reconstructed color information. To solve this problem, we propose a variational approach in which a specific energy is designed to model the color selection and the spatial constraint problems simultaneously. The contributions of this paper are twofold: first, we introduce a variational formulation modeling the color selection problem under spatial constraints and propose a minimization scheme that computes a local minimum of the defined non-convex energy. Second, we combine different patch-based features and distances in order to construct a consistent set of possible color candidates. This set is used as input data, and our energy minimization automatically selects the best color to transfer for each pixel of the grayscale image. Finally, experiments illustrate the potential of our simple methodology and show that our results are very competitive with respect to state-of-the-art methods.
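As an illustration of the candidate-construction step described above, the following is a minimal, hypothetical sketch of extracting candidate chrominances from a reference image by comparing simple luminance-patch statistics. The descriptors (patch mean and standard deviation), the brute-force search, and all names are assumptions for clarity, not the authors' pipeline.

    # Illustrative sketch (not the authors' exact method): build a set of color
    # candidates for each grayscale pixel by comparing simple patch descriptors
    # against patches of the reference luminance image. Images are float arrays.
    import numpy as np

    def patch_descriptors(lum, half=3):
        """Mean/std descriptor for every (2*half+1)^2 patch of a luminance image."""
        h, w = lum.shape
        pad = np.pad(lum, half, mode='reflect')
        desc = np.zeros((h, w, 2))
        for y in range(h):
            for x in range(w):
                p = pad[y:y + 2 * half + 1, x:x + 2 * half + 1]
                desc[y, x] = (p.mean(), p.std())
        return desc

    def candidate_colors(target_lum, ref_lum, ref_chroma, n_candidates=8):
        """For each target pixel, keep the chrominances of the n_candidates
        reference pixels whose patch descriptors are closest (brute force)."""
        dt = patch_descriptors(target_lum).reshape(-1, 2)
        dr = patch_descriptors(ref_lum).reshape(-1, 2)
        cr = ref_chroma.reshape(-1, ref_chroma.shape[-1])
        out = np.zeros((dt.shape[0], n_candidates, cr.shape[-1]))
        for i, d in enumerate(dt):
            dist = np.sum((dr - d) ** 2, axis=1)
            out[i] = cr[np.argsort(dist)[:n_candidates]]
        return out.reshape(*target_lum.shape, n_candidates, cr.shape[-1])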
The human cerebellum is involved in language, motor tasks, and cognitive processes such as attention and emotional processing. Therefore, an automatic and accurate segmentation method is highly desirable to measure and understand the cerebellum's role in normal and pathological brain development. In this work, we propose a patch-based multi-atlas segmentation tool called CERES (CEREbellum Segmentation) that is able to automatically parcellate the cerebellum lobules. The proposed method works with standard-resolution T1-weighted magnetic resonance images and uses the Optimized PatchMatch algorithm to speed up the patch matching process. The proposed method was compared with recent state-of-the-art methods, showing competitive results in both accuracy (average Dice of 0.7729) and execution time (around 5 minutes).
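For reference, the Dice overlap reported above is the standard measure 2|A ∩ B| / (|A| + |B|). Below is a generic sketch of its computation for a single binary label; the mask names are placeholders, and this is not code from CERES.

    # Minimal sketch of the Dice overlap measure used to report accuracy.
    import numpy as np

    def dice(seg, ref):
        """Dice coefficient between two binary masks: 2|A n B| / (|A| + |B|)."""
        seg, ref = seg.astype(bool), ref.astype(bool)
        denom = seg.sum() + ref.sum()
        return 2.0 * np.logical_and(seg, ref).sum() / denom if denom else 1.0

    # Example: two overlapping 4x4 square masks.
    a = np.zeros((10, 10), bool); a[2:6, 2:6] = True
    b = np.zeros((10, 10), bool); b[3:7, 3:7] = True
    print(dice(a, b))  # 0.5625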
This paper provides a new method to colorize gray-scale images. While the computation of the luminance channel is directly performed by a linear transformation, the colorization process is an ill-posed problem that requires some priors. In the literature, two classes of approach exist. The first class includes manual methods that require the user to manually add colors to the image to colorize. The second class includes exemplar-based approaches where a color image with a similar semantic content is provided as input to the method. These two types of priors have their own advantages and drawbacks. In this paper, a new variational framework for exemplar-based colorization is proposed. A nonlocal approach is used to find relevant colors in the source image in order to suggest colors for the gray-scale image. The spatial coherency of the result, as well as the final color selection, is provided by a nonconvex variational framework based on total variation. An efficient primal-dual algorithm is provided, and a proof of its convergence is proposed. In this work, we also extend the proposed exemplar-based approach to combine both exemplar-based and manual methods. This provides a single framework that unifies the advantages of both approaches. Finally, experiments and comparisons with state-of-the-art methods illustrate the efficiency of our proposal.

1. Introduction. The colorization of a gray-scale image consists of adding color information to it. It is useful in the entertainment industry to make old productions more attractive. The reverse operation is based on perceptual assumptions and is today an active research area [28], [13], [37]. Colorization can also be used to add information in order to help further analysis of the image by a user (e.g., sensor fusion [43]). It can also be used for art restoration; see, e.g., [17] or [41]. It is an old subject that began with the ability of screens and devices to display colors. A seminal approach consists in mapping each level of gray into a color space [18]. Nevertheless, all colors cannot be recovered without an additional prior. In the existing approaches, priors can be added in two ways: with a direct addition of color by the user on the image to colorize (manual methods), or with a color image provided as an example (exemplar-based methods).
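To make the spatial-coherency idea concrete, here is a minimal sketch of a TV-regularized primal-dual smoothing step applied to a chrominance channel. It implements the classical Chambolle-Pock iteration for the convex ROF model, as a simplified stand-in; the paper's actual functional is nonconvex and also handles candidate selection, and the parameter names here are assumptions.

    # Toy illustration (not the paper's exact functional): Chambolle-Pock
    # primal-dual iteration minimizing TV(u) + lam/2 * ||u - f||^2 on one channel.
    import numpy as np

    def grad(u):
        gx = np.zeros_like(u); gy = np.zeros_like(u)
        gx[:, :-1] = u[:, 1:] - u[:, :-1]
        gy[:-1, :] = u[1:, :] - u[:-1, :]
        return gx, gy

    def div(px, py):
        d = np.zeros_like(px)
        d[:, 1:] += px[:, 1:] - px[:, :-1]; d[:, 0] += px[:, 0]
        d[1:, :] += py[1:, :] - py[:-1, :]; d[0, :] += py[0, :]
        return d

    def tv_primal_dual(f, lam=10.0, n_iter=200, tau=0.25, sigma=0.25):
        """Smooth a chrominance channel f while preserving edges (ROF model)."""
        u = f.copy(); u_bar = f.copy()
        px = np.zeros_like(f); py = np.zeros_like(f)
        for _ in range(n_iter):
            # Dual ascent on p, followed by pointwise projection onto |p| <= 1.
            gx, gy = grad(u_bar)
            px, py = px + sigma * gx, py + sigma * gy
            norm = np.maximum(1.0, np.sqrt(px ** 2 + py ** 2))
            px, py = px / norm, py / norm
            # Primal descent on u (closed-form proximal step) and extrapolation.
            u_old = u
            u = (u + tau * div(px, py) + tau * lam * f) / (1.0 + tau * lam)
            u_bar = 2 * u - u_old
        return u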
Automatic segmentation methods are important tools for quantitative analysis of Magnetic Resonance Images (MRI). Recently, patch-based label fusion approaches have demonstrated state-of-the-art segmentation accuracy. In this paper, we introduce a new patch-based label fusion framework to perform segmentation of anatomical structures. The proposed approach uses an Optimized PAtchMatch Label fusion (OPAL) strategy that drastically reduces the computation time required for the search of similar patches. The reduced computation time of OPAL opens the way for new strategies and facilitates processing of large databases. In this paper, we investigate the new perspectives offered by OPAL by introducing a new multi-scale and multi-feature framework. Our validation on hippocampus segmentation uses two datasets of young and elderly subjects.
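Since the abstract centers on accelerating the search for similar patches, the following is a schematic, single-scale PatchMatch-style approximate nearest-neighbor field between two 2D images. Only the generic structure (random initialization, propagation, random search) is shown; all names, parameters, and the 2D setting are illustrative assumptions, not the OPAL implementation.

    # Schematic PatchMatch-style correspondence search (images as float arrays).
    import numpy as np

    def patch_dist(a, b, ya, xa, yb, xb, half):
        pa = a[ya - half:ya + half + 1, xa - half:xa + half + 1]
        pb = b[yb - half:yb + half + 1, xb - half:xb + half + 1]
        return float(np.sum((pa - pb) ** 2))

    def patchmatch(a, b, half=3, n_iter=5, rng=None):
        rng = np.random.default_rng(0) if rng is None else rng
        ha, wa = a.shape; hb, wb = b.shape
        ys = range(half, ha - half); xs = range(half, wa - half)
        # Random initialization of the correspondence field and its cost.
        nnf = np.stack([rng.integers(half, hb - half, (ha, wa)),
                        rng.integers(half, wb - half, (ha, wa))], axis=-1)
        cost = np.full((ha, wa), np.inf)
        for y in ys:
            for x in xs:
                cost[y, x] = patch_dist(a, b, y, x, nnf[y, x, 0], nnf[y, x, 1], half)
        for it in range(n_iter):
            step = 1 if it % 2 == 0 else -1       # alternate scan direction
            y_order = ys if step == 1 else ys[::-1]
            x_order = xs if step == 1 else xs[::-1]
            for y in y_order:
                for x in x_order:
                    # Propagation: adopt the shifted match of an already-visited neighbor.
                    for dy, dx in ((-step, 0), (0, -step)):
                        yn, xn = y + dy, x + dx
                        if half <= yn < ha - half and half <= xn < wa - half:
                            yb = int(np.clip(nnf[yn, xn, 0] - dy, half, hb - half - 1))
                            xb = int(np.clip(nnf[yn, xn, 1] - dx, half, wb - half - 1))
                            d = patch_dist(a, b, y, x, yb, xb, half)
                            if d < cost[y, x]:
                                nnf[y, x] = (yb, xb); cost[y, x] = d
                    # Random search: sample around the current match in a shrinking window.
                    radius = max(hb, wb) // 2
                    while radius >= 1:
                        yb = int(np.clip(nnf[y, x, 0] + rng.integers(-radius, radius + 1),
                                         half, hb - half - 1))
                        xb = int(np.clip(nnf[y, x, 1] + rng.integers(-radius, radius + 1),
                                         half, wb - half - 1))
                        d = patch_dist(a, b, y, x, yb, xb, half)
                        if d < cost[y, x]:
                            nnf[y, x] = (yb, xb); cost[y, x] = d
                        radius //= 2
        return nnf, cost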
Automatic segmentation methods are important tools for quantitative analysis of Magnetic Resonance Images. Recently, patch-based label fusion approaches demonstrated state-of-the-art segmentation accuracy. In this paper, we introduce a new patch-based method using the PatchMatch algorithm to perform segmentation of anatomical structures. Based on an Optimized PAtchMatch Label fusion (OPAL) strategy, the proposed method provides competitive segmentation accuracy in near real time. During our validation on hippocampus segmentation of 80 healthy subjects, OPAL was compared to several state-of-the-art methods. Results show that OPAL obtained the highest median Dice coefficient (89.3%) in less than 1 second per subject. These results highlight the excellent performance of OPAL in terms of computation time and segmentation accuracy compared to recently published methods.
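A hedged sketch of the label fusion idea behind such patch-based methods: matched template patches vote for their labels with weights that decay with patch dissimilarity. The exponential weighting, the bandwidth parameter h, and the function names are generic illustrations, not the OPAL code.

    # Generic non-local patch-based label fusion for one target voxel.
    import numpy as np

    def fuse_labels(distances, labels, h=1.0):
        """distances: (n_matches,) patch SSD to each matched template patch;
        labels: (n_matches,) label of each matched patch centre.
        Returns the label with the largest total similarity weight."""
        w = np.exp(-distances / (h ** 2 + 1e-12))
        votes = {}
        for wi, li in zip(w, labels):
            votes[li] = votes.get(li, 0.0) + wi
        return max(votes, key=votes.get)

    # Example: three good matches vote for label 1, one poor match for label 0.
    print(fuse_labels(np.array([0.2, 0.3, 0.25, 5.0]), np.array([1, 1, 1, 0])))  # 1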
Superpixels have become very popular in many computer vision applications. Nevertheless, they remain under-exploited, since the superpixel decomposition may produce irregular and unstable segmentation results due to its dependency on the image content. In this paper, we first introduce a novel structure, a superpixel-based patch called a SuperPatch. The proposed structure, based on a superpixel neighborhood, leads to a robust descriptor, since spatial information is naturally included. The generalization of the PatchMatch method to SuperPatches, named SuperPatchMatch, is then introduced. Finally, we propose a framework to perform fast segmentation and labeling from an image database, and demonstrate the potential of our approach by outperforming state-of-the-art methods, in terms of both computational cost and accuracy, on face labeling and medical image segmentation.
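An illustrative sketch of how a superpatch (a superpixel plus its neighbors) might be compared: each element carries a mean feature vector and a spatial offset from the central superpixel, and feature distances are weighted by the spatial proximity of offsets. This is a schematic reading of the idea under assumed inputs, not the paper's exact SuperPatchMatch distance.

    # Hypothetical superpatch comparison: feature distances weighted by offset proximity.
    import numpy as np

    def superpatch_distance(feats_a, offs_a, feats_b, offs_b, sigma=20.0):
        """feats_*: (n_*, d) mean features of each superpixel in the superpatch;
        offs_*: (n_*, 2) barycenter offsets w.r.t. the central superpixel."""
        d_total, w_total = 0.0, 0.0
        for fa, oa in zip(feats_a, offs_a):
            # Spatial weight of every element of B relative to this element of A.
            w = np.exp(-np.sum((offs_b - oa) ** 2, axis=1) / (2 * sigma ** 2))
            d = np.sum((feats_b - fa) ** 2, axis=1)
            d_total += np.sum(w * d)
            w_total += np.sum(w)
        return d_total / max(w_total, 1e-12)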