Color transformation is one of the most effective ways to improve the mood of an image, because color strongly influences mood. However, conventional color transformation tools trade off the quality of the resulting image against the amount of manual operation required. To achieve a more detailed and natural result with less labor, we previously proposed a method that performs example-based color stylization of images using perceptual color categories. In this paper, we extend this method to make the algorithm more robust and to stylize the colors of video frame sequences. We present a variety of results, arguing that these images and videos convey a different, but coherent, mood.
We describe a new computational approach to stylizing the colors of an image by using a reference image. During processing, we take the characteristics of human color perception into account to generate more appealing results. Our system starts by classifying each pixel value into one of the basic color categories, derived from our psychophysical experiments. The basic color categories are perceptual categories that are universal to everyone, regardless of nationality or cultural background. These categories are used to restrict the color transformations so that unnatural results are avoided. Our system then renders a new image by transferring colors from the reference image to the input image, based on these categorizations. To avoid artifacts caused by the explicit clustering, our system applies a fuzzy categorization when pseudo-contours appear in the resulting image. We present a variety of results and show that our method performs a large, yet natural, color transformation without any sense of incongruity, and that the resulting images automatically capture the characteristics of the colors used in the reference image.
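The pipeline described above — classify each pixel into a basic color category, then transfer colors per category from the reference image — can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the category centroids here are placeholder RGB values (the paper derives its categories from psychophysical experiments, and the real transfer operates on perceptual color spaces, not a simple per-category mean shift).

```python
import numpy as np

# Placeholder centroids for a few basic color categories (RGB in [0, 1]).
# The actual categories in the paper come from psychophysical experiments.
CATEGORY_CENTROIDS = np.array([
    [0.9, 0.1, 0.1],  # red
    [0.1, 0.6, 0.1],  # green
    [0.1, 0.2, 0.8],  # blue
    [0.9, 0.8, 0.1],  # yellow
    [0.5, 0.5, 0.5],  # gray
])

def categorize(pixels):
    """Assign each pixel (N x 3) to the index of its nearest category centroid."""
    dists = np.linalg.norm(
        pixels[:, None, :] - CATEGORY_CENTROIDS[None, :, :], axis=2
    )
    return dists.argmin(axis=1)

def transfer_colors(input_img, reference_img):
    """Shift each input pixel toward the reference's mean color for the same
    category, so color transfer stays within perceptual categories."""
    h, w, _ = input_img.shape
    src = input_img.reshape(-1, 3).astype(float)
    ref = reference_img.reshape(-1, 3).astype(float)
    src_cat, ref_cat = categorize(src), categorize(ref)
    out = src.copy()
    for c in range(len(CATEGORY_CENTROIDS)):
        src_mask, ref_mask = src_cat == c, ref_cat == c
        if src_mask.any() and ref_mask.any():
            # Per-category mean matching: a simple stand-in for the
            # paper's category-restricted color transfer.
            out[src_mask] += ref[ref_mask].mean(axis=0) - src[src_mask].mean(axis=0)
    return np.clip(out, 0.0, 1.0).reshape(h, w, 3)
```

Because each pixel only moves within its own category, a red region in the input is recolored toward the reference's reds rather than toward an unrelated hue, which is the restriction the abstract describes for avoiding unnatural results.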
Figure 1: Demonstration of our automatic color transformation. The left image shows an input photograph and the middle image shows a reference image. Our algorithm automatically learns the tendencies of color use from the reference image and transfers them to the input image, producing the result shown on the right.