This paper presents a quick and simple method for converting complex images and video to perceptually accurate greyscale versions. We use a two-step approach: first, we globally assign grey values and determine colour ordering; second, we locally enhance the greyscale to reproduce the original contrast. Our global mapping is image-independent and incorporates the Helmholtz-Kohlrausch colour appearance effect to predict differences between isoluminant colours. Our multiscale local contrast enhancement reintroduces lost discontinuities only in regions where the greyscale insufficiently represents the original chromatic contrast. All operations are constrained to preserve the overall image appearance, lightness range and differences, colour ordering, and spatial details, resulting in perceptually accurate achromatic reproductions of the colour original.
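As a rough illustration of the global step, the sketch below (not the paper's exact model) converts sRGB to CIE L*a*b* and adds a chroma-proportional boost to lightness to mimic the Helmholtz-Kohlrausch effect; the coefficient `k` is a hypothetical value chosen for illustration.

```python
import math

def srgb_to_lab(r, g, b):
    """Convert sRGB values in [0, 1] to CIE L*a*b* (D65 white point)."""
    def lin(c):  # undo the sRGB gamma curve
        return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = lin(r), lin(g), lin(b)
    # linear sRGB -> CIE XYZ (D65)
    x = 0.4124 * r + 0.3576 * g + 0.1805 * b
    y = 0.2126 * r + 0.7152 * g + 0.0722 * b
    z = 0.0193 * r + 0.1192 * g + 0.9505 * b
    # normalise by the D65 white point
    x, y, z = x / 0.95047, y / 1.0, z / 1.08883
    def f(t):
        return t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
    fx, fy, fz = f(x), f(y), f(z)
    return 116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)

def hk_grey(r, g, b, k=0.15):
    """Grey value with a simple Helmholtz-Kohlrausch correction:
    chromatic colours appear lighter than L* alone predicts, so add a
    chroma-proportional term (k is a hypothetical coefficient)."""
    L, a, b_lab = srgb_to_lab(r, g, b)
    chroma = math.hypot(a, b_lab)
    return min(100.0, L + k * chroma)
```

Neutral greys have near-zero chroma and are left essentially unchanged, while a saturated red of the same L* maps to a lighter grey, preserving the perceived difference between isoluminant colours.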
This paper presents an interactive watercolor rendering technique that recreates the specific visual effects of lavis watercolor. Our method lets the user easily process images and 3D models, and proceeds in two steps: an abstraction step that recreates the uniform color regions of watercolor, and an effect step that filters the resulting abstracted image to obtain watercolor-like images. For 3D environments, we also propose two methods to produce temporally coherent animations that maintain a uniform pigment distribution while avoiding the shower-door effect.
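The two-step pipeline can be sketched as follows. This is an illustrative simplification, not the paper's actual filters: the abstraction step is stood in for by simple per-channel quantization, and the effect step by modulating colors with a pigment-density noise texture (Perlin-like in practice; here any values in [0, 1]).

```python
def abstract_image(pixels, levels=4):
    """Illustrative abstraction step (hypothetical, not the paper's exact
    filter): quantize each channel into a few levels to produce the flat,
    uniform color regions typical of watercolor."""
    step = 1.0 / levels
    return [[tuple(min(int(c / step), levels - 1) * step + step / 2
                   for c in px) for px in row] for row in pixels]

def watercolor_effect(abstracted, noise, strength=0.3):
    """Illustrative effect step: modulate the abstracted colors by a
    per-pixel pigment-density value in [0, 1], darkening where pigment
    accumulates and lightening where it thins."""
    out = []
    for row_px, row_n in zip(abstracted, noise):
        out.append([tuple(c * (1 - strength * (n - 0.5)) for c in px)
                    for px, n in zip(row_px, row_n)])
    return out
```

For animated 3D scenes, the key difficulty the paper addresses is keeping such a noise texture attached to the scene (uniform pigment distribution) rather than to the screen, which would produce the shower-door effect.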
Figure 1: Given a reference arrangement composed of vector elements (top left), our analysis scheme divides the raw element set into appearance categories (bottom left). Spatial interactions based on appearance can be learned by statistical modeling and exploited to yield visually similar arrangements (right).

Abstract: We present a technique for the analysis and re-synthesis of 2D arrangements of stroke-based vector elements. Capturing an artist's style solely through posterior analysis of a finished drawing poses a formidable challenge; such by-example techniques could become one of the most intuitive tools for reducing the effort of the creation process. Here, we tackle this issue from a statistical point of view, taking specific care to account for information usually overlooked in previous research: the appearance of the elements themselves. We describe elements, composed of curve-like strokes, by a concise set of perceptually relevant features. After detecting dominant appearance traits, we generate new arrangements that respect the captured appearance-related spatial statistics using multitype point processes. Our method faithfully reproduces visually similar arrangements and relies on neither heuristics nor post-processing to ensure statistical correctness.
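To make the multitype point-process idea concrete, here is a heavily simplified sketch (an assumption for illustration, not the paper's statistical model): elements of several appearance categories are placed by dart throwing, rejecting candidates that violate learned minimum pairwise distances between categories. The `stats` dictionary and `synthesize_arrangement` function are hypothetical names.

```python
import math
import random

def synthesize_arrangement(stats, width, height, attempts=2000, seed=0):
    """Hypothetical multitype sampler: `stats[(cat_a, cat_b)]` holds the
    minimum allowed distance between elements of categories cat_a and
    cat_b (keys stored as sorted pairs). Candidates violating any
    learned distance are rejected."""
    rng = random.Random(seed)
    cats = sorted({c for pair in stats for c in pair})
    placed = []  # list of (x, y, category)
    for _ in range(attempts):
        x, y = rng.uniform(0, width), rng.uniform(0, height)
        cat = rng.choice(cats)
        ok = all(math.hypot(x - px, y - py)
                 >= stats[(min(cat, pc), max(cat, pc))]
                 for px, py, pc in placed)
        if ok:
            placed.append((x, y, cat))
    return placed
```

A real point-process model would also capture attraction, alignment, and higher-order statistics per appearance category; this sketch only enforces hard-core repulsion, the simplest pairwise interaction.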
Visualization of very complex scenes can be significantly accelerated using occlusion culling. In this paper we present a visibility preprocessing method which efficiently computes potentially visible geometry for volumetric viewing cells. We introduce novel extended projection operators, which permit efficient and conservative occlusion culling with respect to all viewpoints within a cell, and take into account the combined occlusion effect of multiple occluders. We use extended projections of occluders onto a set of projection planes to create extended occlusion maps; we show how to efficiently test occludees against these maps to determine occlusion with respect to the entire cell. We also present an improved projection operator for certain specific but important configurations. An important advantage of our approach is that we can re-project extended projections onto a series of projection planes (via an occlusion sweep) and accumulate occlusion information from multiple blockers. This allows the creation of effective occlusion maps for previously hard-to-treat scenes such as the leaves of trees in a forest. Graphics hardware is used to accelerate both the extended projection and re-projection operations. We present a complete implementation demonstrating significant speedup with respect to view-frustum culling alone, without the computational overhead of on-line occlusion culling.
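The core conservativeness argument can be illustrated in 2D: project each object onto a plane from every cell viewpoint, then take the *intersection* of the projections for an occluder (it blocks only what it blocks from all viewpoints) and the *union* for an occludee (it may be seen from any viewpoint). The sketch below is an illustrative reduction to intervals on a line, not the paper's implementation, and it ignores the depth ordering that the full method also checks.

```python
def project_interval(viewpoint, segment, plane_z):
    """Project a 2D segment (x0, x1, z) onto the line z = plane_z as
    seen from viewpoint (vx, vz); returns an x-interval."""
    vx, vz = viewpoint
    x0, x1, z = segment
    t = (plane_z - vz) / (z - vz)
    a, b = vx + t * (x0 - vx), vx + t * (x1 - vx)
    return (min(a, b), max(a, b))

def extended_projection(viewpoints, segment, plane_z, occluder):
    """Conservative extended projection: intersect the per-viewpoint
    projections for an occluder, union them for an occludee."""
    ivs = [project_interval(v, segment, plane_z) for v in viewpoints]
    if occluder:
        lo, hi = max(i[0] for i in ivs), min(i[1] for i in ivs)
    else:
        lo, hi = min(i[0] for i in ivs), max(i[1] for i in ivs)
    return (lo, hi) if lo <= hi else None

def is_occluded(viewpoints, occluder_seg, occludee_seg, plane_z):
    """The occludee is hidden from the whole cell if its extended
    projection lies inside the occluder's (depth test omitted here)."""
    occ = extended_projection(viewpoints, occluder_seg, plane_z, True)
    ee = extended_projection(viewpoints, occludee_seg, plane_z, False)
    return (occ is not None and ee is not None
            and occ[0] <= ee[0] and ee[1] <= occ[1])
```

Because the occluder's projection can only shrink and the occludee's can only grow as viewpoints are added, a positive test is valid for every point of the cell, which is exactly what makes the culling conservative.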
Figure 1: Diffusion Curves allow us to draw vectorial images with a rich set of color gradients (left). They are based on a diffusion process that propagates color information from curves in the scene. While the colors can be chosen arbitrarily, the diffusion itself is not controllable by the user. Our work introduces ways to alter diffusion behavior, allowing us to reduce the number of color definitions for an equivalent output (middle), to control the diffusion strength of certain colors (right, floor), or even to influence diffusion directions (right, cushion). (Panels: scene, diffusion strength, diffusion control.)
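The underlying diffusion can be illustrated with a plain Jacobi solver (a minimal sketch; real systems use GPU multigrid solvers): color values fixed on the curves are propagated by repeatedly averaging each free pixel with its neighbors until the image converges to a smooth gradient.

```python
def diffuse_colors(grid, constraints, iters=500):
    """Minimal Jacobi diffusion sketch. `grid` is a 2D list of scalar
    color values; `constraints` maps (i, j) pixel coordinates on the
    curves to fixed color values that diffuse outward."""
    h, w = len(grid), len(grid[0])
    cur = [row[:] for row in grid]
    for (i, j), v in constraints.items():
        cur[i][j] = v  # pin the curve pixels to their colors
    for _ in range(iters):
        nxt = [row[:] for row in cur]
        for i in range(h):
            for j in range(w):
                if (i, j) in constraints:
                    continue  # constrained pixels never change
                nbrs = [cur[i2][j2]
                        for i2, j2 in ((i - 1, j), (i + 1, j),
                                       (i, j - 1), (i, j + 1))
                        if 0 <= i2 < h and 0 <= j2 < w]
                nxt[i][j] = sum(nbrs) / len(nbrs)
        cur = nxt
    return cur
```

The controls described above amount to modifying this process: per-pixel diffusion-strength weights scale how much each neighbor contributes, and anisotropic weights bias the averaging toward chosen directions.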