For a long time, line drawings have been a part of artistic expression (for example, any pencil or pen-and-ink drawing), scientific illustration (medical or technical), and entertainment graphics (such as comics). Hence, computer graphics researchers have extensively studied the automatic generation of such lines. In particular, the area of nonphotorealistic rendering has focused on two main directions of research in this respect: the generation of hatching that conveys illumination as well as texture in an image, and the computation of outlines and silhouettes.

Silhouettes play an important role in shape recognition because they provide one of the main cues for figure-to-ground distinction. However, since silhouettes are view dependent, they must be determined for every frame of an animation, and finding an efficient way to accomplish this is nontrivial. Indeed, a variety of different algorithms exist that compute silhouettes for geometric objects. This article provides a guideline for developers who need to choose among these algorithms for their applications.

Here, we restrict ourselves to discussing only those algorithms that apply to polygonal models, because these are the most commonly used object representations in modern computer graphics. (For an algorithm that computes silhouettes of free-form surfaces see, for example, Elber and Cohen.1) All algorithms discussed here take a polygonal mesh as input and compute the visible part of the silhouette as output; some algorithms, however, compute the silhouette only, without additional visibility culling. The silhouette's representation varies with the algorithm class: the silhouette might take the form of a pixel matrix or a set of analytic stroke descriptions.
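The basic test that underlies many of these algorithms is worth stating concretely: for a polygonal mesh, an edge belongs to the silhouette if one of its two adjacent faces points toward the viewer and the other points away. The following brute-force sketch illustrates this test; it is not any particular algorithm surveyed here, and all function names are illustrative:

```python
import numpy as np
from collections import defaultdict

def face_normal(verts, face):
    # Unnormalized normal of a triangle, from its winding order.
    a, b, c = (verts[i] for i in face)
    return np.cross(b - a, c - a)

def find_silhouette_edges(verts, faces, eye):
    """Return edges whose two adjacent triangles face opposite ways
    with respect to the eye point (a perspective viewer)."""
    verts = np.asarray(verts, dtype=float)
    # Adjacency: each undirected edge -> indices of faces sharing it.
    edge_faces = defaultdict(list)
    for fi, face in enumerate(faces):
        for k in range(3):
            e = tuple(sorted((face[k], face[(k + 1) % 3])))
            edge_faces[e].append(fi)
    silhouette = []
    for edge, adj in edge_faces.items():
        if len(adj) != 2:
            continue  # boundary or non-manifold edge
        sides = []
        for fi in adj:
            n = face_normal(verts, faces[fi])
            centroid = verts[list(faces[fi])].mean(axis=0)
            # Positive: face oriented toward the eye; negative: away.
            sides.append(np.dot(n, eye - centroid))
        if sides[0] * sides[1] < 0:  # one front-facing, one back-facing
            silhouette.append(edge)
    return silhouette
```

This O(number of edges) traversal yields the silhouette as a set of mesh edges, i.e., an analytic stroke description; determining which parts of those edges are actually visible is a separate step.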
The term stroke-based rendering collectively describes techniques in which images are generated from elements that are usually larger than a pixel. These techniques lend themselves well to rendering artistic styles such as stippling and hatching. This paper presents a novel approach to stroke-based rendering that exploits multi-agent systems.
RenderBots are individual agents, each of which in general represents one stroke. Together they form a multi-agent system and undergo a simulation to distribute themselves in the environment, which consists of a source image and possibly additional G-buffers. The final image is created when the simulation is finished by having each RenderBot execute its painting function. RenderBot classes differ in their physical behavior as well as in their way of painting, so that different styles can be created in a very flexible way.
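The simulate-then-paint structure described above can be made concrete with a toy example. The sketch below is not the paper's implementation: it shows one hypothetical bot class (a stipple dot) with a crude nearest-neighbor repulsion as its "physical behavior" and a darkness threshold as its "painting function"; all class and method names are invented for illustration:

```python
import math
import random

class StippleBot:
    """One agent == one stroke (here, a single stipple dot)."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def step(self, bots, width, height):
        # Physical behavior: push away from the nearest other bot,
        # clamped to the environment's bounds.
        nearest = min((b for b in bots if b is not self),
                      key=lambda b: (b.x - self.x) ** 2 + (b.y - self.y) ** 2)
        dx, dy = self.x - nearest.x, self.y - nearest.y
        d = math.hypot(dx, dy) or 1.0
        self.x = min(width - 1, max(0.0, self.x + dx / d))
        self.y = min(height - 1, max(0.0, self.y + dy / d))

    def paint(self, canvas, image):
        # Painting function: leave a mark where the source image
        # is dark enough to deserve a stipple (0 = black, 1 = white).
        ix, iy = int(self.x), int(self.y)
        if image[iy][ix] < 0.5:
            canvas[iy][ix] = '*'

def render(image, n_bots=20, steps=30, seed=0):
    h, w = len(image), len(image[0])
    rng = random.Random(seed)
    bots = [StippleBot(rng.uniform(0, w - 1), rng.uniform(0, h - 1))
            for _ in range(n_bots)]
    for _ in range(steps):          # the distribution simulation
        for b in bots:
            b.step(bots, w, h)
    canvas = [[' '] * w for _ in range(h)]
    for b in bots:                  # the final painting pass
        b.paint(canvas, image)
    return canvas
```

Swapping in a different bot class, with other movement rules or a painting function that draws a short line instead of a dot, changes the resulting style without touching the simulation loop.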
Hatching lines are often used in line illustrations to convey the tone and texture of a surface. In this paper we present methods to generate hatching lines from polygonal meshes and render them in high quality, either at interactive rates for on-screen display or for reproduction in print. Our approach is based on local curvature information that is integrated to form streamlines on the surface of the mesh, using a new algorithm that provides an even distribution of these lines. Special processing of the streamlines ensures high-quality line rendering for both intended output media. While the streamlines are generated in a preprocessing stage, the hatching lines are rendered either for vector-based printer output or for on-screen display, the latter allowing interaction in terms of changing the view parameters or manipulating the entire line-shading model at run time using a virtual machine.
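The core integration step behind such streamlines can be sketched in two dimensions. The following minimal example assumes the curvature directions have already been sampled into a direction field `field(x, y) -> (dx, dy)` and traces one streamline with fixed-step Euler integration; seeding, spacing control, and the actual on-mesh case are omitted, and the function name is illustrative:

```python
import math

def trace_streamline(field, seed, step=0.1, max_steps=200,
                     bounds=(0.0, 0.0, 1.0, 1.0)):
    """Follow the direction field from `seed` until leaving `bounds`,
    hitting a singularity, or exhausting `max_steps`."""
    x0, y0, x1, y1 = bounds
    x, y = seed
    pts = [(x, y)]
    for _ in range(max_steps):
        dx, dy = field(x, y)
        n = math.hypot(dx, dy)
        if n < 1e-9:
            break  # singularity: no well-defined direction here
        # Normalized Euler step along the field direction.
        x += step * dx / n
        y += step * dy / n
        if not (x0 <= x <= x1 and y0 <= y <= y1):
            break  # left the domain
        pts.append((x, y))
    return pts
```

The returned polyline is the raw material that later processing (resampling, smoothing, style application) would turn into a finished hatching stroke.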