Good character animation requires convincing skin deformations including subtleties and details like muscle bulges. Such effects are typically created in commercial animation packages which provide very general and powerful tools. While these systems are convenient and flexible for artists, the generality often leads to characters that are slow to compute or that require a substantial amount of memory and thus cannot be used in interactive systems. Instead, interactive systems restrict artists to a specific character deformation model which is fast and memory efficient but is notoriously difficult to author and can suffer from many deformation artifacts. This paper presents an automated framework that allows character artists to use the full complement of tools in high-end systems to create characters for interactive systems. Our method starts with an arbitrarily rigged character in an animation system. A set of examples is exported, consisting of skeleton configurations paired with the deformed geometry as static meshes. Using these examples, we fit the parameters of a deformation model that best approximates the original data yet remains fast to compute and compact in memory.
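One simple instantiation of the fitting step described above, assuming the target deformation model is linear blend skinning, is a per-vertex least-squares solve for the bone weights that best reproduce the exported example meshes. The function and variable names below are illustrative, not taken from the paper:

```python
import numpy as np

def fit_vertex_weights(rest_vertex, example_transforms, example_positions):
    """Fit skinning weights for one vertex from example poses.

    example_transforms: list over examples of lists of 4x4 bone matrices.
    example_positions:  deformed 3D position of the vertex in each example.
    Solves min_w || A w - b ||^2, where column j of A stacks T_j * v_rest
    over all examples and b stacks the observed deformed positions.
    """
    v = np.append(rest_vertex, 1.0)  # homogeneous coordinates
    A = np.vstack([np.column_stack([(T @ v)[:3] for T in Ts])
                   for Ts in example_transforms])
    b = np.concatenate([np.asarray(p, dtype=float) for p in example_positions])
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    return w / w.sum()  # normalize to a partition of unity

# Recover known weights from a single synthetic example pose.
T0 = np.eye(4)
T1 = np.eye(4); T1[0, 3] = 1.0   # translate bone 1 by +x
v_rest = np.array([1.0, 2.0, 0.5])
p = 0.7 * v_rest + 0.3 * (v_rest + np.array([1.0, 0.0, 0.0]))
print(fit_vertex_weights(v_rest, [[T0, T1]], [p]))
```

In practice the paper's framework fits a richer model than this sketch, but the principle is the same: the exported skeleton/mesh pairs overconstrain the model parameters, and a regression picks the best compact approximation.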
We describe a technique for generating cartoon style animations of smoke. Our method takes the output of a physically-based simulator and uses it to drive particles that are rendered using a variant of the depth differences technique (originally used for rendering trees). Specific issues we address include the placement and evolution of primitives in the flow and the maintenance of temporal coherence. The results are visually simple, flicker-free animations that convey the turbulent, dynamic nature of the gas with simple outlines.
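The coupling between the simulator and the rendering primitives can be sketched in miniature: particles are advected through the simulator's velocity field, here a toy callable plume rather than real simulation output. Primitive placement and the depth-differences rendering itself are beyond this sketch:

```python
import numpy as np

def advect_particles(positions, velocity_field, dt):
    """Advance particle primitives through the flow with one
    forward-Euler step; velocity_field stands in for sampled
    physically-based simulation output."""
    return positions + dt * velocity_field(positions)

# A toy rising-plume field: upward flow plus a slight horizontal swirl.
def plume(p):
    y = p[:, 1]
    return np.column_stack([0.2 * np.sin(y), np.ones_like(y)])

pts = np.array([[0.0, 0.0], [1.0, 0.0]])
for _ in range(10):
    pts = advect_particles(pts, plume, dt=0.1)
```

Because each particle follows the flow continuously from frame to frame, the primitives it carries move coherently in time, which is what suppresses flicker in the final animation.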
Geometry deformations for interactive animated characters are most commonly achieved using a skeleton-driven deformation technique called linear blend skinning. To deform a vertex, linear blend skinning computes a weighted average of that vertex rigidly transformed by each bone that influences it. Authoring a character for linear blend skinning involves explicitly setting the weights used to compute deformed vertex positions. This process is tedious, repetitive, and frustrating not only because the deformed vertex positions are not intuitively related to the vertex weights, but also because the range of possible deformations is unclear. In this paper, we present a method that lets users directly manipulate the deformed vertex positions in a linear blend skin. We compute the subspace of possible deformed vertex positions, display it for users, and let them place the vertex anywhere in this space. Our algorithm then computes the correct weights automatically. This method lets us provide a skin editing interface that gives users as much direct control as possible and makes explicit what deformations are possible.
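The weighted average that defines linear blend skinning can be written directly in code; the bones, weights, and vertex below are illustrative, not from the paper:

```python
import numpy as np

def linear_blend_skin(vertex, bone_transforms, weights):
    """Deform a vertex as the weighted average of that vertex
    rigidly transformed by each influencing bone."""
    v = np.append(vertex, 1.0)  # homogeneous coordinates
    blended = sum(w * (T @ v) for w, T in zip(weights, bone_transforms))
    return blended[:3]

# Two bones: identity and a 90-degree rotation about z, equally weighted.
T0 = np.eye(4)
T1 = np.array([[0.0, -1.0, 0.0, 0.0],
               [1.0,  0.0, 0.0, 0.0],
               [0.0,  0.0, 1.0, 0.0],
               [0.0,  0.0, 0.0, 1.0]])
v = np.array([1.0, 0.0, 0.0])
print(linear_blend_skin(v, [T0, T1], [0.5, 0.5]))  # [0.5, 0.5, 0.0]
```

Note that the blended point has length √0.5 rather than 1: averaging rigid transforms linearly shrinks the geometry, one of the classic skinning artifacts, and part of why the relationship between weights and deformed positions is unintuitive to author by hand.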
In this paper, we show how the visual style of many interactive 3D applications can be changed non-invasively to new and interesting styles. Our method lets a single stylized renderer be used with many applications. We implement this by intercepting calls to the OpenGL graphics library and modifying the drawing commands. Even though OpenGL receives only low-level information from an application, computation on this data, combined with assumptions about the application, gives us enough information to develop stylized renderers.
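The interception idea can be illustrated in miniature: wrap a drawing routine so a stylizer sees and rewrites every call before it reaches the underlying renderer. The actual system interposes on the OpenGL C API; the function names and the "outline" style below are hypothetical stand-ins:

```python
calls = []

def app_draw(primitive, vertices):
    """Stand-in for a low-level draw call issued by the application."""
    calls.append((primitive, vertices))

def intercept(draw_fn, stylize):
    """Wrap a draw call so every invocation is rewritten by a
    stylizer before reaching the underlying renderer."""
    def wrapped(primitive, vertices):
        primitive, vertices = stylize(primitive, vertices)
        return draw_fn(primitive, vertices)
    return wrapped

# A trivial stylizer: render every filled primitive as an outline.
def outline_style(primitive, vertices):
    return ("lines" if primitive == "triangles" else primitive, vertices)

draw = intercept(app_draw, outline_style)
draw("triangles", [(0, 0), (1, 0), (0, 1)])
```

The application is unaware of the wrapper, which is what makes the approach non-invasive: the same stylizer can sit in front of any program that issues the intercepted calls.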