Human motion is difficult to create and manipulate because of the high dimensionality and spatiotemporal nature of human motion data. Recently, the use of large collections of captured motion data has increased the realism of character animation. To make the synthesis and analysis of motion data tractable, we present a low-dimensional motion space in which high-dimensional human motion can be effectively visualized, synthesized, edited, parameterized, and interpolated in both the spatial and temporal domains. Our system allows users to create and edit the motion of animated characters in several ways: the user can sketch and edit a curve in the low-dimensional motion space, directly manipulate the character's pose in three-dimensional object space, or specify key poses to create in-between motions.

Introduction

Creating animated characters that move realistically is an important problem in computer graphics. One appealing approach is to collect a large amount of human motion data and analyze those data to construct a behavior model of animated characters. We expect this model not only to contain many primitive actions available to the characters but also to provide assorted variants of each primitive action in parametric form. We also expect the actions to be connected so that transitions from one action to another are possible. Constructing such a behavior model is quite difficult, especially if we want to make use of relatively unstructured motion data for behavior generation.

With a relatively small collection of motion data, it is possible to look through the entire motion set and manually construct a connected set of character behaviors. Through this manual process, one can gain an in-depth understanding of which behaviors are available in any given situation and how they are organized. However, this is unrealistic in practical applications because the input motion set must be large enough to accommodate a rich variety of natural human motion. Recently developed methods can analyze a large motion set automatically to identify a connected set of distinctive behaviors and even parameterize the variants of each behavior. The resulting data structure can be searched to create a sequence of motions that lets animated characters track a target or travel along a sketched path. However, this data structure may be too complex for animators to understand how behaviors are organized. From the animator's point of view, it is very important to have direct and immediate control over the character's motion. Animators want to be able to select a set of appropriate behaviors in a given situation and to produce a carefully crafted motion of the character interactively by precisely adjusting the parameters of the behaviors...
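The abstract above does not name its embedding technique, so as a concrete illustration, here is a minimal sketch of one common choice, PCA, used to project high-dimensional pose vectors into a 2D motion space where a motion clip becomes an editable curve. The array shapes and the random placeholder data are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical motion clip: 300 frames x 60 degrees of freedom
# (joint angles flattened per frame). Real data would come from
# motion capture; random data is a placeholder here.
rng = np.random.default_rng(0)
motion = rng.standard_normal((300, 60))

# Embed the high-dimensional poses into a 2D "motion space".
pca = PCA(n_components=2)
curve = pca.fit_transform(motion)        # (300, 2) low-dimensional curve

# A user edit: drag part of the curve in the 2D space...
curve_edited = curve.copy()
curve_edited[100:200, 0] += 0.5          # illustrative offset

# ...and map the edited curve back to full-body poses.
motion_edited = pca.inverse_transform(curve_edited)  # (300, 60)
```

Direct manipulation in object space would work the other way around: solve for the low-dimensional coordinates whose reconstruction best matches the edited pose.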
The perception of objects, depth, and distance has been repeatedly shown to be divergent between virtual and physical environments. We hypothesize that many of these discrepancies stem from incorrect geometric viewing parameters, specifically that physical measurements of eye position are insufficiently precise to provide proper viewing parameters. In this paper, we introduce a perceptual calibration procedure derived from geometric models. While most research has used geometric models to predict perceptual errors, we instead use these models inversely to determine perceptually correct viewing parameters. We study the advantages of these new psychophysically determined viewing parameters compared to the commonly used measured viewing parameters in an experiment with 20 subjects. The perceptually calibrated viewing parameters for the subjects generally produced new virtual eye positions that were wider and deeper than standard practices would estimate. Our study shows that perceptually calibrated viewing parameters can significantly improve depth acuity, distance estimation, and the perception of shape.
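As a hedged illustration of using a geometric model "inversely", the sketch below recovers an effective interocular distance (IPD) from simulated depth-matching judgments, rather than predicting errors from the measured IPD. The screen geometry, the disparity model, and all numbers are illustrative assumptions, not the paper's procedure.

```python
import numpy as np

# Screen distance and measured IPD; all numbers are illustrative.
D = 0.65                 # screen distance from the eyes (m)
ipd_measured = 0.063     # physically measured IPD (m)

def disparity(ipd, Z):
    """On-screen disparity of a midline point at viewing distance Z."""
    return ipd * (1.0 - D / Z)

def perceived_depth(ipd_eff, s):
    """Depth at which a rendered disparity s is perceived by a viewer
    whose effective eye separation is ipd_eff."""
    return D * ipd_eff / (ipd_eff - s)

# Simulated experiment: stimuli are rendered with the measured IPD,
# but the subject's effective eye separation is wider, so judged
# depths deviate from the targets.
ipd_effective = 0.068
Z_target = np.array([0.8, 1.0, 1.5, 2.0])        # intended depths (m)
s = disparity(ipd_measured, Z_target)            # rendered disparities
Z_judged = perceived_depth(ipd_effective, s)     # subject's judgments

# Inverse step: search for the IPD that best explains the judgments,
# then render with it so perceived depth matches intended depth.
candidates = np.linspace(0.050, 0.080, 3001)
errors = [np.sum((perceived_depth(c, s) - Z_judged) ** 2)
          for c in candidates]
ipd_calibrated = candidates[np.argmin(errors)]
print(f"calibrated IPD: {ipd_calibrated:.4f} m")  # recovers ~0.068
```

The same inversion could be run over other parameters (e.g., eye depth behind the display), which is consistent with the finding that calibrated eye positions were both wider and deeper than measured ones.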
We propose a new image-space technique for summarizing a 3D animation sequence in a single image using the depth information of the animation. The proposed method extracts important frames from the sequence, where the important frames are representative of the sequence while keeping the complexity of the composed image as low as possible. Assuming that the input sequence consists of images with depth information, we construct a composite depth image and its gradient image. We evaluate the importance of each frame by how much it contributes to the gradient of the composite depth image. Frames of higher importance tend to be those at which a moving object reaches its extreme positions, its fastest speed, or its slowest speed in image space. From the most important frame to the least, we recursively compose the important frames into a single image while bounding its complexity by evaluating the amount of self-overlap. The threshold on the amount of overlap allows a user to interactively control the visual complexity of the composed image.
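A minimal sketch of the importance measure as described: composite the depth frames by taking the nearest surface per pixel, then credit each frame with the composite's gradient magnitude over the pixels it owns. The function name, the +inf background convention, and the toy moving square are assumptions.

```python
import numpy as np

# Input: one depth image per frame, with uncovered pixels set to +inf
# (a convention assumed here, not taken from the paper).

def frame_importance(depth_frames):
    """Score each frame by its contribution to the gradient of the
    composite (nearest-surface-per-pixel) depth image."""
    stack = np.stack(depth_frames)        # (T, H, W)
    composite = stack.min(axis=0)         # nearest surface per pixel
    owner = stack.argmin(axis=0)          # which frame provides it
    covered = np.isfinite(composite)

    gy, gx = np.gradient(np.where(covered, composite, 0.0))
    grad_mag = np.hypot(gx, gy)

    # Sum the composite's gradient magnitude over each frame's pixels.
    return np.array([grad_mag[(owner == t) & covered].sum()
                     for t in range(stack.shape[0])])

# Toy input: a small square sweeping across the image at depth 1.
frames = []
for t in range(5):
    d = np.full((32, 32), np.inf)
    d[10:16, 4 + 5 * t: 10 + 5 * t] = 1.0
    frames.append(d)

scores = frame_importance(frames)
order = np.argsort(scores)[::-1]   # compose from most to least important
```

The recursive composition step would then walk `order`, adding each frame only while its self-overlap with the image composed so far stays under the user's threshold.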
In this paper, we introduce a novel framework that allows users to synthesize the expressions of a 3D character through an intuitive set of parametric controls. Assuming that human facial movements are formed from a set of basis actuations, we analyze a set of real expressions to extract these actuations together with the skin deformation they induce. To do this, we first decompose the movement of each marker into a set of distinctive movements. Independent component analysis (ICA) is then adopted to find an independent set of actuations. A simple and efficient skin deformation model is learned to reproduce the realistic movements of facial parts due to the actuations. In this framework, users can animate characters' faces by controlling the amount of actuation or by directly manipulating the face geometry. In addition, the proposed method can be applied to expression transfer, which reproduces one character's expression on another character's face. Experimental results demonstrate that our method produces realistic expressions efficiently.
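A minimal sketch of the analysis and synthesis steps, assuming the captured data are per-frame displacements of face markers from a neutral pose. scikit-learn's FastICA stands in for the paper's ICA step, and the data shapes, random placeholder data, and control values are illustrative.

```python
import numpy as np
from sklearn.decomposition import FastICA

# Input: F frames of displacements of M face markers from the neutral
# pose, flattened to (F, 3*M). Random data stands in for real captures
# (real, non-Gaussian data is needed for ICA to be meaningful).
rng = np.random.default_rng(1)
F, M = 500, 40
displacements = rng.standard_normal((F, 3 * M))

# Recover a set of statistically independent "actuations" whose
# mixture reproduces the observed marker movements.
n_actuations = 8
ica = FastICA(n_components=n_actuations, whiten="unit-variance",
              random_state=0)
activations = ica.fit_transform(displacements)   # (F, n_actuations)
basis = ica.mixing_                              # (3*M, n_actuations)

# Synthesis: the user sets actuation amounts, and the model maps them
# back to marker displacements; a learned skin deformation model would
# then drive the full face mesh from these markers.
user_controls = np.zeros(n_actuations)
user_controls[2] = 1.5                           # drive one actuation
new_marker_pose = ica.mean_ + basis @ user_controls
```

Expression transfer would reuse the per-frame `activations` from one character to drive the `basis` and deformation model learned for another.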