For believable character animation, skin deformation should convey the effects of underlying muscle movement. Anatomical models that capture these effects are typically constructed from the inside out: internal tissue is modeled by hand, and a surface skin is attached to, or generated from, the internal structure. This paper presents an outside-in approach to anatomical modeling, in which musculature is generated from a predefined structure and conformed to an artist-sculpted skin surface. Motivated by interactive applications, we attach the musculature to an existing control skeleton and apply a novel geometric deformation model to deform the skin surface to capture important muscle motion effects. The musculoskeletal structure can be stored as a template and applied to new character models. We illustrate the methodology, as integrated into a commercial character animation system, with examples driven by both keyframe animation and recorded motion data.
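The outside-in pipeline above binds muscle motion to an existing skin. A minimal sketch of one plausible binding step, assuming a nearest-point attachment of skin vertices to sampled muscle-surface points (an illustrative stand-in, not the paper's actual deformation model):

```python
import math

# Hypothetical nearest-point binding: each skin vertex is attached to
# the closest sample on the fitted muscle surface, then displaced by
# that sample's motion. All names and data here are illustrative.

def closest_point(vertex, muscle_points):
    # Index of the muscle sample nearest to this skin vertex.
    return min(range(len(muscle_points)),
               key=lambda i: math.dist(vertex, muscle_points[i]))

def deform_skin(skin, muscle_rest, muscle_posed, bindings):
    # Move each skin vertex by the displacement of its bound muscle point.
    out = []
    for v, b in zip(skin, bindings):
        dx = [p - r for p, r in zip(muscle_posed[b], muscle_rest[b])]
        out.append(tuple(c + d for c, d in zip(v, dx)))
    return out

muscle_rest  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
muscle_posed = [(0.0, 0.2, 0.0), (1.0, 0.0, 0.0)]  # first sample bulges up
skin = [(0.1, 0.5, 0.0), (0.9, 0.5, 0.0)]
bindings = [closest_point(v, muscle_rest) for v in skin]
posed_skin = deform_skin(skin, muscle_rest, muscle_posed, bindings)
```

Because the binding is computed once against the rest pose, the same template structure can be re-fitted and re-bound to a new artist-sculpted skin.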
This paper proposes a layered strategy for controlling character motion in a dynamically varying environment. We illustrate this approach in the context of a physically simulated human swimmer. The swimmer attempts to follow a dynamic target by augmenting cyclic stroke control with a set of pre-specified variations, based on the current state of the character and its environment. Control of a given swim stroke is decomposed into three layers: a basic stroke sequence, a set of per-stroke control variations, and a set of continuously applied control variations. Interactive control of the swimmer is possible as a result of an efficient physical simulation using a simplified fluid model. Our results show layered dynamic control to be an effective adaptive control technique in well-conditioned physical simulations such as swimming, where simulation states resulting from control errors are recoverable.
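The three-layer decomposition can be sketched as a sum of terms with different update rates. A minimal sketch, assuming a scalar control signal and toy state fields (`target_dx`, `depth_error`) that stand in for the swimmer's actual state; the gains and forms are illustrative, not the paper's controller:

```python
import math

def base_stroke(phase):
    # Layer 1: the basic cyclic stroke, a target signal as a function
    # of stroke phase in [0, 1).
    return math.sin(2 * math.pi * phase)

def per_stroke_variation(state):
    # Layer 2: chosen once at the start of each stroke cycle,
    # e.g. a steering bias toward the dynamic target.
    return 0.3 if state["target_dx"] > 0 else -0.3

def continuous_variation(state):
    # Layer 3: applied every simulation step, e.g. a depth correction.
    return -0.1 * state["depth_error"]

def control(phase, state, stroke_offset):
    # Layered composition: base stroke + per-stroke + continuous terms.
    return base_stroke(phase) + stroke_offset + continuous_variation(state)

state = {"target_dx": 1.0, "depth_error": 0.5}
offset = per_stroke_variation(state)   # fixed for this stroke cycle
u = control(0.25, state, offset)
```

Separating the per-stroke and continuous layers lets discrete decisions (which variation to apply this stroke) coexist with fast feedback corrections.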
Interactive control of a physically simulated character is a challenging problem, due both to the complexity of controlling multiple degrees of freedom with lower-dimensional input and to the fact that many interesting motions lie on the fringes of character stability. This paper addresses these problems using a novel technique called predictive feedback, in which a glimpse into the near future for a few sample inputs is continuously presented to the animator. We discuss issues related to the spatio-temporal distribution of predictions so that they provide meaningful and timely feedback to an animator interactively controlling a physics-based character with simple input devices, such as a mouse or keyboard. We propose a visual presentation of this predictive feedback in which control input samples are chosen in the proximity of the user's current input and the predicted results are co-located with the position of the input necessary to achieve them. We further show how the predictive samples may be automatically interpolated to control aspects of the character's motion, such as balance, thereby freeing the animator to focus on other details. The paper thus contributes a technique for physically simulated characters that simplifies interactive character control and increases the range of motion that can be performed by both novices and experts. Many of the presented concepts extend beyond our specific input device and dynamic character control setting to more general input tasks.
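The two mechanisms described above, sampling inputs near the current input and interpolating the predictions to satisfy a secondary objective such as balance, can be sketched as follows. The simulator and the balance condition here are toy one-dimensional stand-ins, not the paper's character model:

```python
# Hypothetical sketch of predictive feedback: simulate a short horizon
# for a few control samples near the user's current input, then
# interpolate between the samples that bracket the balanced state.

def simulate(x0, u, horizon=10, dt=0.1):
    # Toy 1-D dynamics: a damped state driven by the control input.
    x = x0
    for _ in range(horizon):
        x += dt * (u - 0.5 * x)
    return x

def predictive_samples(x0, u_current, spread=0.5, n=5):
    # Sample inputs in the proximity of the current input and
    # predict where each one leads after the horizon.
    us = [u_current + spread * (i / (n - 1) - 0.5) * 2 for i in range(n)]
    return [(u, simulate(x0, u)) for u in us]

def interpolate_for_balance(samples):
    # Find the two predictions bracketing the balanced state (x = 0)
    # and linearly interpolate the input between them.
    samples = sorted(samples, key=lambda s: s[1])
    for (u0, x0_), (u1, x1_) in zip(samples, samples[1:]):
        if x0_ <= 0.0 <= x1_:
            t = -x0_ / (x1_ - x0_)
            return u0 + t * (u1 - u0)
    return samples[0][0]  # no bracket: fall back to the closest sample

samples = predictive_samples(x0=1.0, u_current=-0.5)
u_balanced = interpolate_for_balance(samples)
```

In the interface described above, each sample's predicted outcome would be drawn co-located with the input position that produces it, while the interpolated input handles balance automatically.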