Data-driven models of human pose and soft-tissue deformation can produce very realistic results, but they model only the visible surface of the human body and cannot create skin deformations caused by interactions with the environment. Physical simulations generalize to external forces, but their parameters are difficult to control. In this paper, we present a layered volumetric human body model learned from data. Our model is composed of a data-driven inner layer and a physics-based outer layer. The inner layer is driven with a volumetric statistical body model (VSMPL). The soft-tissue layer consists of a tetrahedral mesh that is simulated using the finite element method (FEM). The model parameters, namely the segmentation of the body into layers and the soft-tissue elasticity, are learned directly from 4D registrations of humans exhibiting soft-tissue deformations. The learned two-layer model is a realistic full-body avatar that generalizes to novel motions and external forces. Experiments show that the resulting avatars produce realistic results on held-out sequences and react plausibly to external forces. Moreover, the model supports the retargeting of physical properties from one avatar to another, provided they share the same topology.
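To make the layered architecture concrete, here is a minimal, hypothetical sketch of the two-layer idea: an inner kinematic layer is posed first, and an outer soft-tissue layer is then relaxed toward it while reacting to external forces. A per-vertex damped spring with an explicit-Euler step stands in for the paper's FEM solve, and the function names, the rigid pose stand-in for the statistical body model, and all parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pose_inner_layer(rest_verts, rotation, translation):
    """Rigidly pose the inner layer (a stand-in for a statistical body model)."""
    return rest_verts @ rotation.T + translation

def step_soft_tissue(x, v, target, stiffness, damping, dt, external_force=0.0):
    """One explicit-Euler step pulling soft-tissue vertices toward the inner
    layer (per-vertex spring, unit mass) while reacting to external forces."""
    force = stiffness[:, None] * (target - x) - damping * v + external_force
    v = v + dt * force
    x = x + dt * v
    return x, v

rest = np.random.rand(100, 3)                  # rest-pose soft-tissue vertices
stiffness = np.full(100, 50.0)                 # learned per-vertex elasticity (here: constant)
x, v = rest.copy(), np.zeros_like(rest)
R, t = np.eye(3), np.array([0.0, 0.0, 0.1])    # a small rigid motion of the inner layer
target = pose_inner_layer(rest, R, t)
for _ in range(100):                           # outer layer lags, then settles on the target
    x, v = step_soft_tissue(x, v, target, stiffness, damping=5.0, dt=0.01)
```

The point of the split is visible even in this toy version: the inner layer is purely kinematic, so only the outer update needs to change to accommodate external forces.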
Highly detailed static 3D scans of clothed people are becoming widely available. Since a 3D scan is given as a single mesh without semantic separation, animating it requires modeling the shape and deformation behavior of the individual body and garment parts. This paper presents a new method for generating simulation-ready garment models from static 3D scans of clothed humans. A key contribution of our method is a novel approach to segmenting garments by finding optimal boundaries between skin and garment. Our boundary-based garment segmentation method achieves stable and smooth separation of garments by using an implicit representation of the boundary together with an optimization strategy tailored to it. In addition, we present a novel framework that constructs a 2D pattern from the segmented garment and places it around the body for a draping simulation. The effectiveness of our method is validated by generating garment patterns for a variety of scans.
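The implicit-boundary idea can be sketched as follows: represent the skin/garment boundary as the zero level set of a per-vertex scalar field and optimize that field for smoothness instead of tracing the boundary explicitly. In this hypothetical sketch, simple Laplacian relaxation of noisy labels stands in for the paper's optimization, and the toy mesh, probabilities, and weights are all assumptions for illustration.

```python
import numpy as np

def smooth_level_set(phi, neighbors, data_weight=0.5, iters=200):
    """Relax phi toward the average of its neighbors while staying close to
    the initial (noisy) labels, yielding a smooth implicit boundary field."""
    phi0 = phi.copy()
    for _ in range(iters):
        avg = np.array([phi[n].mean() for n in neighbors])
        phi = data_weight * phi0 + (1.0 - data_weight) * avg
    return phi

# Toy 1-ring neighborhoods on a strip of 6 vertices.
neighbors = [[1], [0, 2], [1, 3], [2, 4], [3, 5], [4]]
noisy = np.array([-1.0, -0.8, 0.3, -0.2, 0.9, 1.0])   # <0: skin, >0: garment
phi = smooth_level_set(noisy, neighbors)
boundary_edges = [(i, j) for i in range(6) for j in neighbors[i]
                  if i < j and phi[i] * phi[j] < 0]    # zero crossings = boundary
```

After relaxation, the isolated sign flip at vertex 3 disappears, so the extracted boundary is a single clean zero crossing rather than a jagged one, which is the stability property the implicit representation is after.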
In augmented reality (AR) applications, a virtual avatar serves as a useful medium to represent a human in a different place. This paper addresses the problem of retargeting a human motion to an avatar. In particular, we present a novel method that retargets a human's motion with respect to an object to an avatar's motion with respect to a different object of similar shape. To achieve this, we develop a spatial map that defines correspondences between any points in the 3D spaces around the respective objects. The key advantage of the spatial map is that it identifies the desired locations of the avatar's body parts for any input human motion. Once the spatial map has been created offline, the motion retargeting can be performed in real time. The retargeted motion preserves important features of the original, such as the human pose and the spatial relation to the object. We report the results of a number of experiments that demonstrate the effectiveness of the proposed method.
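The interface of such a spatial map is simple even though its construction is not: a function that takes a point near the source object and returns the corresponding point near the target object. The sketch below uses normalized bounding-box coordinates as a crude, assumed stand-in for the paper's map; the objects, boxes, and point values are invented for illustration.

```python
import numpy as np

def make_box_map(src_min, src_max, dst_min, dst_max):
    """Build a point-to-point map between the spaces around two objects,
    here via normalized axis-aligned bounding-box coordinates."""
    def spatial_map(p):
        u = (p - src_min) / (src_max - src_min)    # normalized coords near source object
        return dst_min + u * (dst_max - dst_min)   # corresponding point near target object
    return spatial_map

# Object A in a unit box; object B of similar shape but different size/placement.
f = make_box_map(np.zeros(3), np.ones(3),
                 np.array([2.0, 0.0, 0.0]), np.array([3.2, 1.1, 1.0]))
hand_near_a = np.array([0.5, 0.9, 0.5])            # human wrist position near object A
print(f(hand_near_a))                              # desired avatar wrist near object B
```

Because the map is a fixed function once built, evaluating it per body part per frame is cheap, which is what makes the offline-construction / real-time-retargeting split plausible.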
Despite recent advances in automatic methods for computing skinning weights, manual intervention is still indispensable for producing high-quality character deformation. However, current modeling software does not provide efficient tools for the manual definition of skinning weights. Widely used paint-based interfaces give users high degrees of freedom, but at the expense of significant effort and time. This article presents a novel interface for editing skinning weights based on splines that represent the isolines of the skinning weights on a mesh. When a user drags a small number of spline anchor points, our method updates the shape of the isolines and smoothly interpolates or propagates the weights while respecting the given iso-value on each spline. We introduce several techniques that enable the interface to run in real time and propose a particular combination of functions that generates appropriate skinning weights over the surface. Users can create skinning weights from scratch with our method. In addition, we present spline- and gradient-fitting methods that closely approximate given initial weights, so that a user can modify existing weights with our spline interface. We demonstrate the effectiveness of our spline-based interface through a number of test cases.
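A simple way to see how weights can be propagated from isoline constraints is harmonic interpolation: vertices lying on a spline are pinned to its iso-value as Dirichlet conditions, and a Laplace solve fills in the rest. The sketch below assumes a 1-D vertex chain in place of a mesh and plain harmonic interpolation in place of the paper's particular combination of functions; it conveys the constraint structure, not the authors' solver.

```python
import numpy as np

def harmonic_weights(n, constraints):
    """Solve a Laplace system on a path graph of n vertices, with some
    vertices pinned to given iso-values (Dirichlet constraints)."""
    L = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        if i in constraints:                  # vertex pinned to a spline's iso-value
            L[i, i] = 1.0
            b[i] = constraints[i]
        else:                                 # discrete Laplacian row
            nbrs = [j for j in (i - 1, i + 1) if 0 <= j < n]
            L[i, i] = float(len(nbrs))
            for j in nbrs:
                L[i, j] = -1.0
    return np.linalg.solve(L, b)

# Two isolines crossing a chain of 9 vertices: iso-value 0.2 at vertex 1
# and 0.8 at vertex 7; interior weights interpolate smoothly between them.
w = harmonic_weights(9, {1: 0.2, 7: 0.8})
print(np.round(w, 3))
```

Dragging an anchor point then amounts to moving which vertices carry the Dirichlet constraints and re-solving, which is why keeping such a solve fast is central to making the interface interactive.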