In this paper, we introduce a two-layered approach to the problem of creating believable mesh-based skin deformation. For each frame, the skin is first deformed with a classic linear blend skinning approach, which usually leads to unsightly artefacts such as the well-known candy-wrapper effect and volume loss. We then enforce geometric constraints that displace the vertex positions to mimic the behavior of skin and achieve effects such as volume preservation and jiggling. We allow the artist to control the amount of jiggling and the area of the skin affected by it. The geometric constraints are solved using a Position-Based Dynamics scheme, and we employ a graph coloring algorithm to parallelize the computation of the constraints. Building on Position-Based Dynamics guarantees efficiency and real-time performance while ensuring robustness and unconditional stability. We demonstrate the visual quality and the performance of our approach on a variety of skeleton-driven soft-body characters.
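The parallelization idea mentioned above can be illustrated with a greedy coloring of the constraint graph: two constraints that share a vertex conflict and receive different colors, so all constraints of one color touch disjoint vertices and can be projected concurrently. The sketch below is a minimal hypothetical version, not the authors' implementation; constraints are given as tuples of vertex indices.

```python
def color_constraints(constraints):
    """Greedily color constraints so that no two constraints sharing a
    vertex get the same color; each color forms a parallel batch."""
    vertex_seen = {}                      # vertex -> constraints already using it
    adj = [set() for _ in constraints]    # conflict graph over constraints
    for cid, verts in enumerate(constraints):
        for v in verts:
            for other in vertex_seen.get(v, []):
                adj[cid].add(other)
                adj[other].add(cid)
            vertex_seen.setdefault(v, []).append(cid)
    colors = []
    for cid in range(len(constraints)):
        used = {colors[n] for n in adj[cid] if n < len(colors)}
        c = 0
        while c in used:                  # smallest color unused by neighbors
            c += 1
        colors.append(c)
    return colors

# Example: four distance constraints forming a cycle need only two batches.
print(color_constraints([(0, 1), (1, 2), (2, 3), (0, 3)]))  # -> [0, 1, 0, 1]
```

Within each batch, constraint projections write to disjoint vertices, so they can safely run on separate threads or GPU lanes.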
In this paper, we present a physically based model for real-time simulation of thread dynamics. Our model captures all the relevant aspects of the physics of the thread, including quasi-zero elasticity, bending, torsion, and self-collision, and it provides output forces for haptic feedback. The physical properties are modeled as constraints that are iteratively satisfied, while the numerical integration is carried out with a Verlet scheme. This approach leads to an unconditionally stable, controllable, and computationally light simulation [Müller et al. 2007]. Our results demonstrate the effectiveness of the model, showing real-time interaction of the thread with other objects and the creation of complex knots.
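The combination of Verlet integration with iteratively satisfied constraints can be sketched as follows for a quasi-inextensible thread, modeled as a chain of particles pinned at one end. All names and parameters here are illustrative assumptions, not the paper's code; bending, torsion, and collision constraints are omitted for brevity.

```python
import numpy as np

def verlet_thread_step(x, x_prev, dt=1.0 / 60.0, seg_len=0.1, iters=20,
                       g=np.array([0.0, -9.81, 0.0])):
    """One simulation step: Verlet integration, then iterative projection
    of fixed-length segment constraints (quasi-zero elasticity)."""
    x_new = 2.0 * x - x_prev + g * dt * dt   # position Verlet update
    x_new[0] = x[0]                          # pin the first particle
    for _ in range(iters):                   # relax segment lengths
        for i in range(len(x_new) - 1):
            d = x_new[i + 1] - x_new[i]
            L = np.linalg.norm(d)
            corr = (L - seg_len) / (2.0 * L) * d
            if i == 0:                       # pinned end: move only the free one
                x_new[i + 1] -= 2.0 * corr
            else:                            # split correction between the pair
                x_new[i] += corr
                x_new[i + 1] -= corr
    return x_new, x                          # new state and new "previous" state
```

Because velocities are implicit in the pair of position arrays, the constraint projection never injects energy, which is the source of the scheme's unconditional stability.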
Accurate high-resolution simulation of cloth is a highly desired computational tool in graphics applications. As single-resolution simulation approaches the limits of available computational power, we believe the future of cloth simulation lies in multi-resolution methods. In this paper, we explore nonlinearity, adaptive smoothing, and parallelization under a full multigrid (FMG) framework. The foundation of this research is a novel nonlinear FMG method for unstructured meshes. To introduce nonlinearity into FMG, we propose formulating the smoothing process at each resolution level as the computation of a search direction for the original high-resolution nonlinear optimization problem. We prove that our nonlinear FMG is guaranteed to converge under various conditions, and we investigate improvements to its performance. We present an adaptive smoother that reduces the computational cost in regions that already have low residuals. Compared with standard iterative solvers, our nonlinear FMG method provides faster convergence and better performance for both Newton's method and Projective Dynamics. Our experiments show that the method is efficient, accurate, stable under large time steps, and amenable to GPU parallelization. Its performance scales well with mesh resolution, and it has good potential to be combined with multi-resolution collision handling for real-time simulation in the future.
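The core multigrid mechanism underlying such an FMG solver, smoothing on the fine grid, correcting smooth error on a coarse grid, can be illustrated with a linear two-grid cycle for the 1D Poisson problem. This is a deliberately simplified sketch of my own (the paper's method is nonlinear and operates on unstructured cloth meshes); all function names are assumptions.

```python
import numpy as np

def smooth(u, f, h, sweeps=3, w=2.0 / 3.0):
    """Weighted-Jacobi smoother for (2u_i - u_{i-1} - u_{i+1})/h^2 = f_i."""
    for _ in range(sweeps):
        jac = 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
        u = u.copy()
        u[1:-1] = (1.0 - w) * u[1:-1] + w * jac
    return u

def residual(u, f, h):
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2.0 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
    return r

def two_grid(u, f, h):
    u = smooth(u, f, h)                       # pre-smooth: damp high frequencies
    r = residual(u, f, h)
    rc = r[::2].copy()                        # restrict residual to coarse grid
    nc, hc = len(rc), 2.0 * h
    # Solve the coarse error equation A_c e_c = r_c exactly (small system).
    Ac = (2.0 * np.eye(nc - 2) - np.eye(nc - 2, k=1)
          - np.eye(nc - 2, k=-1)) / (hc * hc)
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(Ac, rc[1:-1])
    e = np.zeros_like(u)                      # prolong by linear interpolation
    e[::2] = ec
    e[1::2] = 0.5 * (ec[:-1] + ec[1:])
    return smooth(u + e, f, h)                # post-smooth
```

The paper's contribution replaces the linear smoother with a search-direction computation for the high-resolution nonlinear problem, but the smooth/restrict/correct/prolong structure is the same.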
We propose an integrated facial dynamics model for animating 3D humanoid faces in real time. The computational model mimics facial motion by reproducing the layered anatomical structure of a human head, including the bony structure, overlapping facial muscles, and the skin. The model is flexible enough to animate face meshes of varying shape, connectivity, and scale. Unlike previously proposed approaches based on mass-spring networks, overshooting problems are avoided by simulating the dynamics through a position-based scheme, which allows for real-time performance, control, and robustness. Experiments demonstrate that convincing expressive facial animation can be interactively prototyped on consumer-class platforms. Copyright © 2012 John Wiley & Sons, Ltd.
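A position-based scheme of the kind referenced above can be sketched as follows: positions are predicted explicitly, then distance constraints are projected Gauss-Seidel style, and velocities are recovered from the position change. This is a minimal illustrative sketch, not the paper's model, which layers anatomical muscle and skin constraints on top of such a scheme; all names are assumptions.

```python
import numpy as np

def pbd_step(x, v, constraints, rest, inv_mass, dt=1.0 / 60.0, iters=10,
             g=np.array([0.0, -9.81, 0.0])):
    """One Position-Based Dynamics step over distance constraints."""
    v = v.copy()
    v[inv_mass > 0] += dt * g                # external forces on free particles
    p = x + dt * v                           # explicit position prediction
    for _ in range(iters):                   # Gauss-Seidel constraint projection
        for k, (i, j) in enumerate(constraints):
            d = p[i] - p[j]
            L = np.linalg.norm(d)
            w = inv_mass[i] + inv_mass[j]
            if L < 1e-9 or w == 0.0:
                continue
            corr = (L - rest[k]) / (w * L) * d   # PBD distance correction
            p[i] -= inv_mass[i] * corr
            p[j] += inv_mass[j] * corr
    return p, (p - x) / dt                   # new positions and velocities
```

Because positions are manipulated directly and velocities are derived afterward, the scheme cannot overshoot the way stiff mass-spring force integration can, which is the robustness property the abstract refers to.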