We present a modular differentiable renderer design that yields performance superior to previous methods by leveraging existing, highly optimized hardware graphics pipelines. Our design supports all crucial operations in a modern graphics pipeline: rasterizing large numbers of triangles, attribute interpolation, filtered texture lookups, as well as user-programmable shading and geometry processing, all at high resolution. Our modular primitives allow custom, high-performance graphics pipelines to be built directly within automatic differentiation frameworks such as PyTorch or TensorFlow. As a motivating application, we formulate facial performance capture as an inverse rendering problem and show that it can be solved efficiently using our tools. Our results indicate that this simple and straightforward approach achieves excellent geometric correspondence between rendered results and reference imagery.
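The inverse-rendering formulation above can be illustrated with a deliberately tiny sketch: render an image from parameters, compare against a reference, and descend the gradient of an image-space loss. The names `render` and `loss_and_grad` and the Lambertian single-light model are illustrative assumptions, not the paper's pipeline or API.

```python
import numpy as np

# Toy differentiable "renderer": per-pixel Lambertian shading with a
# fixed light intensity. The only unknown is the albedo image.
def render(albedo, light=1.5):
    return albedo * light

def loss_and_grad(albedo, reference, light=1.5):
    # L2 image loss and its analytic gradient w.r.t. the albedo.
    residual = render(albedo, light) - reference
    loss = 0.5 * np.sum(residual ** 2)
    grad = residual * light  # d(loss)/d(albedo)
    return loss, grad

reference = render(np.array([0.2, 0.7, 0.4]))  # "captured" image
albedo = np.full(3, 0.5)                       # initial guess
for _ in range(200):
    _, grad = loss_and_grad(albedo, reference)
    albedo -= 0.1 * grad                       # gradient descent
```

In a real system the autodiff framework supplies the gradient instead of the hand-derived one here, and the renderer involves rasterization, interpolation, and texturing rather than a single multiply.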
The goal of a practical facial animation retargeting system is to reproduce the character of a source animation on a target face while providing room for additional creative control by the animator. This article presents a novel spacetime facial animation retargeting method for blendshape face models. Our approach starts from the basic principle that the source and target movements should be similar. By interpreting movement as the derivative of position with time, and adding suitable boundary conditions, we formulate the retargeting problem as a Poisson equation. Specified (e.g., neutral) expressions at the beginning and end of the animation as well as any user-specified constraints in the middle of the animation serve as boundary conditions. In addition, a model-specific prior is constructed to represent the plausible expression space of the target face during retargeting. A Bayesian formulation is then employed to produce target animation that is consistent with the source movements while satisfying the prior constraints. Since the preservation of temporal derivatives is the primary goal of the optimization, the retargeted motion preserves the rhythm and character of the source movement and is free of temporal jitter. More importantly, our approach provides spacetime editing for the popular blendshape representation of facial models, exhibiting smooth and controlled propagation of user edits across surrounding frames.
Figure 1: Many pedestrians walk straight in the crowd animation (left). We interactively manipulate the crowd animation to follow an s-curve (right). Abstract: Editing large-scale crowd animation is a daunting task due to the lack of an efficient manipulation method. This paper presents a novel cage-based editing method for large-scale crowd animation. The cage encloses animated characters and supports convenient space/time manipulation methods that were unachievable with previous approaches. The proposed method is based on a combination of cage-based deformation and as-rigid-as-possible deformation, with a set of constraints integrated into the system to produce the desired results. Our system allows animators to edit existing crowd animations intuitively with real-time performance while maintaining complex interactions between individual characters. Our examples demonstrate how our cage-based user interfaces reduce the time and effort required to manipulate large crowd animations.
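The cage-based deformation ingredient can be sketched in its simplest form: express each point in generalized coordinates of a rest cage, then re-evaluate those coordinates against the deformed cage. The single-quad bilinear version below is a toy assumption; production systems use richer coordinates (e.g. mean-value or harmonic) and combine them with as-rigid-as-possible terms, which are omitted here.

```python
import numpy as np

# Bilinear cage coordinates on one axis-aligned rest quad, with
# corners listed counter-clockwise from the bottom-left:
# (x0,y0), (x1,y0), (x1,y1), (x0,y1).
def bilinear_coords(p, cage):
    (x0, y0), (x1, _), (_, y1) = cage[0], cage[1], cage[2]
    u = (p[0] - x0) / (x1 - x0)
    v = (p[1] - y0) / (y1 - y0)
    return np.array([(1 - u) * (1 - v), u * (1 - v), u * v, (1 - u) * v])

def deform(p, rest_cage, deformed_cage):
    # Weights are computed once against the rest cage and reused for
    # any deformed cage pose, which is what makes editing interactive.
    w = bilinear_coords(p, rest_cage)
    return w @ np.asarray(deformed_cage, dtype=float)
```

Each character position is bound to the cage once; dragging cage vertices then moves every enclosed character in real time without per-character computation.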
Facial motion retargeting has been developed mainly in the direction of achieving high fidelity between a source and a target model. We present a novel facial motion retargeting method that properly accounts for the significant characteristics of the target face model. We focus on stylistic facial shapes and timings that clearly reveal the individuality of the target model after the retargeting process is finished. The method works with a range of expression pairs between the source and the target facial expressions, and emotional sequence pairs of the source and the target facial motions. We first construct a prediction model to place semantically corresponding facial shapes. Our hybrid retargeting model, which combines radial basis function (RBF) and kernel canonical correlation analysis (kCCA)-based regression methods, copes well with new input source motions without visual artifacts. 1D Laplacian motion warping follows the shape retargeting process, replacing stylistically important emotional sequences and thus representing the characteristics of the target face.
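The RBF half of the hybrid regression can be sketched as scattered-data interpolation over paired example expressions: fit weights so that each source example maps exactly to its target counterpart, then evaluate on new source shapes. The function names and the Gaussian kernel choice below are assumptions for illustration; the kCCA component and the 1D Laplacian motion warping are not shown.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # Pairwise Gaussian kernel between two sets of expression vectors.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fit_rbf(src_examples, tgt_examples, sigma=1.0):
    # Interpolating weights: K W = targets, so every training pair is
    # reproduced exactly (semantic correspondence at the examples).
    K = gaussian_kernel(src_examples, src_examples, sigma)
    W = np.linalg.solve(K, tgt_examples)
    return lambda x: gaussian_kernel(np.atleast_2d(x), src_examples, sigma) @ W
```

Because the fit interpolates the examples, the mapping is controllable by editing the example pairs, while the kernel smoothly generalizes to unseen source expressions between them.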