We address these problems by formulating the control of interpolations with positional constraints over time as a space-time optimization problem in the tangent space of the animation curves driving the controls. Our method has the key properties that it (1) allows the manipulation of positions and orientations over time, extending inverse kinematics, (2) does not add new keyframes that might conflict with an artist's preferred keyframe style, and (3) works in the space of artist-editable animation curves and hence integrates seamlessly with current pipelines. We demonstrate the utility of the technique in practice via various examples and use cases.

CCS Concepts: • Computing methodologies → Animation; Graphics systems and interfaces.
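To make the idea of optimizing in the tangent space of an animation curve more concrete, the sketch below adjusts only the tangents of a fixed set of keyframes so that a cubic Hermite curve passes through a positional constraint at an arbitrary time, without inserting new keyframes. This is a minimal toy example under assumptions, not the paper's formulation: the curve model, the soft-constraint weight, and the names hermite_eval and objective are all hypothetical.

```python
# Toy sketch: satisfy a positional constraint by optimizing keyframe tangents
# only, leaving keyframe times/values (and the keyframe count) untouched.
import numpy as np
from scipy.optimize import minimize

keys_t = np.array([0.0, 1.0, 2.0])      # keyframe times (fixed)
keys_v = np.array([0.0, 1.0, 0.0])      # keyframe values (fixed)
tangents0 = np.array([0.0, 0.0, 0.0])   # artist-authored tangents

def hermite_eval(t, tangents):
    """Evaluate a cubic Hermite curve defined by keys_t/keys_v and per-key tangents."""
    i = int(np.clip(np.searchsorted(keys_t, t) - 1, 0, len(keys_t) - 2))
    h = keys_t[i + 1] - keys_t[i]
    s = (t - keys_t[i]) / h
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return (h00 * keys_v[i] + h10 * h * tangents[i]
            + h01 * keys_v[i + 1] + h11 * h * tangents[i + 1])

# Positional constraint: the curve should pass through value 0.8 at t = 1.5.
t_c, v_c = 1.5, 0.8

def objective(tangents):
    # Stay close to the original tangents (preserve the artist's curve shape)
    # while softly enforcing the positional constraint.
    preserve = np.sum((tangents - tangents0) ** 2)
    constraint = (hermite_eval(t_c, tangents) - v_c) ** 2
    return preserve + 1e3 * constraint

res = minimize(objective, tangents0)
print("optimized tangents:", res.x)
print("curve value at constraint time:", hermite_eval(t_c, res.x))
```

In this toy setup the constraint is met purely by reshaping the interpolation between existing keyframes, which mirrors property (2) above: no new keys are introduced into the artist's curve.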
Figure 1: Left: To specify a motion cycle, the user acts out several loops of the motion using a variety of capture devices. Middle: A looping motion cycle is automatically extracted from the noisy performance. Right: A custom motion representation tool, called MoCurves, allows controlling and coordinating spatial and temporal transformations from a single viewport.
Figure 1: (i.) Our Multi-Reality game starts with objects and characters from the real world. (ii.) Physical assets get animated using photo-realistic AR. (iii.) Moving a step forward in the RVC, the user interacts with a scene where physical and virtual assets coexist. (iv.
Figure 1: Our new Flow Curves interface is designed to help artists take a scene with an ambiguous flow (left) and quickly turn it into a compelling scene (right) by simply sketching strokes that induce whole-scene, multi-object deformations.

Abstract: Effective composition in visual arts relies on the principle of movement, where the viewer's eye is directed along subjective curves to a center of interest. We call these curves subjective because they may span the edges and/or center-lines of multiple objects, as well as contain missing portions which are automatically filled in by our visual system. By carefully coordinating the shape of objects in a scene, skilled artists direct the viewer's attention via strong subjective curves. While traditional 2D sketching is a natural fit for this task, current 3D tools are object-centric and do not accommodate coherent deformation of multiple shapes into smooth flows. We address this shortcoming with a new sketch-based interface, called Flow Curves, which allows coordinating deformation across multiple objects. Core components of our method include an understanding of the principle of flow, algorithms to automatically identify subjective curve elements that may span multiple disconnected objects, and a deformation representation tailored to the view-dependent nature of scene movement. As demonstrated in our video, sketching flow curves requires significantly less time than using traditional 3D editing workflows.
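As a rough illustration of what identifying "subjective curve elements" could involve, the toy sketch below scores, per object, candidate screen-space curve elements against a sketched flow stroke using proximity and tangent alignment. This is not the paper's algorithm; the object data, scoring weights, and function names are invented for illustration only.

```python
# Toy sketch: pick, for each object, the curve element (edge or center-line
# sample set) that best matches a sketched 2D flow stroke.
import numpy as np

def polyline_tangents(p):
    """Unit tangents of a 2D polyline given as an (N, 2) array."""
    d = np.gradient(p, axis=0)
    return d / np.linalg.norm(d, axis=1, keepdims=True)

def match_score(element, stroke, w_dist=1.0, w_align=0.5):
    """Lower is better: mean distance to the stroke plus tangent misalignment."""
    et, st = polyline_tangents(element), polyline_tangents(stroke)
    # Nearest stroke sample for each element sample.
    dists = np.linalg.norm(element[:, None, :] - stroke[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    avg_dist = dists[np.arange(len(element)), nearest].mean()
    misalign = 1.0 - np.abs(np.sum(et * st[nearest], axis=1)).mean()
    return w_dist * avg_dist + w_align * misalign

# Hypothetical screen-space curve elements extracted from two objects.
stroke = np.stack([np.linspace(0, 10, 50), np.zeros(50)], axis=1)
candidates = {
    "obj_A_edge":   np.stack([np.linspace(0, 4, 20),  0.2 * np.ones(20)], axis=1),
    "obj_A_center": np.stack([np.linspace(0, 4, 20),  2.0 * np.ones(20)], axis=1),
    "obj_B_edge":   np.stack([np.linspace(6, 10, 20), -0.1 * np.ones(20)], axis=1),
}
best = min(candidates, key=lambda k: match_score(candidates[k], stroke))
print("best-matching element:", best)
```

Because the scoring operates in screen space, disconnected elements from different objects can be chained along a single stroke, which is the view-dependent behavior the abstract alludes to.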
Generating realistic facial animation for CG characters and digital doubles is one of the hardest tasks in animation. A typical production workflow involves capturing the performance of a real actor using mo-cap technology and transferring the captured motion to the target digital character. This process, known as retargeting, has been used for over a decade and typically relies on either large blendshape rigs that are expensive to create, or direct deformation transfer algorithms that operate on individual geometric elements and are prone to artifacts. We present a new method for high-fidelity offline facial performance retargeting that is neither expensive nor artifact-prone. Our two-step method first transfers local expression details to the target, followed by a global face surface prediction that uses anatomical constraints in order to stay in the feasible shape space of the target character. Our method also offers artists familiar blendshape controls for fine adjustments to the retargeted animation. As such, it is ideally suited for the complex task of human-to-human 3D facial performance retargeting, where the quality bar is extremely high in order to avoid the uncanny valley, while also being applicable to more common human-to-creature settings. We demonstrate the superior performance of our method over traditional deformation transfer algorithms, achieving quality comparable to current blendshape-based techniques used in production while requiring significantly fewer input shapes at setup time. A detailed user study corroborates that our method produces realistic and artifact-free animations in comparison to existing techniques.
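The two-step structure described above can be illustrated schematically. The sketch below is a stand-in rather than the paper's method: local detail transfer is reduced to copying per-vertex displacements, and the anatomically constrained global prediction is approximated by a least-squares projection onto the target's blendshape subspace as a feasible-shape proxy. All array names and shapes are assumptions.

```python
# Toy sketch of a two-step retargeting pass: local detail transfer, then a
# global solve constrained to the target's feasible (blendshape) shape space.
import numpy as np

def retarget_frame(src_neutral, src_frame, tgt_neutral, tgt_blendshapes):
    """
    src_neutral, src_frame, tgt_neutral: (V, 3) vertex arrays in correspondence.
    tgt_blendshapes: (K, V, 3) delta shapes of the target rig.
    Returns the retargeted (V, 3) frame and the fitted blendshape weights.
    """
    # Step 1: transfer local expression detail (here: raw displacement copy).
    local_guess = tgt_neutral + (src_frame - src_neutral)

    # Step 2: global prediction constrained to the target's feasible shape
    # space, solved as least-squares blendshape weights.
    B = tgt_blendshapes.reshape(len(tgt_blendshapes), -1).T      # (3V, K)
    d = (local_guess - tgt_neutral).reshape(-1)                  # (3V,)
    weights, *_ = np.linalg.lstsq(B, d, rcond=None)
    return tgt_neutral + (B @ weights).reshape(-1, 3), weights

# Tiny synthetic example: 4 vertices, 2 target blendshapes.
rng = np.random.default_rng(0)
src_neutral = rng.normal(size=(4, 3))
src_frame = src_neutral + 0.1 * rng.normal(size=(4, 3))
tgt_neutral = rng.normal(size=(4, 3))
tgt_blendshapes = 0.2 * rng.normal(size=(2, 4, 3))
frame, weights = retarget_frame(src_neutral, src_frame, tgt_neutral, tgt_blendshapes)
print("blendshape weights:", weights)
```

Expressing the second step in the target's own blendshape basis also explains why familiar blendshape sliders remain available to artists for fine adjustments after retargeting.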