We present a physics-based approach to synthesizing the motion of a virtual character in a dynamically varying environment. Our approach views the motion of a responsive virtual character as a sequence of solutions to a constrained optimization problem formulated at each time step. This framework allows the programmer to specify active control strategies using intuitive kinematic goals, significantly reducing the engineering effort entailed in active body control. Our optimization framework can incorporate changes in the character's surroundings through a synthetic visual sensory system and create significantly different motions in response to varying environmental stimuli. Our results show that the approach is general enough to encompass a wide variety of highly interactive motions.
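To make the per-time-step formulation concrete, here is a minimal sketch of one constrained optimization solved every frame, using a 2D point mass in place of the articulated character. All names (`step`, `MAX_ACC`, `DT`), the weights, and the simplified dynamics are illustrative assumptions, not from the paper; a real controller would optimize joint actuation subject to full-body dynamics and contact constraints.

```python
# Sketch: one constrained optimization per time step, with an intuitive
# kinematic goal as the objective. Toy point-mass dynamics stand in for
# the full articulated character.
import numpy as np
from scipy.optimize import minimize

DT = 1.0 / 60.0   # simulation time step
MAX_ACC = 50.0    # crude actuation limit (stand-in for torque limits)

def step(pos, vel, goal):
    """Solve this frame's problem: pick an acceleration that moves the
    character toward the kinematic goal within the actuation limit."""
    def objective(acc):
        next_pos = pos + vel * DT + 0.5 * acc * DT**2
        next_vel = vel + acc * DT
        # Goal tracking + mild damping and effort regularization.
        return (np.sum((next_pos - goal) ** 2)
                + 5e-3 * np.sum(next_vel ** 2)
                + 1e-6 * np.sum(acc ** 2))

    cons = [{"type": "ineq",
             "fun": lambda acc: MAX_ACC**2 - np.sum(acc ** 2)}]
    acc = minimize(objective, np.zeros_like(pos), constraints=cons).x
    return pos + vel * DT + 0.5 * acc * DT**2, vel + acc * DT

pos, vel = np.zeros(2), np.zeros(2)
goal = np.array([1.0, 0.5])     # intuitive kinematic goal
for _ in range(240):            # one optimization per time step
    pos, vel = step(pos, vel, goal)
print(np.round(pos, 3))         # approaches the goal
```

Because the problem is re-solved at every step, the goal or the environment can change between frames and the controller adapts without any replanning machinery.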
Capturing human activities that involve both gross full-body motion and detailed hand manipulation of objects is challenging for standard motion capture systems. We introduce a new method for creating natural scenes with such human activities. The input to our method consists of full-body and object motions acquired simultaneously by a standard motion capture system. Our method then automatically synthesizes detailed, physically plausible hand manipulation that integrates seamlessly with the input motions. Instead of producing a single "optimal" solution, our method presents a set of motions that exploit a wide variety of manipulation strategies. We propose a randomized sampling algorithm that searches for as many visually diverse solutions as possible within a computational time budget. Our results highlight the complex strategies human hands employ effortlessly and unconsciously, such as static, sliding, and rolling contacts, as well as finger gaits with discrete relocation of contact points.
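The following is a minimal sketch of budgeted randomized sampling for diverse solutions. The `feasible` test and the Euclidean distance metric are toy stand-ins for the paper's physics-based plausibility checks and visual-difference measure; every name and threshold here is an assumption for illustration.

```python
# Sketch: sample random candidate strategies until a wall-clock budget
# expires, keeping only those that are feasible AND sufficiently far
# from every solution already kept (so the set stays diverse).
import time
import numpy as np

rng = np.random.default_rng(0)

def feasible(x):
    # Placeholder plausibility test (e.g. contacts reachable, forces valid).
    return np.linalg.norm(x) < 1.0

def sample_diverse(budget_s=0.5, min_dist=0.3, dim=5):
    solutions = []
    deadline = time.perf_counter() + budget_s
    while time.perf_counter() < deadline:
        x = rng.uniform(-1.0, 1.0, size=dim)   # random candidate strategy
        if not feasible(x):
            continue
        # Keep only candidates visually distinct from what we already have.
        if all(np.linalg.norm(x - s) >= min_dist for s in solutions):
            solutions.append(x)
    return solutions

print(len(sample_diverse()), "diverse manipulation candidates")
```

The key design point the sketch preserves is that the algorithm never ranks candidates toward a single optimum; diversity is enforced directly through the rejection test.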
Figure 1: Our algorithm can handle complex balancing and manipulation tasks while adapting to user interactions. All our demonstrated movements emerge from simple cost functions, without animation data or offline precomputation. More examples can be found in the supplemental video and on the project homepage.

We present a novel, general-purpose Model-Predictive Control (MPC) algorithm that we call Control Particle Belief Propagation (C-PBP). C-PBP combines multimodal, gradient-free sampling with a Markov Random Field factorization to perform simultaneous path finding and smoothing in high-dimensional spaces. We demonstrate the method in online synthesis of interactive and physically valid humanoid movements, including balancing, recovery from both small and extreme disturbances, reaching, balancing on a ball, juggling a ball, and fully steerable locomotion in an environment with obstacles. Such a large repertoire of movements has not been demonstrated before at interactive frame rates, especially considering that all our movements emerge from simple cost functions. Furthermore, we abstain from any precomputation to train a control policy offline, from reference data such as motion capture clips, and from state machines that break the movements down into more manageable subtasks. Operating under these conditions enables rapid and convenient iteration when designing the cost functions.
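To illustrate the gradient-free sampling at the core of this style of MPC, here is a toy receding-horizon controller on a double-integrator point mass. This is only the random-shooting skeleton: C-PBP additionally resamples particles at each step and runs a Markov Random Field smoothing pass, both omitted here. The constants (`HORIZON`, `N_SAMPLES`, the cost weights) are invented for the example.

```python
# Sketch: gradient-free sampling MPC. Sample many control trajectories,
# forward-simulate each, and execute the first action of the cheapest one.
import numpy as np

rng = np.random.default_rng(1)
DT, HORIZON, N_SAMPLES = 0.05, 20, 256

def rollout_cost(state, controls):
    pos, vel = state
    cost = 0.0
    for u in controls:                    # forward-simulate one particle
        vel += u * DT
        pos += vel * DT
        cost += pos**2 + 0.1 * vel**2 + 0.01 * u**2   # simple cost function
    return cost

def mpc_step(state):
    samples = rng.normal(0.0, 5.0, size=(N_SAMPLES, HORIZON))
    costs = [rollout_cost(state, c) for c in samples]
    return samples[int(np.argmin(costs))][0]   # receding horizon

state = [1.0, 0.0]                        # perturbed initial state
for _ in range(100):
    u = mpc_step(tuple(state))
    state[1] += u * DT
    state[0] += state[1] * DT
print(np.round(state, 3))                 # pulled back toward the origin
```

Note that nothing here requires gradients of the dynamics or the cost, which is what lets such methods work with black-box physics simulators and freely edited cost functions.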
Real-time adaptation of a motion capture sequence to virtual environments with physical perturbations requires robust control strategies. This paper describes an optimal feedback controller for motion tracking that allows on-the-fly re-planning of long-term goals and adjustment of the final completion time. We first solve an offline optimal trajectory problem for an abstract dynamic model that captures the essential relation between contact forces and momenta. A feedback control policy is then derived and used to simulate the abstract model online. The simulation results become dynamic constraints for online reconstruction of full-body motion from a reference. We apply our controller to a wide range of motions including walking, long stepping, and a squat exercise. The results show that our controllers are robust to large perturbations and changes in the environment.
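The offline/online split can be illustrated with a standard finite-horizon discrete-time LQR: the backward Riccati recursion is the offline optimization, and the resulting time-varying gains are the feedback policy applied online. This is a generic stand-in, not the paper's controller, and the point-mass model and all constants are assumptions.

```python
# Sketch: precompute time-varying feedback gains offline, then track a
# reference online under an unexpected disturbance. x is the deviation
# of the abstract (momentum-level) model from the reference trajectory.
import numpy as np

DT, N = 0.02, 150
A = np.array([[1.0, DT], [0.0, 1.0]])    # toy point-mass momentum model
B = np.array([[0.0], [DT]])
Q, R = np.diag([10.0, 1.0]), np.array([[0.01]])

# Offline: backward Riccati recursion yields gains K[t] for each step.
P, K = Q.copy(), []
for _ in range(N):
    Kt = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
    P = Q + A.T @ P @ (A - B @ Kt)
    K.append(Kt)
K = K[::-1]                               # reorder from t=0 to t=N-1

# Online: cheap feedback, no re-optimization; inject a mid-rollout push.
x = np.array([0.5, 0.0])                  # initial deviation from reference
for t in range(N):
    u = -K[t] @ x
    x = A @ x + B @ u
    if t == 60:
        x[1] += 1.0                       # unexpected perturbation
print(np.round(x, 4))                     # deviation driven back toward zero
```

The practical payoff of this structure is that the expensive trajectory optimization happens once, while the online loop is just a matrix multiply per step, which is what makes real-time perturbation recovery feasible.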
Synthesizing the movements of a responsive virtual character in the event of unexpected perturbations remains a difficult challenge. To address this problem, we devise a fully automatic method that learns a nonlinear probabilistic model of dynamic responses from very few perturbed walking sequences. This model can synthesize responses and recovery motions under new perturbations different from those in the training examples. When a perturbation occurs, a physics-based method initiates a motion transition to the most probable response example based on the dynamic state of the character. Our algorithm can be applied to any motion sequence without preprocessing such as segmentation or alignment. The results show that three perturbed motion clips suffice to generate a variety of realistic responses, and 14 clips can create a responsive virtual character that reacts realistically to external forces of different directions applied to different body parts at different moments in time.
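As a toy illustration of selecting the most probable response example from the character's dynamic state, the sketch below scores stored example prototypes with an isotropic Gaussian likelihood and transitions to the best one. The paper learns a nonlinear probabilistic model; the prototypes, clip names, and `SIGMA` below are all invented for the example.

```python
# Sketch: pick the recovery clip whose stored dynamic-state prototype
# best explains the character's current perturbed state.
import numpy as np

# (perturbation-state prototype, response clip name); values invented.
examples = [
    (np.array([ 1.5, 0.2]), "stumble_forward"),
    (np.array([-1.2, 0.1]), "step_back"),
    (np.array([ 0.1, 1.4]), "sidestep_left"),
]
SIGMA = 0.8   # assumed spread of each example's influence

def most_probable_response(state):
    def log_likelihood(proto):
        # Isotropic Gaussian log-likelihood (constants dropped).
        return -np.sum((state - proto) ** 2) / (2.0 * SIGMA**2)
    return max(examples, key=lambda e: log_likelihood(e[0]))[1]

# A push mostly along the first axis triggers the forward-stumble clip.
print(most_probable_response(np.array([1.3, 0.3])))
```

A real system would evaluate the learned model over a richer dynamic state (momenta, contact configuration) and blend into the chosen clip physically rather than switching discretely.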