Human motions are the product of internal and external forces, but these forces are very difficult to measure in a general setting. Given a motion capture trajectory, we propose a method to reconstruct its open-loop control and the implicit contact forces. The method employs a strategy based on randomized sampling of the control within user-specified bounds, coupled with forward dynamics simulation. Sampling-based techniques are well suited to this task because of their lack of dependence on derivatives, which are difficult to estimate in contact-rich scenarios. They are also easy to parallelize, which we exploit in our implementation on a compute cluster. We demonstrate reconstruction of a diverse set of captured motions, including walking, running, and contact-rich tasks such as rolls and kip-up jumps. We further show how the method can be applied to physically based motion transformation and retargeting, physically plausible motion variations, and reference-trajectory-free idling motions. Alongside the successes, we point out a number of limitations and directions for future work.
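The core loop of such a sampling-based reconstruction can be illustrated with a minimal sketch: at each step, draw random controls within the user-specified bounds, step the forward dynamics, and keep the candidate that best tracks the reference. This is a deliberately simplified toy (one control per step, greedy selection, scalar states); the actual method samples over control windows, keeps multiple candidates, and runs in parallel. All function and parameter names here are illustrative, not from the paper.

```python
import random

def reconstruct_control(reference, simulate, bounds, samples_per_step=200):
    """Greedy sampling-based reconstruction of an open-loop control.

    reference: list of target states, one per time step.
    simulate:  forward-dynamics step, simulate(state, u) -> next_state.
    bounds:    (lo, hi) user-specified control bounds.
    """
    lo, hi = bounds
    state, controls = reference[0], []
    for target in reference[1:]:
        # Draw random controls within bounds; keep the one whose simulated
        # next state lands closest to the reference trajectory.
        best_u, best_err = None, float("inf")
        for _ in range(samples_per_step):
            u = random.uniform(lo, hi)
            err = abs(simulate(state, u) - target)
            if err < best_err:
                best_u, best_err = u, err
        state = simulate(state, best_u)
        controls.append(best_u)
    return controls
```

Because each candidate evaluation is an independent forward simulation, the inner loop parallelizes trivially across cluster nodes, which is what makes the derivative-free approach practical.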
The difficulty of developing control strategies has been a primary bottleneck in the adoption of physics-based simulations of human motion. We present a method for learning robust feedback strategies around given motion capture clips as well as the transition paths between clips. The output is a control graph that supports real-time physics-based simulation of multiple characters, each capable of a diverse range of robust movement skills, such as walking, running, sharp turns, cartwheels, spin-kicks, and flips. The control fragments that comprise the control graph are developed using guided learning, which leverages the results of open-loop sampling-based reconstruction to produce state-action pairs; these are then transformed into a linear feedback policy for each control fragment using linear regression. Our synthesis framework allows for the development of robust controllers with a minimal amount of prior knowledge.
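The regression step that turns state-action pairs into a per-fragment linear feedback policy can be sketched as an ordinary least-squares fit of a ≈ M s + b. This is a simplified stand-in for the per-fragment regression described above (no state normalization or fragment bookkeeping), and all names are illustrative.

```python
import numpy as np

def fit_linear_policy(states, actions):
    """Fit a linear feedback policy a ~= M @ s + b from state-action pairs
    by ordinary least squares over one control fragment's samples."""
    S = np.asarray(states, dtype=float)          # (n, state_dim)
    A = np.asarray(actions, dtype=float)         # (n, action_dim)
    X = np.hstack([S, np.ones((len(S), 1))])     # append a bias column
    W, *_ = np.linalg.lstsq(X, A, rcond=None)    # solves X @ W ~= A
    M, b = W[:-1].T, W[-1]
    return M, b
```

At runtime the fragment's controller is then just `M @ state + b`, which is what makes real-time simulation of many characters feasible.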
We present a data-driven method for deformation capture and modeling of general soft objects. We adopt an iterative framework that consists of one component for physics-based deformation tracking and another for spacetime optimization of deformation parameters. Low cost depth sensors are used for the deformation capture, and we do not require any force-displacement measurements, thus making the data capture a cheap and convenient process. We augment a state-of-the-art probabilistic tracking method to robustly handle noise, occlusions, fast movements and large deformations. The spacetime optimization aims to match the simulated trajectories with the tracked ones. The optimized deformation model is then used to boost the accuracy of the tracking results, which can in turn improve the deformation parameter estimation itself in later iterations. Numerical experiments demonstrate that the tracking and parameter optimization components complement each other nicely. Our spacetime optimization of the deformation model includes not only the material elasticity parameters and dynamic damping coefficients, but also the reference shape, which can differ significantly from the static shape for soft objects. The resulting optimization problem is highly nonlinear in high dimensions, and challenging to solve with previous methods. We propose a novel splitting algorithm that alternates between reference shape optimization and deformation parameter estimation, and thus enables tailoring the optimization of each subproblem more efficiently and robustly. Our system enables realistic motion reconstruction as well as synthesis of virtual soft objects in response to user stimulation. Validation experiments show that our method is not only accurate, but also compares favorably to existing techniques. We also showcase the ability of our system with high quality animations generated from optimized deformation parameters for a variety of soft objects, such as live plants and fabricated models.
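The splitting strategy above is an instance of block-coordinate minimization: alternately optimize one block of variables (the reference shape) while holding the other (material and damping parameters) fixed. A minimal generic sketch, with illustrative names and user-supplied subproblem solvers standing in for the paper's tailored optimizers:

```python
def alternate_minimize(f, x0, y0, solve_x, solve_y, iters=10):
    """Block-coordinate (splitting) minimization of f(x, y).

    solve_x(y): minimizes f over x with y fixed (e.g. reference shape).
    solve_y(x): minimizes f over y with x fixed (e.g. elasticity/damping).
    """
    x, y = x0, y0
    for _ in range(iters):
        x = solve_x(y)   # reference-shape subproblem
        y = solve_y(x)   # deformation-parameter subproblem
    return x, y, f(x, y)
```

The payoff of splitting is that each subproblem can use a solver suited to its structure, rather than attacking the full highly nonlinear, high-dimensional problem at once.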
Complex mesh models of man-made objects often consist of multiple components connected by various types of joints. We propose a joint-aware deformation framework that supports the direct manipulation of an arbitrary mix of rigid and deformable components. First we apply slippable motion analysis to automatically detect multiple types of joint constraints that are implicit in model geometry. For single-component geometry or models with disconnected components, we support user-defined virtual joints. Then we integrate manipulation handle constraints, multiple components, joint constraints, joint limits, and deformation energies into a single volumetric-cell-based space deformation problem. An iterative, parallelized Gauss-Newton solver is used to solve the resulting nonlinear optimization. Interactive deformable manipulation is demonstrated on a variety of geometric models while automatically respecting their multi-component nature and the natural behavior of their joints.
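The Gauss-Newton iteration at the heart of such a solver can be stated compactly: linearize the residual and solve the normal equations for an update. The sketch below is a bare, dense, unconstrained version, omitting the parallelization, joint limits, and deformation-energy terms of the system described above; all names are illustrative.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Plain Gauss-Newton for min ||r(x)||^2.

    At each step, solve the linearized normal equations
    (J^T J) dx = -J^T r and update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r, J = residual(x), jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
    return x
```

In an interactive deformation setting, the residual stacks handle-constraint, joint-constraint, and energy terms, and the linear solve is where the per-cell parallelism pays off.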
Figure 1: Our Panda model runs and responds to external perturbations at interactive rates. Our Michelin model does Kung Fu moves.

In this paper we present a physics-based framework for simulation and control of human-like skeleton-driven soft body characters. We couple the skeleton dynamics and the soft body dynamics to enable two-way interactions between the skeleton, the skin geometry, and the environment. We propose a novel pose-based plasticity model that extends the corotated linear elasticity model to achieve large skin deformation around joints. We further reconstruct controls from reference trajectories captured from human subjects by augmenting a sampling-based algorithm. We demonstrate the effectiveness of our framework by results not attainable with a simple combination of previous methods.
We propose a sketch-based 3D shape retrieval system that is substantially more discriminative and robust than existing systems, especially for complex models. The power of our system comes from a combination of a contour-based 2D shape representation and a robust sampling-based shape matching scheme. They are defined over discriminative local features and applicable for partial sketches; robust to noise and distortions in hand drawings; and consistent when strokes are added progressively. Our robust shape matching, however, requires dense sampling and registration and incurs a high computational cost. We thus devise critical acceleration methods to achieve interactive performance: precomputing kNN graphs that record transformations between neighboring contour images and enable fast online shape alignment; pruning sampling and shape registration strategically and hierarchically; and parallelizing shape matching on multi-core platforms or GPUs. We demonstrate the effectiveness of our system through various experiments, comparisons, and user studies.
We propose a novel approach for designing mid-scale layouts by optimizing with respect to human crowd properties. Given an input layout domain such as the boundary of a shopping mall, our approach synthesizes the paths and sites by optimizing three metrics that measure crowd flow properties: mobility, accessibility, and coziness. While these metrics are straightforward to evaluate by a full agent-based crowd simulation, optimizing a layout usually requires hundreds of evaluations, which would take a long time to compute even using the latest crowd simulation techniques. To overcome this challenge, we propose a novel data-driven approach where nonlinear regressors are trained to capture the relationship between the agent-based metrics and the geometrical and topological features of a layout. We demonstrate that by using the trained regressors, our approach can synthesize crowd-aware layouts and improve existing layouts with better crowd flow properties.
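The key idea, replacing expensive simulation calls with a cheap learned surrogate during optimization, can be sketched with a simple regressor from layout features to a simulated crowd metric. For brevity the sketch uses ridge (linear) regression rather than the nonlinear regressors described above, and all names are illustrative.

```python
import numpy as np

def train_metric_regressor(features, metric_values, reg=1e-6):
    """Fit a cheap surrogate mapping layout features (e.g. geometric and
    topological descriptors) to a crowd metric measured by agent-based
    simulation. Returns a predictor usable inside a layout optimizer."""
    F = np.asarray(features, dtype=float)
    m = np.asarray(metric_values, dtype=float)
    X = np.hstack([F, np.ones((len(F), 1))])     # bias column
    # Regularized normal equations: (X^T X + reg I) w = X^T m
    w = np.linalg.solve(X.T @ X + reg * np.eye(X.shape[1]), X.T @ m)
    return lambda f: np.hstack([np.asarray(f, float),
                                np.ones((len(f), 1))]) @ w
```

During layout search, each candidate is scored by the surrogate in microseconds instead of a full crowd simulation, which is what makes hundreds of optimization evaluations tractable.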