We present a framework that enables the discovery of diverse and natural-looking motion strategies for athletic skills such as the high jump. The strategies are realized as control policies for physics-based characters. Given a task objective and an initial character configuration, the combination of physics simulation and deep reinforcement learning (DRL) provides a suitable starting point for automatic control policy training. To facilitate the learning of realistic human motions, we propose a Pose Variational Autoencoder (P-VAE) to constrain the actions to a subspace of natural poses. In contrast to motion imitation methods, a rich variety of novel strategies can naturally emerge by exploring initial character states through a sample-efficient Bayesian diversity search (BDS) algorithm. A second stage of optimization that encourages novel policies can further enrich the unique strategies discovered. Our method discovers diverse and novel strategies for athletic jumping motions such as high jumps and obstacle jumps, without motion examples and with less reward engineering than prior work.
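To illustrate the action-space idea, here is a minimal sketch (not the authors' code) of how a pose-VAE decoder could map a low-dimensional policy action to a full-body PD target pose; the class name, dimensions, and architecture are hypothetical placeholders.

```python
# Minimal sketch: a pose-VAE decoder constrains DRL actions to natural poses.
# All names (PoseDecoder, latent_dim, pose_dim) are hypothetical.
import torch
import torch.nn as nn

class PoseDecoder(nn.Module):
    """Decoder half of a pose VAE: low-dimensional latent action -> full-body joint targets."""
    def __init__(self, latent_dim=13, pose_dim=56, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ELU(),
            nn.Linear(hidden, hidden), nn.ELU(),
            nn.Linear(hidden, pose_dim),
        )

    def forward(self, z):
        return self.net(z)

decoder = PoseDecoder()        # in practice, load weights pre-trained on natural poses
action = torch.randn(1, 13)    # low-dimensional action sampled by the DRL policy
pd_target = decoder(action)    # decoded full-body pose used as the PD controller target
print(pd_target.shape)         # torch.Size([1, 56])
```

Because the policy can only emit points in the decoder's latent space, every PD target it produces lies in the learned subspace of natural poses.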
Learning cartwheels with spacetime bounds. The top green motion shows the reference, and the bottom yellow motions are simulations. The curves represent the Y position of the character's center of mass, and are colored to represent the reference (green), the simulations (yellow), and the spacetime bounds (red). The blue region illustrates the nonuniform feasible region under the given spacetime bounds. During training, episodes are terminated immediately once any spacetime bounds are violated, as shown in the bottom simulation.
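The termination rule described in the caption can be expressed compactly. Below is a minimal sketch, assuming a gym-style environment interface and per-frame reference states and bounds; `violates_bounds` and `rollout` are hypothetical helpers, not the paper's code.

```python
# Minimal sketch: terminate an episode as soon as the simulated state leaves
# the (possibly nonuniform) spacetime bounds around the reference motion.
import numpy as np

def violates_bounds(sim_state, ref_state, bound):
    """True if any state dimension deviates from the reference by more than
    the bound allowed at this frame."""
    return np.any(np.abs(sim_state - ref_state) > bound)

def rollout(env, policy, ref_states, bounds):
    state, ep_return = env.reset(), 0.0
    for ref, bound in zip(ref_states, bounds):
        state, reward, done, _ = env.step(policy(state))
        ep_return += reward
        if done or violates_bounds(state, ref, bound):
            break  # immediate early termination, as in the bottom simulation
    return ep_return
```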
We propose a novel technique to provide multiuser real walking experiences with physical interactions in virtual reality (VR) applications. In our system, multiple users walk freely while navigating a large virtual environment within a smaller physical workspace. These users can interact with other real users or physical props in the same physical locations. The key to our method is a redirected smooth mapping that incorporates the redirected walking technique to warp the input virtual scene with small bends and low distance distortion. Users possess a wide field of view to explore the mapped virtual environment while being redirected in the real workspace. To keep users out of regions where the mapped virtual scenes overlap, we present an automatic collision avoidance technique based on dynamic virtual avatars. These avatars naturally appear, move, and disappear, producing as little influence as possible on users' walking experiences. We evaluate our multiuser real walking system through formative user studies, and demonstrate the capability and practicability of our technique in two multiuser applications.
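For intuition on how redirected walking bends a virtual path, the sketch below applies a small curvature gain per real-world step; this is a generic illustration under assumed parameters, not the paper's mapping, and `redirect_step` and `curvature_gain` are hypothetical names.

```python
# Minimal sketch: a straight real-world walk is mapped to a gently curved
# virtual path by a small per-meter curvature gain, keeping distance
# distortion low while redirecting the user.
import numpy as np

def redirect_step(virtual_pos, virtual_heading, step_length, curvature_gain=0.05):
    """Advance the virtual viewpoint by one real step of `step_length` meters,
    bending the virtual heading by `curvature_gain` radians per meter."""
    virtual_heading += curvature_gain * step_length  # small, hard-to-notice bend
    direction = np.array([np.cos(virtual_heading), np.sin(virtual_heading)])
    return virtual_pos + step_length * direction, virtual_heading

pos, heading = np.zeros(2), 0.0
for _ in range(100):                  # 100 real steps of 0.7 m each
    pos, heading = redirect_step(pos, heading, 0.7)
print(pos, heading)
```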
In character animation, direction invariance is a desirable property. That is, a pose facing north and the same pose facing south are considered the same; a character that can walk to the north is expected to be able to walk to the south in a similar style. To achieve such direction invariance, the current practice is to remove the facing direction's rotation around the vertical axis before further processing. Such a scheme, however, is not robust for rotational behaviors in the sagittal plane. In search of a smooth scheme to achieve direction invariance, we prove that in general a singularity-free scheme does not exist. We further connect the problem with the hairy ball theorem, which is better known to the graphics community. Since a singularity-free scheme does not exist in general, we instead propose a remedy: a properly chosen motion direction that avoids singularities for the specific motions at hand. We perform comparative studies using two deep-learning-based methods, one that builds kinematic motion representations and one that learns physics-based controls. The results show that with our robust direction-invariant features, both methods achieve better results in terms of learning speed and/or final quality. We hope this paper can not only boost performance for character animation methods, but also help related communities currently not fully aware of the direction invariance problem to achieve more robust results.
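The sketch below shows the common "remove heading" preprocessing and the singularity it runs into; it assumes a Y-up convention with the character's local forward axis along +Z, and `remove_heading` is a hypothetical helper, not the paper's code.

```python
# Minimal sketch: remove the facing direction's rotation about the vertical
# axis. The heading is the yaw of the projected forward axis; when the forward
# axis points (nearly) straight up or down -- as happens during sagittal-plane
# rotations such as flips -- the projection vanishes and the heading is
# ill-defined, which is the singularity discussed above.
import numpy as np
from scipy.spatial.transform import Rotation as R

def remove_heading(root_rot: R):
    """Return (heading_angle, heading-free root rotation)."""
    forward = root_rot.apply([0.0, 0.0, 1.0])
    fx, fz = forward[0], forward[2]
    if fx * fx + fz * fz < 1e-8:      # forward axis ~ vertical: singular case
        heading = 0.0                  # any choice here is arbitrary
    else:
        heading = np.arctan2(fx, fz)
    heading_rot = R.from_rotvec([0.0, heading, 0.0])
    return heading, heading_rot.inv() * root_rot

root = R.from_euler("xyz", [10.0, 75.0, 5.0], degrees=True)
yaw, local = remove_heading(root)
print(np.degrees(yaw))
```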
Learning dexterous manipulation skills is a long-standing challenge in computer graphics and robotics, especially when the task involves complex and delicate interactions between the hands, tools, and objects. In this paper, we focus on chopsticks-based object relocation tasks, which are common yet demanding. The key to successful chopsticks skills is steady gripping of the sticks that also supports delicate maneuvers. We automatically discover physically valid chopsticks holding poses by Bayesian Optimization (BO) and Deep Reinforcement Learning (DRL), which works for multiple gripping styles and hand morphologies without the need for example data. Given as input the discovered gripping poses and the desired objects to be moved, we build physics-based hand controllers to accomplish relocation tasks in two stages. First, kinematic trajectories are synthesized for the chopsticks and hand in a motion planning stage. The key components of our motion planner include a grasping model to select suitable chopsticks configurations for grasping the object, and a trajectory optimization module to generate collision-free chopsticks trajectories. We then train physics-based hand controllers, again through DRL, to track the desired kinematic trajectories produced by the motion planner. We demonstrate the capabilities of our framework by relocating objects of various shapes and sizes, in diverse gripping styles and holding positions, for multiple hand morphologies. Our system achieves faster learning and better control robustness when compared to vanilla systems that attempt to learn chopsticks-based skills without a gripping pose optimization module and/or without a kinematic motion planner. Our code and models are available at this link.
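To make the gripping-pose discovery stage concrete, here is a minimal sketch of an outer Bayesian-optimization loop over holding-pose parameters, where each candidate is scored by a short DRL holding-controller run. The parameterization, bounds, and `train_holding_controller` function are hypothetical stand-ins, not the authors' setup.

```python
# Minimal sketch: Bayesian optimization over chopsticks holding-pose
# parameters, with a DRL-trained holding controller as the evaluator.
from skopt import gp_minimize

def train_holding_controller(pose_params):
    """Train/evaluate a physics-based holding controller for this gripping
    pose and return a score (e.g., negative stick slippage). Stub here."""
    ...
    return 0.0

def objective(pose_params):
    # gp_minimize minimizes, so negate the controller's score.
    return -train_holding_controller(pose_params)

# Hypothetical 6-D search space, e.g., finger placements and stick offsets.
bounds = [(-1.0, 1.0)] * 6
result = gp_minimize(objective, bounds, n_calls=30, random_state=0)
print(result.x, result.fun)
```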