Figure 1: Real-time physics-based simulation of walking. The method provides robust control across a range of gaits, styles, characters, and skills. Motions are easily authored by novice users.

Abstract
We present a control strategy for physically simulated walking motions that generalizes well across gait parameters, motion styles, character proportions, and a variety of skills. The control is real-time, requires no character-specific or motion-specific tuning, is robust to disturbances, and is simple to compute. The method works by integrating tracking, using proportional-derivative control; foot placement, using an inverted pendulum model; and adjustments for gravity and velocity errors, using Jacobian transpose control. High-level gait parameters allow for forwards-and-backwards walking, various walking speeds, turns, walk-to-stop, idling, and stop-to-walk behaviors. Character proportions and motion styles can be authored interactively, with edits resulting in the instant realization of a suitable controller. The control is further shown to generalize across a variety of walking-related skills, including picking up objects placed at any height, lifting and walking with heavy crates, pushing and pulling crates, stepping over obstacles, ducking under obstacles, and climbing steps.
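The two building blocks the abstract names, proportional-derivative joint tracking and inverted-pendulum foot placement, can be sketched in a few lines. This is a minimal illustration of the standard formulations, not the paper's implementation; the function names, gains, and the constant-height point-mass pendulum assumption are all our own.

```python
import math

def pd_torque(theta_target, theta, omega, kp=300.0, kd=30.0):
    """Proportional-derivative tracking: drive a joint angle theta toward
    theta_target while damping the joint velocity omega.
    Gains kp, kd are illustrative placeholders, not tuned values."""
    return kp * (theta_target - theta) - kd * omega

def ipm_foot_offset(v, h, g=9.81):
    """Inverted-pendulum-model foot placement: for a point mass at height h
    moving at horizontal speed v, place the swing foot this far ahead of
    the center of mass so the pendulum comes to rest over the new support
    point (energy balance of the linearized constant-height pendulum)."""
    return v * math.sqrt(h / g)

# Example: a character leaning 0.1 rad off its tracked pose, at rest,
# and a 1 m/s walk with the center of mass 0.981 m above the ground.
tau = pd_torque(0.1, 0.0, 0.0)
d = ipm_foot_offset(1.0, 0.981)
```

Faster walks or a perturbed, faster-moving center of mass yield a larger `v` and hence a longer step, which is what makes pendulum-based foot placement a natural balance-recovery rule.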
(a) Go-to-line (b) Heading (c) Go-to-point (d) Point-with-heading (e) Heading-and-speed (f) Very Robust Walk
Figure 1: We precompute task-specific control policies for real-time physics-based characters. The character moves efficiently towards the current goal, responds interactively to changes of the goal, and can respond to significant physical interaction with the environment.

Abstract
We present a method for precomputing robust task-based control policies for physically simulated characters. This allows for characters that can demonstrate skill and purpose in completing a given task, such as walking to a target location, while physically interacting with the environment in significant ways. As input, the method assumes an abstract action vocabulary consisting of balance-aware, step-based controllers. A novel constrained state exploration phase is first used to define a character dynamics model as well as a finite volume of character states over which the control policy will be defined. An optimized control policy is then computed using reinforcement learning. The final policy spans the cross-product of the character state and task state, and is more robust than the controllers it is constructed from. We demonstrate real-time results for six locomotion-based tasks and on three highly-varied bipedal characters. We further provide a game-scenario demonstration.
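The precomputation idea, optimizing one action choice per discretized state offline so that runtime control is a table lookup, can be illustrated with tabular value iteration, a standard reinforcement-learning method. The toy go-to-target task, the function names, and the hand-built transition and reward functions below are illustrative assumptions; the paper's actual state exploration, dynamics model, and step-based action vocabulary are far richer.

```python
def value_iteration(n_states, actions, step, reward, gamma=0.9, iters=200):
    """Tabular value iteration over a discretized state space.
    `step(s, a)` gives the successor state index, `reward(s, a)` the
    immediate reward. Returns a greedy policy: one action per state."""
    V = [0.0] * n_states
    for _ in range(iters):
        V = [max(reward(s, a) + gamma * V[step(s, a)] for a in actions)
             for s in range(n_states)]
    return [max(actions, key=lambda a: reward(s, a) + gamma * V[step(s, a)])
            for s in range(n_states)]

# Toy go-to-target task: five states on a line, target is state 0,
# actions move one cell left (-1) or right (+1).
def _step(s, a):
    return min(max(s + a, 0), 4)

def _reward(s, a):
    return 1.0 if _step(s, a) == 0 else 0.0

policy = value_iteration(5, [-1, 1], _step, _reward)
```

Once computed, `policy` is queried at each control step with no further optimization, which is what makes this kind of precomputed policy compatible with real-time simulation.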