2021
DOI: 10.1111/cgf.142630
Learning and Exploring Motor Skills with Spacetime Bounds

Abstract: Learning cartwheels with spacetime bounds. The top green motion shows the reference, and the bottom yellow motions are simulations. The curves represent the Y position of the character's center of mass, and are colored to represent the reference (green), the simulations (yellow), and the spacetime bounds (red). The blue region illustrates the nonuniform feasible region under the given spacetime bounds. During training, episodes are terminated immediately once any spacetime bounds are violated, as shown in the …
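The early-termination rule the abstract describes — end a training episode as soon as the simulated center-of-mass trajectory leaves the bounds around the reference — can be sketched as below. This is a minimal illustration, not the paper's implementation: the per-frame bound half-widths and the example trajectories are made-up placeholders (the paper's actual bounds are nonuniform and defined over the full state, not just COM height).

```python
# Sketch of spacetime-bounds early termination (hypothetical numbers).
ref_com_y = [1.0, 1.1, 1.3, 1.2, 1.0]     # reference COM height per frame
half_width = [0.2, 0.15, 0.1, 0.15, 0.2]  # allowed deviation per frame

def within_bounds(t, sim_com_y):
    """True if the simulated COM height stays inside the bound at frame t."""
    return abs(sim_com_y - ref_com_y[t]) <= half_width[t]

def run_episode(sim_traj):
    """Return the frame index at which the episode terminates.

    The episode ends immediately at the first frame that violates the
    spacetime bounds; otherwise it runs to completion.
    """
    for t, y in enumerate(sim_traj):
        if not within_bounds(t, y):
            return t  # early termination on bound violation
    return len(sim_traj)  # full episode survived
```

For example, a trajectory that overshoots the reference at the apex (where the feasible region is narrowest) terminates there, while one that tracks the reference within the bounds runs to the end.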

Cited by 16 publications (8 citation statements) · References 57 publications
“…We explore a chosen low-dimensional feature space (3–4D) of the take-off states for learning diverse jumping strategies. As shown by previous work [Ma et al 2021], the take-off moment is a critical point of jumping motions, where the volume of the feasible region of the dynamic skill is the smallest. In other words, bad initial states fail fast, which helps our exploration framework find good ones more quickly.…”
Section: DRL Formulation
confidence: 63%
“…With the wide availability of motion capture data, many research endeavors have been focused on tracking-based controllers, which are capable of reproducing high-quality motions by imitating motion examples. Controllers for a wide range of skills have been demonstrated through trajectory optimization [da Silva et al. 2008; Lee et al. 2010, 2014; Muico et al. 2009; Sok et al. 2007; Ye and Liu 2010a], sampling-based algorithms [Liu et al. 2010, 2015, 2016], and deep reinforcement learning [Liu and Hodgins 2018; Ma et al. 2021; Peng et al. 2017, 2018a, 2018b; Seunghwan Lee and Lee 2019]. Tracking controllers have also been combined with kinematic motion generators to support interactive control of simulated characters [Bergamin et al. 2019; Won et al. 2020].…”
Section: Character Animation
confidence: 99%
“…Secondly, they reduce the search space to sensible solutions only. Some local minima of poor recovery strategies are strongly discouraged from the start, which leads to more stable learning and faster convergence [38]. Numerical values are robot-specific.…”
Section: Termination Conditions
confidence: 99%
“…The user is only supposed to rectify it occasionally and the operational space is likely to be confined in practice. We restrict the odometry displacement over a time window, Δq_{x,y,ϕ}, instead of its instantaneous velocity [38]. It limits the drift while allowing a few recovery steps.…”
Section: Reference Dynamics
confidence: 99%
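The windowed-displacement restriction quoted above — bounding how far the base drifts over a recent time window rather than capping instantaneous velocity — can be sketched as follows. All names and thresholds here (window length, displacement limit) are illustrative assumptions, not values from the cited robot controller.

```python
from collections import deque

# Sketch of a windowed odometry-displacement bound (hypothetical values):
# allow short recovery steps, but flag the episode once the base drifts
# more than MAX_DISP over the last W control steps.
W = 5            # window length in control steps (assumed)
MAX_DISP = 0.5   # allowed planar displacement over the window (assumed)

history = deque(maxlen=W + 1)  # recent base positions (oldest dropped)

def displacement_ok(pos_xy):
    """Record the new base position; True while windowed drift is bounded."""
    history.append(pos_xy)
    oldest = history[0]
    dx = pos_xy[0] - oldest[0]
    dy = pos_xy[1] - oldest[1]
    return (dx * dx + dy * dy) ** 0.5 <= MAX_DISP
```

A small step each tick passes the check even though several steps accumulate, whereas a single large jump (or sustained drift) exceeds the windowed limit — which is exactly why this formulation tolerates a few recovery steps while still limiting drift.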