2022
DOI: 10.48550/arxiv.2205.01906
Preprint

ASE: Large-Scale Reusable Adversarial Skill Embeddings for Physically Simulated Characters

Xue Bin Peng,
Yunrong Guo,
Lina Halper
et al.

Abstract (excerpt): …embedding then enables the character to automatically synthesize complex and naturalistic strategies in order to achieve the task objectives. CCS Concepts: • Computing methodologies → Procedural animation; Control methods; Adversarial learning.


Cited by 4 publications (5 citation statements) | References 47 publications
“…In [112] and [113], the reward is defined as the distance to the goal in the learned latent space, where the goal is chosen as the final state in the demonstration trajectory. Recent works on motion imitation [114]- [116] carefully design the reward as a weighted distance to the reference state, taking joint orientations, joint velocities, end effectors, and centers of mass into account.…”
Section: Demo As Reference For Reward
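The weighted-distance imitation reward described in the quote above can be sketched as follows. This is a minimal illustration, not the implementation from [114]–[116]: the term weights, exponential scale factors, and dictionary keys are all hypothetical choices made for the example.

```python
import numpy as np

def imitation_reward(state, ref, weights=(0.65, 0.10, 0.15, 0.10)):
    """Hypothetical weighted imitation reward: each term exponentiates a
    negative squared distance between the simulated character and the
    reference motion, so a perfect match yields a reward of 1.0."""
    w_pose, w_vel, w_ee, w_com = weights
    pose_err = np.sum((state["joint_rot"] - ref["joint_rot"]) ** 2)  # joint orientations
    vel_err = np.sum((state["joint_vel"] - ref["joint_vel"]) ** 2)   # joint velocities
    ee_err = np.sum((state["end_eff"] - ref["end_eff"]) ** 2)        # end-effector positions
    com_err = np.sum((state["com"] - ref["com"]) ** 2)               # center of mass
    return (w_pose * np.exp(-2.0 * pose_err)
            + w_vel * np.exp(-0.1 * vel_err)
            + w_ee * np.exp(-40.0 * ee_err)
            + w_com * np.exp(-10.0 * com_err))
```

Because every term is bounded in (0, 1] and the weights sum to 1, the total reward stays in (0, 1], which keeps the scale of each component comparable when tuning the weights.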
“…Similar to AMP, Adversarial Skill Embedding (ASE) [4] also adopts a discriminator. However, unlike AMP, ASE proposes two stages: a pre-training stage for a low-level policy conditioned on the state and an embedded skill, and a transfer stage in which a high-level policy operates over the embedded skills.…”
Section: GAN In Character Animation
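The two-stage structure attributed to ASE in the quote above can be sketched schematically: a low-level policy conditioned on both the state and a latent skill, and a high-level policy that selects skills. This is only a toy sketch under assumed dimensions; ASE itself uses large neural networks, adversarial pre-training, and a learned skill space, none of which are reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration only.
STATE_DIM, SKILL_DIM, ACTION_DIM = 8, 4, 3

# Stand-ins for trained network weights.
W_low = rng.standard_normal((ACTION_DIM, STATE_DIM + SKILL_DIM)) * 0.1
W_high = rng.standard_normal((SKILL_DIM, STATE_DIM)) * 0.1

def low_level_policy(state, skill):
    # Pre-training stage: the low-level controller is conditioned on the
    # state AND a latent skill vector z, and outputs a bounded action.
    return np.tanh(W_low @ np.concatenate([state, skill]))

def high_level_policy(state):
    # Transfer stage: the task-level policy outputs a skill embedding
    # (normalized to the unit sphere) that drives the low-level controller.
    z = W_high @ state
    return z / np.linalg.norm(z)

state = rng.standard_normal(STATE_DIM)
action = low_level_policy(state, high_level_policy(state))
```

The key design point the quote highlights is the split of responsibilities: during transfer the low-level policy is reused as-is, and only the high-level policy is trained for the downstream task.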
“…To coordinate interactions without explicit programming, Generative Adversarial Imitation Learning [2] is an alternative form of imitation learning. Adversarial Motion Priors [3] and Adversarial Skill Embeddings [4] are two notable approaches that apply generative adversarial imitation learning to character animation and have shown remarkable results in generating natural motions. From the perspective of a generative model, both leverage Generative Adversarial Networks to provide information about the generated (fake) distribution.…”
Section: Introduction
“…Robots also struggle to generalize or adapt to other environments or tasks. To alleviate this problem, recent DRL studies have built on motion priors [86][87][88][89][90], which have been successfully applied to quadrupedal locomotion tasks [12,56,91]. However, the variety of motion priors in these studies is insufficient, and the robot's behavior is neither agile nor natural.…”
Section: Reuse Of Motion Priors Data