2020
DOI: 10.1145/3386569.3392422
Character controllers using motion VAEs

Fig. 1: Given example data, we learn an autoregressive conditional variational autoencoder (VAE) that predicts the next pose one frame at a time. A variety of task-specific control policies can then be learned on top of this model.

Abstract: A fundamental problem in computer animation is that of realizing purposeful and realistic human movement given a sufficiently-rich set of motion capture clips. We learn data-driven generative models of human movement using autoregressive conditional variational autoencoders, or Motio…
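The abstract describes an autoregressive model that generates motion one pose at a time, conditioned on the previous pose and a latent sample. A minimal sketch of that rollout loop, using a toy linear stand-in for the decoder (the dimensions, weights, and function names are illustrative assumptions, not the paper's actual network):

```python
import numpy as np

# Hedged sketch of the autoregressive rollout from the abstract: a decoder
# predicts the next pose from (previous pose, latent sample z), one frame
# at a time. The linear decoder and tiny dimensions are toy assumptions.

POSE_DIM = 4     # toy pose dimensionality (real skeletons use far more)
LATENT_DIM = 2   # toy latent dimensionality

rng = np.random.default_rng(0)
W_pose = 0.9 * np.eye(POSE_DIM)                        # stand-in decoder weights
W_z = rng.standard_normal((POSE_DIM, LATENT_DIM)) * 0.1

def decode(prev_pose, z):
    """Toy conditional decoder: next pose from (previous pose, latent)."""
    return W_pose @ prev_pose + W_z @ z

def rollout(initial_pose, num_frames):
    """Autoregressively generate a clip, sampling z ~ N(0, I) per frame."""
    poses = [initial_pose]
    for _ in range(num_frames):
        z = rng.standard_normal(LATENT_DIM)            # sample from the prior
        poses.append(decode(poses[-1], z))
    return np.stack(poses)

clip = rollout(np.zeros(POSE_DIM), num_frames=30)
print(clip.shape)  # (31, 4): initial pose plus 30 generated frames
```

At generation time the latent z is what a control policy can steer, which is how the task-specific policies in Fig. 1 plug in.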

Cited by 190 publications (143 citation statements) | References 39 publications
“…VAEs circumvent computational issues by using a variational and amortised (see Cremer et al [2018]) approximation of the likelihood for training. They have been applied to model controllable human locomotion [Habibie et al 2017; Ling et al 2020] and to generate head motion from speech [Greenwood et al 2017a,b]. Ling et al [2020] describe an autoregressive unconditional motion model based on VAEs, using a deterministic decoder based on the mixture-of-experts architecture from .…”
Section: Probabilistic Data-driven Motion Synthesis
confidence: 99%
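The excerpt above refers to the amortised variational approximation VAEs train with: an inference network outputs a Gaussian q(z|x) per sample, and training maximises the ELBO (reconstruction term minus a KL penalty). A minimal numerical sketch, where the closed-form KL is the standard one for a diagonal Gaussian against N(0, I) and the Gaussian-likelihood reconstruction term is an assumption for illustration:

```python
import numpy as np

# Hedged sketch of the ELBO objective: reconstruction - KL(q(z|x) || N(0,I)).
# The diagonal-Gaussian KL below is the standard closed form; using a plain
# squared error as the (unnormalised) log-likelihood is a toy assumption.

def kl_diag_gaussian(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ), summed over dimensions."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

def elbo(x, mu, log_var, x_recon):
    """ELBO with a Gaussian likelihood (up to constants): -MSE - KL."""
    recon = -np.sum((x - x_recon) ** 2)
    return recon - kl_diag_gaussian(mu, log_var)

# A perfect reconstruction with q(z|x) = N(0, I) gives KL = 0, so ELBO = 0.
x = np.array([1.0, 2.0])
print(elbo(x, mu=np.zeros(3), log_var=np.zeros(3), x_recon=x))  # 0.0
```

The amortisation is that one shared encoder produces (mu, log_var) for every x, rather than optimising a separate variational posterior per data point.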
“…They have been applied to model controllable human locomotion [Habibie et al 2017; Ling et al 2020] and to generate head motion from speech [Greenwood et al 2017a,b]. Ling et al [2020] describe an autoregressive unconditional motion model based on VAEs, using a deterministic decoder based on the mixture-of-experts architecture from . β-VAEs [Higgins et al 2016] are used to mitigate posterior collapse, while scheduled sampling is necessary to stabilise long-term motion generation.…”
Section: Probabilistic Data-driven Motion Synthesis
confidence: 99%
“…We attempt to mitigate these drawbacks in our proposed model by formulating the internal state and latent code in two separate channels and conditioning the latent code on the previous internal state to model the temporal dependencies during test time. Recently, Ling et al [LZCVDP20] proposed an interesting model based on VAEs where the motion is controlled by setting the latent code as the output of a deep reinforcement learning module. Unlike our method, they modelled the motion under a Markovian assumption, meaning that each pose only depends on the previous pose and the autoregressive model is memoryless.…”
Section: Related Work
confidence: 99%
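The control scheme this excerpt describes can be sketched as a closed loop: a learned policy maps the current pose and task state to a latent code z, and the frozen decoder maps (previous pose, z) to the next pose, so each pose depends only on its predecessor (the Markovian, memoryless property the excerpt notes). All weights and names below are toy stand-ins, not the paper's networks:

```python
import numpy as np

# Hedged sketch: policy(state) -> latent z; frozen decoder(prev_pose, z)
# -> next pose. The generator is Markovian because the next pose depends
# only on the previous pose and z. Linear stand-ins are toy assumptions.

POSE_DIM, LATENT_DIM, GOAL_DIM = 4, 2, 2
rng = np.random.default_rng(2)
W_policy = rng.standard_normal((LATENT_DIM, POSE_DIM + GOAL_DIM)) * 0.1
W_pose = 0.9 * np.eye(POSE_DIM)
W_z = rng.standard_normal((POSE_DIM, LATENT_DIM)) * 0.1

def policy(pose, goal):
    """Stand-in control policy: latent action from (pose, goal)."""
    return W_policy @ np.concatenate([pose, goal])

def decode(prev_pose, z):
    """Stand-in frozen VAE decoder: Markovian next-pose prediction."""
    return W_pose @ prev_pose + W_z @ z

pose, goal = np.zeros(POSE_DIM), np.ones(GOAL_DIM)
for _ in range(10):                      # closed-loop control rollout
    pose = decode(pose, policy(pose, goal))
print(pose.shape)  # (4,)
```

The design point: because the latent z is the policy's action space, reinforcement learning can steer the generator toward task goals without ever modifying the motion model itself.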
“…Mixture‐of‐Experts Approaches : Another strategy exploited in [SZKS19, SZKZ20, LZCVDP20] to address the problem of mean collapse in multi‐modal motion data is to use a Mixture‐of‐Experts (MoE) network where each expert is responsible for one mode in the training data. Though effective at mitigating mean collapse, the number of parameters in these networks increases with the number of experts.…”
Section: Related Work
confidence: 99%
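The Mixture-of-Experts idea in this excerpt can be sketched concretely: a gating network produces per-expert blend weights, the effective layer weights are the gated blend of the experts' weight matrices, and the parameter count grows linearly with the number of experts, which is the cost the excerpt points out. Dimensions and the softmax gating below are illustrative assumptions:

```python
import numpy as np

# Hedged sketch of a mixture-of-experts layer: softmax gating over experts,
# effective weights = blend of expert weight matrices. Parameter count
# scales with NUM_EXPERTS. All sizes here are toy assumptions.

NUM_EXPERTS, IN_DIM, OUT_DIM = 4, 6, 3
rng = np.random.default_rng(3)
expert_W = rng.standard_normal((NUM_EXPERTS, OUT_DIM, IN_DIM))  # one W per expert
gate_W = rng.standard_normal((NUM_EXPERTS, IN_DIM))             # gating network

def softmax(v):
    e = np.exp(v - v.max())              # shift for numerical stability
    return e / e.sum()

def moe_forward(x):
    """Blend expert weight matrices by the gating distribution, then apply."""
    alpha = softmax(gate_W @ x)                  # (NUM_EXPERTS,) blend weights
    W = np.tensordot(alpha, expert_W, axes=1)    # (OUT_DIM, IN_DIM) blended W
    return W @ x

y = moe_forward(rng.standard_normal(IN_DIM))
print(y.shape)        # (3,)
print(expert_W.size)  # 4 * 3 * 6 = 72: grows linearly with NUM_EXPERTS
```

Each expert can specialise in one mode of the motion data (e.g. one gait), which is how the gated blend avoids averaging distinct modes into an implausible mean pose.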