2021
DOI: 10.48550/arxiv.2110.10899
Preprint

LARNet: Latent Action Representation for Human Action Synthesis

Abstract: We present LARNet, a novel end-to-end approach for generating human action videos. Jointly modeling appearance and dynamics to synthesize a video is very challenging, so recent works in video synthesis have proposed decomposing these two factors. However, these methods require a driving video to model the video dynamics. In this work, we propose a generative approach instead, which explicitly learns action dynamics in latent space, avoiding the need for a driving video during inference. …
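The abstract describes conditioning video generation on an appearance image and a learned latent action representation rather than a driving video. Below is a minimal, purely illustrative PyTorch sketch of that general idea; every module name, dimension, and the simple fusion scheme are assumptions made for illustration and do not reproduce the LARNet architecture.

# Illustrative sketch only: generate a short video from one appearance frame
# plus a latent action code, instead of a driving video. Not the paper's model.
import torch
import torch.nn as nn


class AppearanceEncoder(nn.Module):
    """Encodes a single RGB frame into an appearance feature map."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, image):                      # (B, 3, H, W)
        return self.net(image)                     # (B, C, H/4, W/4)


class LatentActionDynamics(nn.Module):
    """Maps an action label to a sequence of latent dynamics codes,
    one per output frame, standing in for a driving video at inference."""
    def __init__(self, num_actions=10, z_dim=128, num_frames=16):
        super().__init__()
        self.embed = nn.Embedding(num_actions, z_dim)
        self.rnn = nn.GRU(z_dim, z_dim, batch_first=True)
        self.num_frames = num_frames

    def forward(self, action_ids):                 # (B,)
        z = self.embed(action_ids)                 # (B, z_dim)
        z_seq = z.unsqueeze(1).repeat(1, self.num_frames, 1)
        out, _ = self.rnn(z_seq)                   # (B, T, z_dim)
        return out


class VideoDecoder(nn.Module):
    """Fuses appearance features with per-frame dynamics codes, decodes frames."""
    def __init__(self, feat_dim=64, z_dim=128):
        super().__init__()
        self.fuse = nn.Conv2d(feat_dim + z_dim, feat_dim, 3, padding=1)
        self.up = nn.Sequential(
            nn.ConvTranspose2d(feat_dim, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat_dim, 3, 4, stride=2, padding=1), nn.Tanh(),
        )

    def forward(self, app_feat, dyn_seq):
        B, C, H, W = app_feat.shape
        frames = []
        for t in range(dyn_seq.shape[1]):
            # Broadcast the t-th dynamics code over the spatial grid and fuse.
            z_t = dyn_seq[:, t].view(B, -1, 1, 1).expand(-1, -1, H, W)
            x = torch.relu(self.fuse(torch.cat([app_feat, z_t], dim=1)))
            frames.append(self.up(x))              # (B, 3, H*4, W*4)
        return torch.stack(frames, dim=1)          # (B, T, 3, H*4, W*4)


if __name__ == "__main__":
    enc, dyn, dec = AppearanceEncoder(), LatentActionDynamics(), VideoDecoder()
    image = torch.randn(2, 3, 64, 64)              # single appearance frames
    action = torch.tensor([3, 7])                  # target action labels
    video = dec(enc(image), dyn(action))
    print(video.shape)                             # torch.Size([2, 16, 3, 64, 64])

The point of the sketch is only the interface: at inference the dynamics branch is driven by an action label in latent space, so no reference video is needed.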

Cited by 0 publications
References 45 publications