2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.01206
Behavior-Driven Synthesis of Human Dynamics

Cited by 10 publications (12 citation statements)
References 29 publications
“…Other works rely on a low-dimensional, parametric representation, e.g., keypoints, to transfer motion between videos [1,5] or to synthesize videos based on action labels [82]. Given such assumptions, these works cannot be universally applied to arbitrary object categories and allow only for coarse control compared to our fine-grained, local object manipulations.…”
Section: Related Work
confidence: 99%
“…Controlled Video Synthesis Model. The method of Hao et al. [27] is implemented based on the official code and the provided hyperparameters for all used datasets. We used their proposed procedure to construct the motion trajectories based on the same optical flow which we used to train our own model.…”
Section: Appendix Preliminaries
confidence: 99%
“…To reduce complexity, existing learning-based approaches often focus on modelling human dynamics using low-dimensional, parametric representations such as keypoints [1,87,3], thus preventing universal applicability. Moreover, as these approaches are either based on explicit action labels or require motion sequences as input, they cannot be applied to controlling single body parts.…”
Section: Related Work
confidence: 99%
“…We provide four videos (poking_plants_1.mp4 to poking_plants_4.mp4) for the PokingPlants dataset showing distinct types of plants of substantially different shapes and appearances. Despite these large variances, our model generates realistic and appealing visualizations which are plausible responses to the poke.…”
Section: A1 PokingPlants
confidence: 99%