Proceedings of the 28th ACM International Conference on Multimedia 2020
DOI: 10.1145/3394171.3413669

Dynamic Future Net

Abstract: Figure 1: Given a 20-frame walking-motion prefix (white), our model can generate diversified motions of arbitrary duration: walking (yellow), walking-to-running (blue), walking-to-boxing (green), and walking-to-dancing (red). The corresponding animation can be found in teaser.mp4 in the supplementary video.
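The setup the caption describes (a short pose prefix in, an open-ended stream of poses out, with fresh randomness at each step driving the diversity) can be made concrete with a small sketch. The MotionSampler class, its shapes, and the additive-delta decoding below are illustrative assumptions, not the paper's implementation:

```python
# Minimal sketch of prefix-conditioned, open-ended motion generation in the
# spirit of Dynamic Future Net (hypothetical interfaces, not the authors' code).
import torch
import torch.nn as nn

class MotionSampler(nn.Module):
    """Autoregressive pose generator: consumes a pose prefix, then samples
    one pose per step from a latent-conditioned decoder (illustrative only)."""
    def __init__(self, pose_dim=63, hidden_dim=256, latent_dim=32):
        super().__init__()
        self.encoder = nn.LSTM(pose_dim, hidden_dim, batch_first=True)
        self.decoder = nn.Linear(hidden_dim + latent_dim, pose_dim)
        self.latent_dim = latent_dim

    @torch.no_grad()
    def generate(self, prefix, n_frames):
        # prefix: (batch, 20, pose_dim) -- e.g. a 20-frame walking clip
        out, state = self.encoder(prefix)
        pose = prefix[:, -1]                      # last observed pose
        frames = []
        for _ in range(n_frames):                 # arbitrary duration
            z = torch.randn(pose.size(0), self.latent_dim)  # per-step noise
            pose = pose + self.decoder(torch.cat([out[:, -1], z], dim=-1))
            frames.append(pose)
            out, state = self.encoder(pose.unsqueeze(1), state)
        return torch.stack(frames, dim=1)         # (batch, n_frames, pose_dim)

model = MotionSampler()
prefix = torch.randn(1, 20, 63)                   # stand-in for real mocap data
motion = model.generate(prefix, n_frames=120)
```

Because a new latent z is drawn at every step, repeated calls on the same prefix yield different continuations, and the sampling loop runs for any requested duration.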

Cited by 17 publications (2 citation statements) | References 27 publications
“…Furthermore, it is expected that the performance can be further improved through attempts such as using the transformer-based model proposed in Vaswani et al (2017). Finally, we would like to conduct further research on a one-to-many generative model so that a single human behavior can be mapped to multiple robot responses by referring to Sun et al (2020), Chen et al (2020), Yu and Tapus (2020), and other studies. If the robot can learn various behavioral policies considering the user’s age or cultural differences, more natural human–robot interaction can be obtained.…”
Section: Discussion
confidence: 99%
“…This model consists of an RNN (comprised of LSTM cells) for motion synthesis combined with an adversarial neural network (similar to a GAN) for "refining" the produced motion (control) so as to be identical to the reference input motion. A new deep learning model, called Dynamic Future Net (DFN) [72], was developed to produce diversified motion (walking-to-running, walking-to-dancing, etc.) with arbitrary duration, given a short-length pose sequence (e.g.…”
Section: Diversified Motion Synthesis
confidence: 99%
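The adversarial-refinement pattern described in that statement, an RNN generator paired with a GAN-style critic that scores motion windows for realism, can be sketched briefly. The MotionCritic class, its shapes, and the loss snippet below are assumptions for illustration, not the cited paper's code:

```python
# Illustrative GAN-style critic for motion windows: it learns to separate
# reference mocap clips from generated ones, and its score supplies the
# adversarial signal used to refine the RNN generator's output.
import torch
import torch.nn as nn

class MotionCritic(nn.Module):
    """Scores a motion clip as real (reference mocap) vs. generated."""
    def __init__(self, pose_dim=63, hidden_dim=128):
        super().__init__()
        self.rnn = nn.GRU(pose_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, 1)

    def forward(self, motion):
        # motion: (batch, frames, pose_dim)
        _, h = self.rnn(motion)
        return self.head(h[-1])                   # one realism logit per clip

critic = MotionCritic()
bce = nn.BCEWithLogitsLoss()
real = torch.randn(4, 30, 63)                     # stand-ins for mocap windows
fake = torch.randn(4, 30, 63)                     # stand-ins for generated motion
# Critic objective: push real clips toward 1 and generated clips toward 0;
# the generator is then trained to maximize the critic's score on its output.
loss = bce(critic(real), torch.ones(4, 1)) + bce(critic(fake), torch.zeros(4, 1))
```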