Proceedings of the 2021 International Conference on Multimodal Interaction 2021
DOI: 10.1145/3462244.3479961
Attention-based Multimodal Feature Fusion for Dance Motion Generation

Cited by 1 publication (2 citation statements)
References 5 publications
“…In this section we describe the applied experimental setup along with the different experimental scenarios that we employed for evaluating the performance of our proposed multimodal DCHC autoencoder in terms of realism and style consistency. Herein we extend our previous study [32] by training the considered multimodal architecture based on a curriculum learning strategy. Furthermore we provide a thorough assessment of our proposal by employing 2 related state-of-the-art frameworks, in order to generate numerous motion sequences that were used to conduct qualitative, quantitative and subjective evaluations.…”
Section: Audio-informed Dance Synthesis Evaluation
Confidence: 83%
“…Furthermore, by employing an attention mechanism we fuse the latent representations of past skeletal poses and audio features, in order to stochastically generate novel, variable and complex motion patterns, enhancing the overall creativity of our system. Extending our previous work [32], the main contributions of this paper are summarized as follows:…”
Section: Introduction
Confidence: 92%
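The excerpt above describes fusing latent representations of past skeletal poses and audio features with an attention mechanism. As a rough illustration of that idea, the sketch below uses scaled dot-product attention with pose latents as queries and audio latents as keys/values, then concatenates the attended audio context onto the pose latents. All names, shapes, and the concatenation step are assumptions for illustration; the paper's actual architecture may differ.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_fuse(pose_latents, audio_latents):
    """Hypothetical attention-based fusion of pose and audio latents.

    pose_latents:  (T_pose, d)  latents of past skeletal poses (queries)
    audio_latents: (T_audio, d) audio-feature latents (keys and values)
    Returns a (T_pose, 2*d) fused representation.
    """
    d_k = audio_latents.shape[-1]
    # Scaled dot-product attention scores over audio frames
    scores = pose_latents @ audio_latents.T / np.sqrt(d_k)   # (T_pose, T_audio)
    weights = softmax(scores, axis=-1)                       # rows sum to 1
    context = weights @ audio_latents                        # (T_pose, d)
    # Fuse by concatenating each pose latent with its attended audio context
    return np.concatenate([pose_latents, context], axis=-1)

rng = np.random.default_rng(0)
pose = rng.standard_normal((8, 16))    # 8 past-pose latents, dim 16
audio = rng.standard_normal((32, 16))  # 32 audio-feature latents, dim 16
fused = attention_fuse(pose, audio)
print(fused.shape)  # (8, 32)
```

In practice the fused representation would feed a decoder that generates the next motion frames; here the stochasticity mentioned in the excerpt would come from the generative model, not from this deterministic fusion step.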