2022
DOI: 10.48550/arxiv.2201.04439
Preprint

Real-Time Style Modelling of Human Locomotion via Feature-Wise Transformations and Local Motion Phases

Abstract: Controlling the manner in which a character moves in a real-time animation system is a challenging task with useful applications. Existing style transfer systems require access to a reference content motion clip; however, in real-time systems the future motion content is unknown and liable to change with user input. In this work we present a style modelling system that uses an animation synthesis network to model motion content based on local motion phases. An additional style modulation network uses feature-w…
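The feature-wise transformations the abstract refers to are typically implemented as feature-wise linear modulation: a style network predicts a per-channel scale and shift that is applied to the content features. A minimal sketch under that assumption (the `film` function and the hard-coded `gamma`/`beta` values below are illustrative, not taken from the paper):

```python
# Minimal sketch of a feature-wise transformation (FiLM-style modulation).
# Each feature channel is scaled and shifted by style-conditioned parameters.

def film(features, gamma, beta):
    """Apply out_i = gamma_i * x_i + beta_i to each feature channel."""
    return [g * x + b for x, g, b in zip(features, gamma, beta)]

# In a full system, a style modulation network would map a style code to the
# per-channel gamma and beta; here they are hard-coded for illustration.
content_features = [0.5, -1.0, 2.0]
gamma = [1.0, 0.0, 2.0]   # per-channel scale (hypothetical values)
beta = [0.1, 0.3, -0.5]   # per-channel shift (hypothetical values)

print(film(content_features, gamma, beta))  # [0.6, 0.3, 3.5]
```

Because the modulation is a cheap per-channel affine map, it can be applied every frame on top of the synthesis network's features, which is what makes this style of conditioning attractive for real-time use.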

Cited by 4 publications (6 citation statements)
References: 36 publications
“…KM-CGs uses keyframes from the source domain as context constraints and employs Manifold Constrained Gradient (Chung et al. 2022) to enforce these constraints during the second phase of DDIBs. Our experiments and quantitative analysis on the 100STYLE (Mason, Starke, and Komura 2022) locomotion database and the AIST++ (Tsuchida et al. 2019) dance database find that KM-CGs achieves successful style transfer and better content preservation, reflected in the cycle-consistency property on these two motion datasets and in two probabilistic divergence-based metrics. Additionally, our human study, as a subjective evaluation, finds that samples generated from KM-CGs are preferred in general.…”
Section: Introduction (mentioning)
confidence: 78%
“…With per-frame user controls, the algorithm generates new stylized motions on the fly, in an autoregressive fashion. A similar work on this topic is [12].…”
Section: Introduction (mentioning)
confidence: 90%
“…Henter et al. [2020] propose another generative model for motion based on normalizing flows. Neural networks succeed in a variety of motion generation tasks: motion retargeting [Aberman et al. 2019, 2020a; Villegas et al. 2018], motion style transfer [Aberman et al. 2020b; Mason et al. 2022], keyframe-based motion generation [Harvey et al. 2020], motion matching [Holden et al. 2020], and animation layering [Starke et al. 2021]. It is worth noting that the success of deep learning methods hinges upon large and comprehensive mocap datasets.…”
Section: Related Work (mentioning)
confidence: 99%