Computer Graphics Forum, 2019
DOI: 10.1111/cgf.13832
Towards Robust Direction Invariance in Character Animation

Abstract: In character animation, direction invariance is a desirable property. That is, a pose facing north and the same pose facing south are considered the same; a character that can walk to the north is expected to be able to walk to the south in a similar style. To achieve such direction invariance, the current practice is to remove the facing direction's rotation around the vertical axis before further processing. Such a scheme, however, is not robust for rotational behaviors in the sagittal plane. In search of a …
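The heading-removal scheme the abstract describes can be sketched in a few lines. The conventions below (Y up, root forward along the local Z axis) and the function name are illustrative assumptions, not the paper's implementation; the point is simply that a pose and its yaw-rotated copy map to the same features:

```python
import numpy as np

def remove_heading(root_rot, joint_pos):
    """Remove yaw (rotation about the vertical Y axis) from a pose.

    root_rot  : 3x3 root rotation matrix, world frame, Y up
    joint_pos : (J, 3) joint positions relative to the root, world frame
    Returns the heading-free root rotation and joint positions.
    """
    # Project the root's forward axis (column 2, the local Z axis)
    # onto the ground plane to estimate the facing direction.
    fwd = root_rot[:, 2].copy()
    fwd[1] = 0.0
    n = np.linalg.norm(fwd)
    if n < 1e-8:
        # Facing direction is (nearly) vertical: the heading is
        # undefined -- the singularity the paper analyzes.
        raise ValueError("heading undefined: facing direction is vertical")
    fwd /= n
    yaw = np.arctan2(fwd[0], fwd[2])
    c, s = np.cos(-yaw), np.sin(-yaw)
    # Inverse yaw rotation about Y, applied to the root and all joints.
    R_inv = np.array([[c,   0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s,  0.0, c]])
    return R_inv @ root_rot, joint_pos @ R_inv.T
```

With this, a pose facing north and the same pose facing south produce identical outputs, which is exactly the direction invariance the abstract asks for; the `ValueError` branch marks the sagittal-plane failure case the paper targets.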

Cited by 9 publications (8 citation statements). References 38 publications.
“…In principle, we could also use direction-invariant features as in DeepMimic, and include the relative transformation to the obstacle into the feature set. However, as proved in [Ma et al 2019], there are no direction-invariant features that are always singularity free. Direction-invariant features change wildly whenever the character's facing direction approaches the chosen motion direction, which is usually the global up-direction or the 𝑌-axis.…”
Section: DRL Formulation (mentioning)
confidence: 97%
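The singularity described in this excerpt can be seen numerically: near the vertical, the ground-plane projection of the facing direction shrinks toward zero, so tiny perturbations of the input flip the extracted heading by large angles. A minimal sketch in plain NumPy (the heading convention here, yaw from a Y-up projection, is an assumption rather than the cited papers' exact feature set):

```python
import numpy as np

def heading(fwd):
    """Yaw angle of a facing direction `fwd` (Y up), via ground-plane projection."""
    return np.arctan2(fwd[0], fwd[2])

# Two facing directions, both within about 0.6 degrees of straight up (+Y),
# differing only by a tiny horizontal perturbation.
eps = 0.01
a = heading(np.array([ eps, 1.0, eps]))   # tilted slightly toward +X
b = heading(np.array([-eps, 1.0, eps]))   # tilted slightly toward -X
# The two inputs are nearly identical, yet the extracted headings
# differ by 90 degrees (+45 vs -45): the feature "changes wildly".
print(np.degrees(a), np.degrees(b))
```

This is the behavior the quoted passage calls out: any yaw-based direction-invariant feature becomes discontinuous as the facing direction approaches the vertical axis.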
“…Even with abundant weights and extensive training, they still occasionally produce big errors. In previous work, the efficiency of deep learning approaches has been found to be highly dependent upon the quality of the dataset and the representation used [Ma et al 2019; Xiang and Li 2020; Zhou et al 2019], due to the fact that the network learns motion patterns from the data. A rich encoding of motion, based on a concrete mathematical model, encourages the network to properly disentangle complex characteristics of motion which are present in the data.…”
Section: Related Work (mentioning)
confidence: 99%
“…Direction Invariant Features Within the skeleton-based action recognition literature, some works use raw joint positions as input [42]; some transform joint positions to a semi-local coordinate system close to the character [29, 31, 14]. We adopt the principled way of transforming input features into Direction Invariant Features (DIF) described in [24]. Our experiments show that this simple preprocessing alone can boost performance by a large margin.…”
Section: Related Work (mentioning)
confidence: 99%
“…Various input features have been employed in the literature, such as global joint positions [42], and joint positions transformed into a semi-local coordinate system where the origin is defined by the character's position in the first frame of the clip, and the axes defined by the shoulder joints and the gravitational direction [31]. We follow the principled way of calculating Direction Invariant Features (DIF) proposed in [24], which outperforms global features for various tasks in character animation.…”
Section: DIF Feature Calculation and Data Preprocessing (mentioning)
confidence: 99%