2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00554

AMASS: Archive of Motion Capture As Surface Shapes

Abstract: Large datasets are the cornerstone of recent advances in computer vision using deep learning. In contrast, existing human motion capture (mocap) datasets are small and the motions limited, hampering progress on learning models of human motion. While there are many different datasets available, they each use a different parameterization of the body, making it difficult to integrate them into a single meta dataset. To address this, we introduce AMASS, a large and varied database of human motion that unifies 15 different optical marker-based mocap datasets by representing them within a common framework and parameterization.

Cited by 785 publications (535 citation statements)
References 33 publications
Citing years: 2019–2024

Citation statements, ordered by relevance:
“…To do so, we make several significant improvements over SMPLify. Specifically, we learn a new, and better performing, pose prior from a large dataset of motion capture data [47,50] using a variational auto-encoder. This prior is critical because the mapping from 2D features to 3D pose is ambiguous.…”
Section: Introduction
confidence: 99%
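The "pose prior ... using a variational auto-encoder" mentioned in this statement is likely the VPoser prior introduced with SMPLify-X: a VAE is trained on a large corpus of mocap poses, and its latent space then regularizes fitting. Below is a minimal, hedged sketch of such a prior in PyTorch; the class name, layer sizes, latent dimension, and KL weight are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class PosePriorVAE(nn.Module):
    """Illustrative VAE pose prior over axis-angle body poses (names assumed)."""
    def __init__(self, n_joints=21, latent_dim=32):
        super().__init__()
        in_dim = n_joints * 3  # 3 axis-angle DoF per joint
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.LeakyReLU(),
            nn.Linear(256, 256), nn.LeakyReLU(),
        )
        self.mu = nn.Linear(256, latent_dim)
        self.logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.LeakyReLU(),
            nn.Linear(256, in_dim),
        )

    def forward(self, pose):
        h = self.encoder(pose)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
        return self.decoder(z), mu, logvar

def vae_loss(recon, pose, mu, logvar, kl_weight=1e-3):
    """Reconstruction plus KL to a standard normal; the weight is a tunable assumption."""
    rec = ((recon - pose) ** 2).sum(dim=-1).mean()
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=-1)).mean()
    return rec + kl_weight * kl
```

At fitting time, a trained prior of this kind penalizes improbable poses, for example by adding the squared norm of the encoded latent code to the objective, which helps disambiguate the one-to-many 2D-to-3D mapping the statement mentions.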
“…We employ our new quantitative dataset with mesh pseudo ground-truth based on Vicon and MoSh++ [41], as described in Section 4. The first row with only E_J is an RGB-only baseline similar to SMPLify-X [49], which we adapt to our needs by using a fixed camera and estimating body translation γ, and gives the biggest "PJE" and "V2V" error.…”
Section: (A)
confidence: 99%
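The "PJE" and "V2V" metrics referenced above are, under their common definitions, the mean per-joint and mean vertex-to-vertex Euclidean distances against ground truth (here, meshes recovered by MoSh++ from Vicon markers). A small sketch, assuming those definitions and illustrative array shapes:

```python
import numpy as np

def per_joint_error(pred_joints, gt_joints):
    """Mean Euclidean distance over joints ("PJE"); inputs are (N, J, 3) in meters."""
    return np.linalg.norm(pred_joints - gt_joints, axis=-1).mean()

def vertex_to_vertex_error(pred_verts, gt_verts):
    """Mean per-vertex distance ("V2V") between meshes in correspondence,
    e.g. predictions vs. MoSh++ pseudo ground-truth meshes; inputs are (N, V, 3)."""
    return np.linalg.norm(pred_verts - gt_verts, axis=-1).mean()
```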
“…To find such poses, we use 3D MoCap datasets [43,44,45] that capture 3D MoCap marker positions, glued onto the skin surface of real human subjects. We then employ MoSh [16,17], which fits our body model to these 3D markers by optimizing over the body model's parameters for articulated pose, translation, and shape. The pose specifically is a vector of axis-angle parameters that describes how to rotate each body part around its corresponding skeleton joint.…”
Section: Human Body Generation
confidence: 99%
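The MoSh fitting step this statement describes is an optimization over body-model parameters so that model-predicted marker locations match the observed 3D markers. A simplified sketch follows, assuming a differentiable body model (e.g. SMPL) wrapped as a hypothetical `body_model(pose, betas) -> (M, 3)` marker predictor; the quadratic priors and step count are assumptions, and the real MoSh/MoSh++ additionally optimizes marker placement on the body surface.

```python
import torch

def fit_to_markers(body_model, markers, n_joints=21, n_betas=10, steps=200):
    """Fit pose (axis-angle per joint), shape, and translation to one frame
    of observed 3D mocap markers, given as a (M, 3) tensor."""
    pose = torch.zeros(n_joints * 3, requires_grad=True)   # axis-angle parameters
    betas = torch.zeros(n_betas, requires_grad=True)       # shape coefficients
    trans = torch.zeros(3, requires_grad=True)             # root translation
    opt = torch.optim.Adam([pose, betas, trans], lr=0.05)
    for _ in range(steps):
        opt.zero_grad()
        pred = body_model(pose, betas) + trans             # predicted marker locations
        loss = ((pred - markers) ** 2).sum()               # marker data term
        loss = loss + 1e-3 * (pose ** 2).sum() + 1e-2 * (betas ** 2).sum()  # assumed priors
        loss.backward()
        opt.step()
    return pose.detach(), betas.detach(), trans.detach()
```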
“…We then place humans on random indoor backgrounds and simulate human activities such as running, walking, and dancing using motion capture data [16,17]. Thus, we create a large virtual dataset that captures the statistics of natural human motion in multi-person scenarios.…”
Section: Introduction
confidence: 99%
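The synthetic-data pipeline sketched in this statement amounts to sampling a mocap-driven body pose, rendering it, and compositing it onto a random background. A hedged illustration is below; `render_body` is a hypothetical stand-in for an actual renderer returning an RGBA image, and the compositing uses Pillow.

```python
import random
from PIL import Image

def composite_frame(background_paths, mocap_poses, render_body):
    """Paste one rendered, posed body onto a random indoor background.
    `render_body` is hypothetical: pose -> RGBA PIL image with transparency."""
    bg = Image.open(random.choice(background_paths)).convert("RGB")
    person = render_body(random.choice(mocap_poses))
    x = random.randint(0, max(0, bg.width - person.width))
    y = random.randint(0, max(0, bg.height - person.height))
    bg.paste(person, (x, y), mask=person)  # the alpha channel drives compositing
    return bg
```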