2019
DOI: 10.48550/arxiv.1907.01481
Preprint
HOnnotate: A method for 3D Annotation of Hand and Object Poses

Cited by 1 publication (4 citation statements) · References 0 publications
“…The major challenge in using diverse datasets is that their annotation types vary. For example, ground-truth joint angle parameters are available in the FreiHand [63] and HO-3D [12] datasets, while others do not contain them. Furthermore, the details of hand annotations, including skeleton hierarchy and scale, also differ across datasets.…”
Section: 3D Hand Estimation Module
Confidence: 99%
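One common way to reconcile the skeleton and scale differences the statement describes is to re-express every dataset's joints in root-relative coordinates and rescale by a reference bone length. The sketch below is a generic illustration of that idea, not the cited paper's actual module; the root index and the reference bone pair are illustrative assumptions.

```python
import numpy as np

def normalize_hand_joints(joints: np.ndarray, root: int = 0,
                          ref_bone: tuple = (0, 9)) -> np.ndarray:
    """Return root-relative joints scaled so the reference bone has length 1."""
    rel = joints - joints[root]                # wrist-centred coordinates
    bone = rel[ref_bone[1]] - rel[ref_bone[0]]
    return rel / np.linalg.norm(bone)          # dataset-independent scale

# Toy 21-joint hand with arbitrary coordinates and scale.
joints = np.arange(63, dtype=float).reshape(21, 3)
unified = normalize_hand_joints(joints)
```

After this step, joints annotated with different skeleton conventions and metric scales become directly comparable, which is the usual prerequisite for training a single model on mixed datasets.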
“…HO-3D. The HO-3D dataset [12] aims to study the interaction between hands and objects. It provides 3D joints and MANO pose parameters for hands, as well as 3D bounding boxes for the objects the hands interact with.…”
Section: Datasets
Confidence: 99%
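The three annotation types the statement attributes to HO-3D can be pictured with a minimal container. This is a sketch only: the field names and array shapes (21 hand joints, 48 axis-angle MANO pose parameters, 8 bounding-box corners) are illustrative assumptions, not the dataset's actual file schema.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class HandObjectSample:
    """One annotated frame: hand joints, MANO pose, object box corners."""
    hand_joints_3d: np.ndarray    # (21, 3) 3D joint positions
    mano_pose: np.ndarray         # (48,) axis-angle MANO pose parameters
    obj_corners_3d: np.ndarray    # (8, 3) corners of the object's 3D box

    def is_valid(self) -> bool:
        # Shape checks for the three annotation types.
        return (self.hand_joints_3d.shape == (21, 3)
                and self.mano_pose.shape == (48,)
                and self.obj_corners_3d.shape == (8, 3))

sample = HandObjectSample(np.zeros((21, 3)), np.zeros(48), np.zeros((8, 3)))
```

Keeping hand and object annotations in one record like this makes it straightforward to iterate over frames when studying hand-object interaction.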