2019 IEEE/CVF International Conference on Computer Vision (ICCV) 2019
DOI: 10.1109/iccv.2019.00235
Optimizing Network Structure for 3D Human Pose Estimation

Cited by 198 publications (149 citation statements). References 21 publications.
“…Specifically, we choose a threshold of 150 mm when computing PCK. Table 3 shows the quantitative results across different scenes, in which our results are closest to the method of [27]. In outdoor scenes, our result is 5.7% higher than that of Chang et al. [44].…”
Section: For MPI-INF-3DHP
confidence: 60%
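The PCK metric quoted above is straightforward to compute: the fraction of joints whose Euclidean error falls below a threshold (150 mm in the MPI-INF-3DHP setting). A minimal sketch, with hypothetical array shapes:

```python
import numpy as np

def pck(pred, gt, threshold=150.0):
    """Percentage of Correct Keypoints: fraction of joints whose
    Euclidean error is below `threshold` (mm). Arrays are assumed
    to be (n_frames, n_joints, 3) in millimeters."""
    err = np.linalg.norm(pred - gt, axis=-1)  # per-joint error in mm
    return float((err < threshold).mean())

# Toy usage: every joint offset by 100 mm along x, well under 150 mm.
gt = np.zeros((2, 17, 3))
pred = gt + np.array([100.0, 0.0, 0.0])
print(pck(pred, gt))  # 1.0
```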
“…On average, our approach reduces estimation error by 6.8 millimeters under Protocol I compared to the method of [13]. We are also aware that our method does not outperform methods based on the weight non-sharing mechanism in graphs [27], because weight non-sharing increases the model's complexity and representation capability.…”
Section: For Human3.6M
confidence: 91%
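The millimeter error figure in the snippet above is the standard MPJPE metric (Mean Per-Joint Position Error). A minimal sketch, assuming joint arrays in millimeters:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: average Euclidean distance (mm)
    between predicted and ground-truth joint positions."""
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy usage: every joint offset by a 3-4-5 right triangle, i.e. 5 mm.
gt = np.zeros((4, 17, 3))              # 4 frames, 17 joints, xyz in mm
pred = gt + np.array([3.0, 4.0, 0.0])
print(mpjpe(pred, gt))  # 5.0
```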
“…These approaches either utilize example-based refinement or rely on very strong assumptions, including scaled orthographic or calibrated perspective cameras [4], [5]. Recently, inspired by the success of deep convolutional neural networks (DCNNs), many DCNN-based 3D human pose estimation methods have been proposed [6]–[10]. These methods can be roughly divided into two categories: one-stage models and two-stage models.…”
Section: Introduction
confidence: 99%
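The one-stage vs. two-stage split described in the last snippet can be illustrated with stand-in linear maps (random matrices here, purely hypothetical — real systems use a trained 2D detector and a learned 2D-to-3D lifting network). One-stage models regress 3D joints directly from image features; two-stage models first predict 2D joints, then lift them to 3D:

```python
import numpy as np

rng = np.random.default_rng(0)
J = 17  # number of body joints

image_feat = rng.normal(size=256)          # pooled image features (stand-in)
W_one = rng.normal(size=(J * 3, 256))      # one-stage: image -> 3D pose
W_2d = rng.normal(size=(J * 2, 256))       # stage 1: image -> 2D pose
W_lift = rng.normal(size=(J * 3, J * 2))   # stage 2: 2D -> 3D (lifting)

# One-stage: a single map from image features to 3D joints.
pose3d_one_stage = (W_one @ image_feat).reshape(J, 3)

# Two-stage: detect 2D joints, then lift them to 3D.
pose2d = (W_2d @ image_feat).reshape(J, 2)
pose3d_two_stage = (W_lift @ pose2d.reshape(-1)).reshape(J, 3)

print(pose3d_one_stage.shape, pose3d_two_stage.shape)  # (17, 3) (17, 3)
```

The practical appeal of the two-stage route is that the lifting network can be trained on abundant 2D pose data plus motion-capture 3D data, without paired images.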