2022
DOI: 10.1007/978-3-031-20065-6_36

SmoothNet: A Plug-and-Play Network for Refining Human Poses in Videos

Cited by 36 publications (7 citation statements)
References 48 publications

“…Comparison of the acceleration errors for our method and SmoothNet [24]. Our method shows significantly lower acceleration errors than the SOTA method.…”
Section: Comparison With State-of-the-art Methods (mentioning)
confidence: 77%
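The excerpt above compares acceleration errors against SmoothNet. For context, a minimal NumPy sketch of the mean per-joint acceleration error commonly reported in this literature is shown below; the function name and array shapes are illustrative assumptions, not taken from either paper.

import numpy as np

def acceleration_error(pred, gt):
    # pred, gt: (T, J, 3) arrays of 3D joint positions over T frames.
    # Acceleration is approximated by the second finite difference in time.
    acc_pred = pred[2:] - 2.0 * pred[1:-1] + pred[:-2]
    acc_gt = gt[2:] - 2.0 * gt[1:-1] + gt[:-2]
    # Euclidean norm of the per-joint acceleration difference, averaged over
    # all frames and joints (units: mm/frame^2 if inputs are in mm).
    return np.linalg.norm(acc_pred - acc_gt, axis=-1).mean()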
“…We compare the results with state-of-the-art frame-based human mesh recovery methods [4-8, 21-24], demonstrating the ability of our model to recover accurate and smooth 3D human motion from video. As shown in Table 1, our method performs well on the indoor dataset Human3.6M [18] and the challenging in-the-wild datasets 3DPW [16] and MPI-INF-3DHP.…”
Section: Comparison With State-of-the-art Methods (mentioning)
confidence: 97%
“…L_θ, L_β, and L_cam denote the L1 losses between the estimated human pose, shape, and camera pose and our synthetic ground truth. L_smooth is adopted from [65], penalizing the velocity and acceleration differences between the estimation and the ground truth. For the neural descent module, the objective loss can be written as:…”
Section: Loss Function (mentioning)
confidence: 99%
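The excerpt describes a smoothness term that penalizes velocity and acceleration differences between the estimate and the ground truth. A minimal PyTorch sketch of such a term, assuming (T, D) pose-parameter sequences and an unweighted L1 penalty, is given below; the function name and exact weighting are illustrative, not the citing paper's implementation.

import torch

def smooth_loss(pred, gt):
    # pred, gt: (T, D) tensors -- T frames of D pose/shape parameters.
    # First differences approximate velocity, second differences acceleration.
    vel_pred, vel_gt = pred[1:] - pred[:-1], gt[1:] - gt[:-1]
    acc_pred, acc_gt = vel_pred[1:] - vel_pred[:-1], vel_gt[1:] - vel_gt[:-1]
    # L1 penalty on the velocity and acceleration mismatch.
    return (vel_pred - vel_gt).abs().mean() + (acc_pred - acc_gt).abs().mean()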
“…We follow MSCOCO [32] to define the first 17 keypoints and add 4 additional keypoints, left/right fingers and left/right toes, which are beneficial for 3D pose estimation and shape recovery by providing more comprehensive constraints [50,82]. Self-contact keypoints [4,45] also benefit 3D pose and shape estimation by disambiguating body-part depth, which is unknown in the 2D human pose representation, avoiding self-collisions and penetrations, and being easier to use than ordinal depth [51,59,88].…”
Section: Data Annotation (mentioning)
confidence: 99%
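The excerpt defines 17 MSCOCO keypoints plus 4 extras (left/right fingers and toes). A sketch of how such a 21-keypoint layout could be enumerated follows; the naming and ordering of the four additions are assumptions for illustration only.

# The canonical 17 MSCOCO keypoints in standard COCO order, followed by the
# 4 additions described in the excerpt. The names and ordering of the four
# extra keypoints are illustrative assumptions, not the paper's definition.
KEYPOINT_NAMES = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_wrist", "right_wrist", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_ankle", "right_ankle",      # MSCOCO 17
    "left_finger", "right_finger", "left_toe", "right_toe",      # 4 additions
]
assert len(KEYPOINT_NAMES) == 21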