2020 Joint 9th International Conference on Informatics, Electronics & Vision (ICIEV) and 2020 4th International Conference on Imaging, Vision & Pattern Recognition (icIVPR)
DOI: 10.1109/icievicivpr48672.2020.9306581
Performance Evaluation of Markerless 3D Skeleton Pose Estimates with Pop Dance Motion Sequence

Cited by 8 publications (7 citation statements)
References 7 publications
“…A large part of studies investigating 3D joint center estimation choose to triangulate the output of OpenPose [ 13 ], a deep-learning algorithm estimating 2D joint coordinates from videos. Their MPJPE usually lies between 30 and 40 mm [ 14 , 15 , 16 ]. Ankle MPJPEs are within the margin of error of marker-based technologies (1–15 mm), whereas knee and hip MPJPEs are greater (30–50 mm).…”
Section: Introduction (citation type: mentioning; confidence: 99%)
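The statement above compares joint-center estimates using MPJPE (Mean Per-Joint Position Error), the average Euclidean distance between predicted and ground-truth 3D joints. As a reference for how that metric is typically computed, a minimal sketch (array shapes and the toy data are illustrative, not from the paper):

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean Per-Joint Position Error: mean Euclidean distance (in the
    input units, e.g. mm) between predicted and ground-truth joints.
    pred, gt: arrays of shape (n_frames, n_joints, 3)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

# toy example: 10 frames of 25 keypoints (body_25-sized skeleton),
# with a constant (3, 4, 0) mm offset -> 5 mm error per joint
gt = np.zeros((10, 25, 3))
pred = gt + np.array([3.0, 4.0, 0.0])
print(mpjpe(pred, gt))  # 5.0
```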
“…Most methods take OpenPose as an input for triangulation, and more specifically the body_25 model. Labuguen et al evaluated 3D joint positions of a pop dancer with a simple Direct Linear Transform triangulation (DLT [ 44 , 45 ]) from four cameras [ 46 ]. Apart from the upper body for which error increases to almost 700 mm, the average joint position error is about 100 mm.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
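The statement above describes triangulating OpenPose 2D keypoints from four cameras with a Direct Linear Transform (DLT). A minimal sketch of the homogeneous DLT for one point, with hypothetical camera matrices standing in for the paper's actual calibration:

```python
import numpy as np

def triangulate_dlt(proj_mats, points_2d):
    """DLT triangulation of one 3D point from >= 2 views.
    Each view with 3x4 projection matrix P and observation (u, v)
    contributes two rows u*P[2] - P[0] and v*P[2] - P[1]; the 3D point
    is the least-squares null vector of the stacked system (via SVD)."""
    rows = []
    for P, (u, v) in zip(proj_mats, points_2d):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]              # right singular vector of smallest singular value
    return X[:3] / X[3]     # dehomogenize

# toy check: four synthetic cameras (identity rotation, random
# translations) observing a known point without noise
X_true = np.array([0.2, -0.1, 3.0])
rng = np.random.default_rng(0)
Ps, obs = [], []
for _ in range(4):
    P = np.hstack([np.eye(3), rng.normal(size=(3, 1))])
    x = P @ np.append(X_true, 1.0)
    obs.append(x[:2] / x[2])    # project to pixel coordinates
    Ps.append(P)
print(np.allclose(triangulate_dlt(Ps, obs), X_true))  # True
```

With noise-free observations the stacked system has an exact null vector, so the point is recovered to numerical precision; with real OpenPose detections the SVD solution is a least-squares compromise across views.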
“…Such experiments have also attracted the attention of researchers. In 2020, Rollyn T. Labuguen et al [24] investigated the performance of the human pose recognition framework OpenPose, comparing the joint positions estimated by OpenPose with marker-based motion capture data recorded for popular dance movements. Their comparison results show that the average absolute error for each key point is less than 700 mm.…”
Section: Human Action Recognition (citation type: mentioning; confidence: 99%)