2022
DOI: 10.1371/journal.pone.0276258

Validation of deep learning-based markerless 3D pose estimation

Abstract: Deep learning-based approaches to markerless 3D pose estimation are being adopted by researchers in psychology and neuroscience at an unprecedented rate. Yet many of these tools remain unvalidated. Here, we report on the validation of one increasingly popular tool (DeepLabCut) against simultaneous measurements obtained from a reference measurement system (Fastrak) with well-known performance characteristics. Our results confirm close (mm range) agreement between the two, indicating that under specific circumst…
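To make the abstract's "mm range agreement" concrete, here is a minimal sketch of how such a per-landmark comparison could be computed. This is an illustration only, not the paper's actual analysis: the function name, array shapes, and simulated data are hypothetical, and it assumes the two systems' coordinates are already time-aligned and expressed in a shared millimetre coordinate frame (in practice this requires temporal synchronization and spatial registration).

```python
import numpy as np

def agreement_mm(est_xyz, ref_xyz):
    """Per-frame Euclidean error (mm) between two synchronized
    (n_frames, 3) coordinate arrays for one tracked landmark."""
    est_xyz = np.asarray(est_xyz, dtype=float)
    ref_xyz = np.asarray(ref_xyz, dtype=float)
    err = np.linalg.norm(est_xyz - ref_xyz, axis=1)
    return {"mean_mm": err.mean(),
            "rmse_mm": np.sqrt((err ** 2).mean()),
            "p95_mm": np.percentile(err, 95)}

# Hypothetical usage with simulated data: 'ref' stands in for the
# reference system (e.g. Fastrak) and 'est' for a video-based
# estimate with ~1.5 mm noise. Neither is real data from the paper.
rng = np.random.default_rng(0)
ref = rng.uniform(0, 500, size=(1000, 3))
est = ref + rng.normal(0, 1.5, size=ref.shape)
print(agreement_mm(est, ref))
```

With mm-scale noise as simulated above, the summary statistics land in the low single-digit millimetre range, which is the kind of agreement the abstract describes.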

Cited by 3 publications (1 citation statement)
References 18 publications (23 reference statements)
“…Thus, further iterations of the masking tool will need to be developed to mask multiple persons in one video. Finally, any automated computer vision-based tracking may be insufficiently precise³ depending on your research questions (but see [7,17] for comparisons of video-based tracking versus device-based trackers). Fortunately, researchers can easily verify the quality of the videos and tracking performance produced by Masked-Piper.…”
Section: Discussion
Citation type: mentioning
confidence: 99%