2021
DOI: 10.1016/j.celrep.2021.109730
Anipose: A toolkit for robust markerless 3D pose estimation

Abstract: Highlights
- Open-source Python toolkit for 3D animal pose estimation, with DeepLabCut support
- Enables camera calibration, filtering of trajectories, and visualization of tracked data
- Tracking evaluation on calibration board, fly, mouse, and human datasets
- Identifies a role for joint rotation in motor control of fly walking
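The trajectory-filtering highlight can be illustrated with a minimal sketch: a median filter applied to each keypoint coordinate over time suppresses single-frame tracking glitches. This is a generic illustration, not Anipose's actual implementation; the function name and window size are assumptions.

```python
import numpy as np
from scipy.signal import medfilt

def filter_trajectory(coords, kernel_size=5):
    """Suppress isolated tracking glitches in a 1D keypoint trajectory.

    coords: 1D array of one keypoint coordinate across frames.
    kernel_size: odd median-filter window length (illustrative default).
    """
    return medfilt(np.asarray(coords, dtype=float), kernel_size=kernel_size)
```

A median filter removes isolated outlier frames while preserving step-like motion better than a moving average would.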



Cited by 162 publications (160 citation statements)
References 97 publications
“…This would be consistent with the higher threshold we found for infants compared to adults. These possibilities point to the need for a longitudinal study across infancy and early childhood, utilizing motion tracking to capture all infant movements ( Karashchuk et al, 2021 , Mathis et al, 2018 , Nakano et al, 2020 , Pereira et al, 2018 ), and EEG alternatives such as newly developed, cryogen-free MEG sensors ( Boto et al, 2018 , Boto et al, 2017 , Boto et al, 2016 , Iivanainen et al, 2019 , Iivanainen et al, 2017 , Knappe et al, 2014 ), which have the potential to allow acquisition of high-SNR data from developmental populations.…”
Section: Discussion (mentioning)
confidence: 99%
“…These findings were enabled by a new technical tool: marker-free holistic 3D pose reconstruction and tracking of individual body parts of freely moving animals. Rather than triangulating 2D pose reconstructions (Günel et al, 2019; Mathis and Mathis, 2019; Karashchuk et al, 2021; Nath et al, 2019), FreiPose directly reconstructs body poses in 3D, resulting in higher tracking accuracy than that achieved using previous tools. Analyzing the problem holistically by fusing information from all views into a joint 3D reconstruction allowed us to surpass the commonly used methods by 49.4% regarding the median error in freely moving rats.…”
Section: Discussion (mentioning)
confidence: 99%
“…Second, thus far, marker-free analyses have mostly been applied in 2D settings (Graving et al, 2019; Mathis et al, 2018; Pereira et al, 2019), which do not pose the need for detailed pose reconstruction of freely moving animals covering all three spatial dimensions. When recording with multiple cameras, 2D outputs can be triangulated to reconstruct a 3D pose a posteriori (Günel et al, 2019; Mathis and Mathis, 2019; Karashchuk et al, 2021; Nath et al, 2019), but such post-processing suffers from the ambiguities in the initial 2D analysis, reducing both accuracy and reliability (Fig. 1b, left).…”
Section: Introduction (mentioning)
confidence: 99%
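The triangulation step these citing authors describe can be sketched with the standard linear (DLT) method: each camera view contributes two linear constraints on the homogeneous 3D point, solved by SVD. This is a generic textbook sketch, not Anipose's or FreiPose's code; the function name is hypothetical.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from N camera views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (x, y) image coordinates, one per view.
    Returns the triangulated 3D point as a length-3 array.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # x * (P row 2) - (P row 0) = 0 and likewise for y.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]
```

With noisy 2D detections the SVD gives a least-squares compromise between views, which is why per-view 2D errors propagate into the 3D result, as the quoted passage notes.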
“…We validated pose trajectories by comparing each pose estimation model’s output to our manual annotations of each participant’s pose (Table 5). While manual annotations are susceptible to human error [55], they are often used to evaluate markerless pose estimation performance when marker-based motion capture is not possible [30, 56]. We used root-mean-square (RMS) error averaged across all keypoints to evaluate model performance for the 950 frames used to train the model as well as 50 annotated frames that were withheld from training.…”
Section: Technical Validation (mentioning)
confidence: 99%
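The evaluation metric described above, RMS error averaged across keypoints, can be written compactly. The function name and array shapes (frames × keypoints × dims) are assumptions for illustration, not the cited authors' code.

```python
import numpy as np

def mean_rms_error(pred, truth):
    """RMS error per keypoint, averaged across keypoints.

    pred, truth: arrays of shape (n_frames, n_keypoints, n_dims).
    Frames with NaN annotations are ignored per keypoint.
    """
    sq_dist = np.sum((pred - truth) ** 2, axis=-1)      # (frames, keypoints)
    rms_per_kp = np.sqrt(np.nanmean(sq_dist, axis=0))   # (keypoints,)
    return float(np.mean(rms_per_kp))
```

Taking the RMS over frames first, then averaging across keypoints, prevents well-tracked keypoints from masking one that is consistently off.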