2022
DOI: 10.1038/s41592-022-01443-0

Multi-animal pose estimation, identification and tracking with DeepLabCut

Abstract: Estimating the pose of multiple animals is a challenging computer vision problem: frequent interactions cause occlusions and complicate the association of detected keypoints with the correct individuals, and the animals are often highly similar in appearance and interact more closely than in typical multi-human scenarios. To take up this challenge, we build on DeepLabCut, an open-source pose estimation toolbox, and provide high-performance animal assembly and tracking—features required for multi-animal scenarios. Fur…
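The abstract describes animal assembly and tracking built on top of DeepLabCut. As a rough illustration only, a minimal multi-animal workflow in the DeepLabCut Python API might look like the sketch below; the project name, experimenter, and video paths are hypothetical, and the exact calls and arguments should be checked against the documentation for the installed DeepLabCut version.

```python
import deeplabcut

# Hypothetical video paths for illustration only.
videos = ["/data/videos/mice_pair.mp4"]

# Create a multi-animal project (multianimal=True enables assembly and tracking).
config_path = deeplabcut.create_new_project(
    "social-mice", "experimenter", videos, multianimal=True
)

# Extract and label frames, build the training dataset, and train the network.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)
deeplabcut.create_multianimaltraining_dataset(config_path)
deeplabcut.train_network(config_path)

# Detect keypoints, assemble them into individuals, and track across frames.
deeplabcut.analyze_videos(config_path, videos)
deeplabcut.convert_detections2tracklets(config_path, videos)
deeplabcut.stitch_tracklets(config_path, videos)
```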

Cited by 217 publications (140 citation statements)
References 47 publications
“…The deep features allow DLC to extract body parts despite various background challenges or camera distortions [ 47 ]. In addition, DLC possesses a refinement step to take advantage of different scenarios for improving tracking performance [ 48 ]. DLC can predict the points without requiring consistency across the frames.…”
Section: Discussion
confidence: 99%
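The refinement step mentioned in this excerpt corresponds, in the DeepLabCut API, to extracting poorly tracked frames, correcting their labels, and retraining. The following is a minimal sketch under the assumption of an existing project and previously analyzed videos; the config path and video paths are hypothetical.

```python
import deeplabcut

config_path = "/data/projects/social-mice/config.yaml"  # hypothetical
videos = ["/data/videos/mice_pair.mp4"]                  # hypothetical

# Pull out frames where the network's predictions look unreliable.
deeplabcut.extract_outlier_frames(config_path, videos)

# Correct those labels in the GUI, then fold them back into the dataset.
deeplabcut.refine_labels(config_path)
deeplabcut.merge_datasets(config_path)

# Rebuild the training dataset (multi-animal projects use the
# create_multianimaltraining_dataset variant), retrain, and re-analyze.
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)
deeplabcut.analyze_videos(config_path, videos)
```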
“…USV assignment requires frame-by-frame snout locations of each mouse. USVCAM users can choose any available high-precision video tracking software (such as DeepLabCut [ Lauer et al., 2022 ], Social LEAP Estimates Animal Poses [SLEAP; Pereira et al., 2022 ], and Mouse Action Recognition System [MARS; Segalin et al., 2021 ]) to estimate snout locations. In this study, we used AlphaTracker ( Chen et al., 2020 ) for tracking the locations of the snout and the other body parts.…”
Section: Methods
confidence: 99%
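For workflows like the one in this excerpt, frame-by-frame snout coordinates can be read from a DeepLabCut output file, which is stored as a pandas DataFrame in HDF5 with a (scorer, bodypart, coordinate) column MultiIndex. The sketch below assumes a single-animal file with a hypothetical filename and a body part labeled "snout"; multi-animal projects add an extra "individuals" column level.

```python
import pandas as pd

# Hypothetical output file produced by deeplabcut.analyze_videos().
df = pd.read_hdf("mice_pair_DLC_resnet50.h5")

# Drop the scorer level so columns become (bodypart, coord) pairs.
df.columns = df.columns.droplevel("scorer")

# Per-frame snout positions, keeping only confident detections.
snout = df["snout"]
snout = snout[snout["likelihood"] > 0.9][["x", "y"]]
print(snout.head())
```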
“…1) Intra-skeleton interaction modelling: Similar to [19], [34], we first construct I types of skeleton sequences to learn behavioural information of mice. Following [21], we define the dense physical connections of all the keypoints. Then, we further design a sparse structure where keypoints in the same body part are aggregated into one keypoint by the averaging operation, as shown in Fig.…”
Section: A Cross-skeleton Node-level Interaction
confidence: 99%
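The sparse structure described in this excerpt, in which keypoints belonging to the same body part are averaged into a single node, can be sketched as below. The keypoint layout and grouping are hypothetical placeholders for whatever skeleton definition the cited work actually uses; a dense skeleton would additionally be described by a list of keypoint-index connections.

```python
import numpy as np

# Hypothetical dense skeleton: keypoints as (num_frames, num_keypoints, 2).
keypoints = np.random.rand(100, 8, 2)

# Hypothetical grouping of keypoint indices into coarser body parts.
body_parts = {
    "head": [0, 1, 2],   # e.g. snout, left ear, right ear
    "torso": [3, 4, 5],
    "tail": [6, 7],
}

# Aggregate each group into one node by averaging its keypoints.
sparse = np.stack(
    [keypoints[:, idx, :].mean(axis=1) for idx in body_parts.values()],
    axis=1,
)  # shape: (num_frames, num_body_parts, 2)
print(sparse.shape)
```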
“…1(a). Here, mouse skeleton refers to a list of keypoint connections [21]. Then, a novel Cross-Skeleton Node-level Interaction (CS-NLI) module, shown in Fig.…”
Section: Introduction
confidence: 99%