SLEAP: Multi-animal pose tracking
Preprint, 2020
DOI: 10.1101/2020.08.31.276246

Abstract: The desire to understand how the brain generates and patterns behavior has driven rapid methodological innovation to quantify and model natural animal behavior. This has led to important advances in deep learning-based markerless pose estimation that have been enabled in part by the success of deep learning for computer vision applications. Here we present SLEAP (Social LEAP Estimates Animal Poses), a framework for multi-animal pose tracking via deep learning. This system is capable of simultaneously tracking …

Cited by 84 publications (82 citation statements)
References 30 publications
“…These clusters were then fit with an ellipse to identify the centroid of each animal. In the second method, we trained a deep convolutional network to detect all instances of individual body parts (head, thorax) within each frame using a modified version of LEAP (Pereira et al., 2019; or SLEAP [Pereira et al., 2020], https://sleap.ai/; 544 labeled frames were used for training; Figure 1—figure supplement 1Ai-ii and Video 1). Using the same software and neural network architecture, a separate network was then trained to group these detections together with the correct animals by inferring part affinity fields (Figure 1—figure supplement 1Aiii; Cao et al., 2017).…”
Section: Methods
confidence: 99%
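The grouping step described in the statement above relies on part affinity fields (Cao et al., 2017): each candidate connection between two detected body parts is scored by how well the field's vectors agree with the direction of the line joining them. A minimal sketch of that scoring idea, assuming a toy dictionary-based field rather than SLEAP's actual data structures (all names here are illustrative, not SLEAP's API):

```python
import math

def paf_score(field, p1, p2, n_samples=10):
    """Score a candidate limb connecting keypoints p1 and p2.

    `field` maps integer (x, y) pixels to a 2D part-affinity vector;
    the score is the mean dot product between that vector and the unit
    vector pointing from p1 to p2, sampled along the segment.
    (Hypothetical minimal sketch, not SLEAP's implementation.)
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = math.hypot(dx, dy)
    if norm == 0:
        return 0.0
    ux, uy = dx / norm, dy / norm  # unit vector along the candidate limb
    total = 0.0
    for i in range(n_samples):
        t = i / (n_samples - 1)
        x = int(round(p1[0] + t * dx))
        y = int(round(p1[1] + t * dy))
        fx, fy = field.get((x, y), (0.0, 0.0))  # zero vector off-field
        total += fx * ux + fy * uy
    return total / n_samples

# Toy field: every pixel on the segment y=0 points along +x.
field = {(x, 0): (1.0, 0.0) for x in range(11)}
print(paf_score(field, (0, 0), (10, 0)))  # high: field agrees with the limb
print(paf_score(field, (0, 0), (0, 10)))  # zero: field orthogonal or absent
```

In the full method, these scores feed a bipartite matching that assigns each detection to the animal instance with the strongest supporting field.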
“…However, this approach is limited to specific behaviors and does not apply to interaction behaviors between social subjects of unequal status. Recent cutting-edge toolboxes such as DLC for multi-animal pose estimation [17], SLEAP [69], and AlphaTracker [70] have addressed the multi-animal tracking problem, but once animals with similar appearances are touching or even body-occluded, the inaccurate pose estimation of these toolboxes leads to off-tracking and identity-swapping errors. This is because when estimating multiple body parts of several animals in a single frame, the combination of the poses of these animals is more complex and diverse, and identity-swapping in different views may happen at different times.…”
Section: Discussion
confidence: 99%
“…However, intricate behaviors, like courtship displays, can only be fully observed once the body shape and orientation are considered (e.g. using tools such as DeepPoseKit, Graving et al. 2019; LEAP, Pereira et al. 2019 / SLEAP, Pereira et al. 2020; and DeepLabCut, Mathis et al. 2018). [The software] does not track individual body parts apart from the head and tail (where applicable), but even the included simple and fast 2D posture estimator already allows for deductions to be made about how an animal is positioned in space, bent and oriented – crucial e.g.…”
Section: Methods
confidence: 99%
“…Strandburg-Peshkin et al. 2013, Rosenthal et al. 2015). When detailed tracking of all extremities is required, [the software] offers an option that allows it to interface with third-party software like DeepPoseKit (Graving et al. 2019), SLEAP (Pereira et al. 2020), or DeepLabCut (Mathis et al. 2018). This option (output_image_per_tracklet), when set to true, exports cropped and (optionally) normalised videos per individual that can be imported directly into these tools – where they might perform better than the raw video.…”
Section: Methods
confidence: 99%
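The per-individual export described in the statement above amounts to cropping a fixed-size, zero-padded window around each tracked animal so every clip has a uniform shape for downstream pose tools. A sketch of that cropping step, assuming a frame represented as a nested list of pixel values; the function name and representation are illustrative, not the cited tool's actual interface:

```python
def crop_around(frame, centroid, size):
    """Crop a size x size window centred on `centroid` from `frame`.

    `frame` is a nested list (rows of pixel values); out-of-bounds
    pixels are zero-padded so every crop has the same shape, which
    keeps per-individual clips uniform for pose-estimation tools.
    (Illustrative sketch only, not the cited tool's implementation.)
    """
    cx, cy = centroid
    half = size // 2
    h, w = len(frame), len(frame[0])
    crop = []
    for y in range(cy - half, cy - half + size):
        row = []
        for x in range(cx - half, cx - half + size):
            if 0 <= y < h and 0 <= x < w:
                row.append(frame[y][x])
            else:
                row.append(0)  # zero-pad outside the frame bounds
        crop.append(row)
    return crop

# 10x10 toy frame; crop a 4x4 patch around the point (2, 2).
frame = [[10 * y + x for x in range(10)] for y in range(10)]
patch = crop_around(frame, (2, 2), 4)
print(len(patch), len(patch[0]))  # 4 4
```

Zero-padding at the borders (rather than shrinking the crop) is what keeps every exported clip the same resolution, which pose estimators typically require.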