2022
DOI: 10.48550/arxiv.2203.07436
Preprint

SuperAnimal models pretrained for plug-and-play analysis of animal behavior

Abstract: Animal pose estimation is critical in applications ranging from life science research and agriculture to veterinary medicine. Compared to human pose estimation, the performance of animal pose estimation is limited by the size of available datasets and by how well a model generalizes across datasets. Typically, different keypoints are labeled regardless of whether the species are the same, leaving animal pose datasets with disjoint or only partially overlapping keypoint sets. As a consequence, a model cannot be use…

Cited by 12 publications (12 citation statements)
References 49 publications
“…The first dataset included conventional 2D videos of a single mouse behaving in an open field, with human annotations for four commonly occurring behaviors (locomote, rear, face groom and body groom) (Fig 6a-c). To identify keypoints in this dataset we used DeepLabCut, specifically the TopViewMouse SuperAnimal network from the DLC Model Zoo 31 , which automatically identifies keypoints without the need for annotation data or training. The second dataset (part of the CalMS21 benchmark 30 ) included a set of three manually annotated social behaviors (mounting, investigation, and attack) as well as keypoints for a pair of interacting mice (Fig 6d-f).…”
Section: Results (mentioning)
confidence: 99%
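The statement above describes running the pretrained TopViewMouse SuperAnimal model on video without any labeling or project-specific training. Below is a minimal sketch of that workflow using DeepLabCut's `video_inference_superanimal` function; argument names follow recent DeepLabCut releases and may differ between versions, and the video path is a placeholder.

```python
# Minimal sketch: keypoint inference with the pretrained TopViewMouse
# SuperAnimal model via DeepLabCut, with no annotation data or training.
# Argument names follow DeepLabCut >= 2.3 and may vary between versions;
# "open_field_mouse.mp4" is a placeholder path.
import deeplabcut

videos = ["open_field_mouse.mp4"]  # hypothetical top-view open-field recording

deeplabcut.video_inference_superanimal(
    videos,
    superanimal_name="superanimal_topviewmouse",  # Model Zoo model referenced in the quote
    scale_list=[200, 300, 400, 500, 600],         # test-time scales whose predictions are aggregated
)
# Keypoint trajectories are written alongside the video (.h5/.csv) and can be
# passed to a downstream behavior classifier.
```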
“…Semi-supervised learning is not the only technique that enables improvements over standard supervised learning protocols. First, it has been suggested that supervised pose estimation networks can be improved by pretraining them on large labeled datasets for image classification [9] or pose estimation [47], to an extent that might eliminate dataset-specific training [48]. Other work avoids pretraining altogether by using lighter architectures [10].…”
Section: Discussion (mentioning)
confidence: 99%
“…In our final evaluation, we sought to benchmark DAMM against an existing method for mouse localization, the SuperAnimal-TopViewMouse model released by DeepLabCut (SA-DLC) (16). This tool predicts body keypoint trajectories and is aimed at generalization.…”
Section: C (mentioning)
confidence: 99%
“…For the comparison to the SuperAnimal-TopViewMouse model released by DeepLabCut (16), we used predictions aggregated over scales [200,300,400,500,600] which was the only hyperparameter selected by the end-user. To construct a bounding box that is used to approximate a bounding box localization, we compute the tightest box encompassing all points, while excluding all tail points.…”
Section: Model Selection and Training (mentioning)
confidence: 99%
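The bounding-box construction described above (the tightest box around all keypoints, excluding tail points) is simple to express in code. The sketch below is illustrative rather than the authors' implementation; the keypoint names and coordinates are hypothetical.

```python
# Illustrative sketch (not the cited authors' code): derive a bounding box
# from pose keypoints by taking the tightest box around all non-tail points.
import numpy as np

def keypoints_to_bbox(keypoints: dict[str, tuple[float, float]]) -> tuple[float, float, float, float]:
    """Return (x_min, y_min, x_max, y_max) over all keypoints whose name lacks 'tail'."""
    body = np.array(
        [xy for name, xy in keypoints.items() if "tail" not in name.lower()]
    )
    x_min, y_min = body.min(axis=0)
    x_max, y_max = body.max(axis=0)
    return float(x_min), float(y_min), float(x_max), float(y_max)

# Example with hypothetical keypoint coordinates (pixels):
pose = {
    "nose": (120.0, 80.0),
    "left_ear": (110.0, 70.0),
    "right_ear": (130.0, 70.0),
    "tail_base": (160.0, 140.0),  # excluded: tail keypoint
    "tail_tip": (200.0, 180.0),   # excluded: tail keypoint
}
print(keypoints_to_bbox(pose))  # -> (110.0, 70.0, 130.0, 80.0)
```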