2020
DOI: 10.1101/2020.05.26.117325
Preprint

Anipose: a toolkit for robust markerless 3D pose estimation

Abstract: Quantifying movement is critical for understanding animal behavior. Advances in computer vision now enable markerless tracking from 2D video, but most animals live and move in 3D. Here, we introduce Anipose, a Python toolkit for robust markerless 3D pose estimation. Anipose consists of four components: (1) a 3D calibration module, (2) filters to resolve 2D tracking errors, (3) a triangulation module that integrates temporal and spatial constraints, and (4) a pipeline to structure processing of large numbers of…


Cited by 59 publications (75 citation statements)
References 98 publications
“…Below the touchscreen is a plexiglass panel that allows a free field of view from outside towards the space that contains the head (for face and gaze analysis) and shoulders of the animals. This is useful for eye- and body-tracking systems (Mathis et al, 2018; Karashchuk, 2019; Bala et al, 2020; Sheshadri et al, 2020).…”
Section: Results
confidence: 99%
“…2C). This allows monitoring the animals' task engagement in the kiosk from outside the housing room, and the recording of up to five synchronized camera views, which will allow 3D reconstruction of gaze and reach patterns (Karashchuk, 2019; Sheshadri et al, 2020).…”
Section: Methods
confidence: 99%
“…Extending SLEAP to tracking in 3D from multiple views may be another direction of future work, though existing 3D pose tracking methods that build off of 2D predictions can already be configured to take advantage of SLEAP [20].…”
Section: Discussion
confidence: 99%
“…3D pose estimation is typically accomplished by triangulating 2-dimensional (2D) poses acquired using multiple camera views and deep network-based markerless keypoint tracking algorithms [5,6,7,8,9,10,11,12,13]. Notably, triangulation requires that every tracked keypoint be visible from at least two synchronized cameras [14] and that each camera is first calibrated by hand [15,16] or, as in DeepFly3D, by solving a non-convex optimization problem [7]. These expectations are expensive and often difficult to meet, particularly in space-constrained experimental systems that also house sensory stimulation devices [1,2,17].…”
Section: Introduction
confidence: 99%
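The triangulation step these citing papers describe — recovering a 3D point from 2D keypoints seen by two or more calibrated cameras — can be sketched with a minimal direct linear transform (DLT). This is a generic illustration, not Anipose's actual implementation; the function name and the synthetic camera matrices below are assumptions for the example.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from >= 2 views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (x, y) pixel coordinates, one per view.
    Returns the 3D point as a length-3 array.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous point X: x*(P[2]@X) = P[0]@X and y*(P[2]@X) = P[1]@X.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.array(rows)
    # Solve A @ X = 0 in the least-squares sense via SVD:
    # the null-space direction is the last right singular vector.
    _, _, vh = np.linalg.svd(A)
    X = vh[-1]
    return X[:3] / X[3]

# Two synthetic calibrated cameras: one at the origin, one translated.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0, 1.0])  # homogeneous ground-truth point

def project(P, X):
    p = P @ X
    return p[:2] / p[2]

pts = [project(P1, X_true), project(P2, X_true)]
X_est = triangulate_point([P1, P2], pts)
print(np.round(X_est, 6))  # ≈ [0.5, 0.2, 4.0]
```

With noise-free inputs the SVD solution recovers the point exactly; with real tracking noise each extra synchronized view adds two more rows to `A`, which is why every keypoint must be visible from at least two cameras.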
“…Notably, triangulation requires that every tracked keypoint, be it a joint or other body feature, be visible from at least two synchronized cameras [14] and that each camera be calibrated. This can be done by hand [15, 16] or by solving a non-convex optimization problem [7]. These expectations are high and often difficult to meet, particularly in space-constrained experimental systems that also house sensory stimulation devices [1, 2, 17].…”
Section: Introduction
confidence: 99%