2018
DOI: 10.1038/s41593-018-0209-y
DeepLabCut: markerless pose estimation of user-defined body parts with deep learning

Abstract: Quantifying behavior is crucial for many applications in neuroscience. Videography provides easy methods for the observation and recording of animal behavior in diverse settings, yet extracting particular aspects of a behavior for further analysis can be highly time consuming. In motor control studies, humans or other animals are often marked with reflective markers to assist with computer-based tracking, but markers are intrusive, and the number and location of the markers must be determined a priori. Here we…
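For context, the DeepLabCut toolbox described in the paper is driven through a small Python API. Below is a minimal sketch of the typical project workflow, assuming the deeplabcut package is installed; the project name, experimenter, and video paths are hypothetical, and exact signatures vary somewhat across DeepLabCut releases.

```python
import deeplabcut

# Create a project around one or more videos; returns the path to config.yaml,
# where body parts, training fraction, and other settings are defined.
config = deeplabcut.create_new_project(
    "fly-walking", "lab", ["videos/fly1.mp4"], copy_videos=True
)

deeplabcut.extract_frames(config)    # select frames to annotate
deeplabcut.label_frames(config)      # GUI for manual body-part labeling
deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)
deeplabcut.evaluate_network(config)  # train/test pixel errors

# Apply the trained network to new videos.
deeplabcut.analyze_videos(config, ["videos/fly2.mp4"])
```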


Cited by 3,416 publications (3,769 citation statements). References 37 publications.
“…S4B). This step of the annotation was done automatically in DeepLabCut (DLC; Mathis et al., 2018); to use this approach, we trained and evaluated DLC with a dataset of 1,000 manually annotated video frames (10 flies, 100 exemplary frames each) that were similar to the ones we recorded during the experiments described here. One half of this set (500 frames; 10 flies, 50 frames each) was used for training DLC, and the other half was used to evaluate its performance.…”
Section: High Resolution Walking Analysis (mentioning)
confidence: 99%
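A 50/50 train/evaluation split like the one quoted above can be configured through DeepLabCut itself. A hedged sketch, assuming a hypothetical project path; note that DeepLabCut's default split is a random draw over all labeled frames, so reproducing the exact per-fly stratification (50 training frames per fly) would require passing explicit train/test indices to create_training_dataset.

```python
import deeplabcut
from deeplabcut.utils import auxiliaryfunctions

config = "/path/to/fly-project/config.yaml"  # hypothetical project path

# Hold out half of the 1,000 labeled frames for evaluation,
# mirroring the 500/500 split described in the citing study.
cfg = auxiliaryfunctions.read_config(config)
cfg["TrainingFraction"] = [0.5]
auxiliaryfunctions.write_config(config, cfg)

deeplabcut.create_training_dataset(config)
deeplabcut.train_network(config)

# Reports mean pixel error on training and held-out test frames.
deeplabcut.evaluate_network(config, plotting=True)
```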
“…On a per-session level including all 21 trials, we have shown that Ehmt1+/- mice explore objects less and show increased memory expression, which is specific to our semantic-like memory condition. Next, we analyzed discrete behaviors on a trial-by-trial level: we trained two classifiers on multiple behavioral features extracted automatically from the video data with deep-learning methods, as explained in the Materials and Methods section [18]. We included both general exploration features (e.g.…”
Section: Automatic Behavioural Scoring and Classifier for WT/Ehmt1+/- (mentioning)
confidence: 99%
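The quoted passage does not specify the classifiers beyond their inputs, so the following is only a generic sketch of the idea: per-trial behavioral features extracted from pose tracking are fed to an off-the-shelf classifier (here a scikit-learn random forest, with placeholder data standing in for the real features and genotype labels).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Placeholder per-trial feature matrix, e.g. time spent near objects,
# locomotion speed, rearing counts -- all hypothetical features.
rng = np.random.default_rng(0)
X = rng.normal(size=(42, 6))       # 42 trials x 6 behavioral features
y = rng.integers(0, 2, size=42)    # 0 = WT, 1 = Ehmt1 +/- (placeholder labels)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"genotype decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```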
“…Automatic classification of genotype based on video analysis is becoming more popular and shows great potential for monitoring treatment outcomes in preclinical studies. Both 3D and 2D video techniques can be used [18,19]. Our findings highlight the importance of considering which behaviors are recorded in the video data: behaviors with higher cognitive demands (such as our overlapping condition) may be more sensitive to genotype differences, as we show here.…”
Section: General Exploration Differences (mentioning)
confidence: 99%