2023
DOI: 10.1101/2023.03.16.532307
Preprint

Keypoint-MoSeq: parsing behavior by linking point tracking to pose dynamics

Abstract: Keypoint tracking algorithms have revolutionized the analysis of animal behavior, enabling investigators to flexibly quantify behavioral dynamics from conventional video recordings obtained in a wide variety of settings. However, it remains unclear how to parse continuous keypoint data into the modules out of which behavior is organized. This challenge is particularly acute because keypoint data is susceptible to high frequency jitter that clustering algorithms can mistake for transitions between behavioral mo…
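To make the jitter problem in the abstract concrete, below is a minimal numpy/scipy sketch (not from the preprint; the synthetic trajectory, noise level, filter width, and threshold are arbitrary assumptions) showing how high-frequency tracking noise inflates the number of apparent frame-to-frame "transitions", and how even a simple median filter suppresses most of the spurious ones:

import numpy as np
from scipy.ndimage import median_filter

rng = np.random.default_rng(0)
n_frames = 3000

# Smooth "true" 1-D keypoint trajectory with two genuine behavioral transitions
true_traj = np.cumsum(rng.normal(0, 0.02, n_frames))
true_traj[1000:] += 5.0
true_traj[2000:] -= 5.0

# Observed keypoints = true trajectory plus high-frequency tracking jitter
observed = true_traj + rng.normal(0, 0.5, n_frames)

def count_jumps(traj, threshold=1.0):
    # Frames where the frame-to-frame displacement exceeds the threshold
    return int(np.sum(np.abs(np.diff(traj)) > threshold))

print("raw keypoints:   ", count_jumps(observed))                          # hundreds of spurious "transitions"
print("median filtered: ", count_jumps(median_filter(observed, size=7)))   # far fewer, dominated by the 2 real ones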

Cited by 48 publications (42 citation statements)
References: 43 publications
“…In the future, we hope to expand our analyses of this extensive dataset to include identification of joint Rastermap/BSOiD (or keypoint-MOSEQ motif “syllable”; Weinreb et al, 2023; bioRxiv) transitions, and to perform further precision neurobehavioral alignment, using other methodologies such as hierarchical state-space models (Lindermann, S; https://github.com/lindermanlab/ssm), and factorial HMMs (Ghahramani and Jordan, 1997). These methods will allow increased accuracy over multiple behavioral timescales and the ability to predict transition or change points between different behavioral and/or neural activity states.…”
Section: Discussion (mentioning)
confidence: 99%
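As a rough illustration of the HMM-based segmentation tooling named in the statement above, here is a hedged sketch (not from either the preprint or the citing paper; the state count, feature dimensionality, and synthetic data are placeholder assumptions) using the lindermanlab/ssm package to label each frame with a discrete state and read off candidate change points:

import numpy as np
import ssm  # https://github.com/lindermanlab/ssm

num_states = 10   # assumed number of behavioral states
obs_dim = 16      # assumed dimensionality of per-frame pose features

# pose_features: (num_frames, obs_dim) array, e.g. egocentrically aligned keypoints;
# random data stands in for real tracking output here
pose_features = np.random.randn(5000, obs_dim)

hmm = ssm.HMM(num_states, obs_dim, observations="gaussian")
hmm.fit(pose_features, method="em", num_iters=50)

# Most likely state per frame, and the frames where the state changes
states = hmm.most_likely_states(pose_features)
change_points = np.flatnonzero(np.diff(states) != 0) + 1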
“…We recommend that video be recorded for all trials if only to allow retrospective analysis of those data in the future. There has been an explosion of AI-based methods for extracting key points on animals (Graving et al, 2019;Mathis et al, 2018;Pereira et al, 2022) and algorithms for extracting higher-level animal behaviors from key point (Hsu & Yttri, 2021;Luxem et al, 2022;Weinreb et al, 2023) or raw pixel (Bohnslav et al, 2021) data, and rapid advances are likely to continue. By capturing video before and after all stimuli, users can incorporate their own AI-based methods or manually score behaviors for consideration above and beyond reflex measurements.…”
Section: Discussion (mentioning)
confidence: 99%
“…To compare SaLSa's performance with an existing model, keypoint-MoSeq (Weinreb et al, 2023) was chosen for the following reasons: first, it overperformed other major models, such as B-SOiD (Hsu and Yttri, 2021) and VAME (Luxem et al, 2022). Second, the implementation is straightforward with a limited set of parameters to explore.…”
Section: Benchmark Testing (mentioning)
confidence: 99%
“…Over the last decade, a range of approaches has been developed to classify behavioral syllables (Kabra et al, 2013;Pereira et al, 2020;Wiltschko et al, 2020;Dunn et al, 2021;Hsu and Yttri, 2021;Segalin et al, 2021;Jia et al, 2022;Luxem et al, 2022;Harris et al, 2023;Luxem et al, 2023;Weinreb et al, 2023). Since these approaches including SaLSa are applied after body parts detection, they can be applied to videos taken in relatively dark environments as long as body parts are detected reliably.…”
Section: Comparisons To Other Approaches (mentioning)
confidence: 99%