2022
DOI: 10.1101/2022.11.03.515121
Preprint

Facemap: a framework for modeling neural activity based on orofacial tracking

Abstract: Recent studies in mice have shown that orofacial behaviors drive a large fraction of neural activity across the brain. To understand the nature and function of these signals, we need better computational models to characterize the behaviors and relate them to neural activity. Here we developed Facemap, a framework consisting of a keypoint tracking algorithm and a deep neural network encoder for predicting neural activity. We used the Facemap keypoints as input for the deep neural network to predict the activit…


Cited by 23 publications (21 citation statements)
References 67 publications
“…2B,C,D). Practitioners often detect outliers using a combination of low confidence and large temporal difference loss [16, 20, 32, 42]. Here we show that the standard approach can be complemented by multi-view and Pose PCA, which capture additional unique outliers.…”
Section: Results
Mentioning confidence: 83%
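The outlier-detection criterion described in this citation statement (flagging predictions with low confidence or a large temporal difference) can be sketched as follows. The function name, array shapes, and threshold values are illustrative assumptions, not values from the cited work:

```python
import numpy as np

def flag_outliers(keypoints, confidence, conf_thresh=0.5, jump_thresh=20.0):
    """Flag frames for one body part as outliers when the prediction has
    low confidence OR jumps a large pixel distance between frames.

    keypoints  : (T, 2) array of (x, y) predictions over T frames
    confidence : (T,) array of per-frame confidence scores
    Thresholds are hypothetical, chosen only for illustration.
    """
    # Euclidean distance between consecutive predictions (temporal difference)
    jumps = np.linalg.norm(np.diff(keypoints, axis=0), axis=1)
    jumps = np.concatenate([[0.0], jumps])  # pad so the result stays (T,)
    return (confidence < conf_thresh) | (jumps > jump_thresh)
```

Multi-view consistency and Pose PCA checks, as the statement notes, would then be applied on top of this per-keypoint filter to catch outliers this simple rule misses.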
“…We define the temporal difference loss for each body part as the Euclidean distance between consecutive predictions in pixels. Similar losses have been used by several practitioners to detect outlier predictions post-hoc [16, 32], whereas our goal here, following [34], is to incorporate these penalties directly into network training to achieve more accurate network output. Figure 2B illustrates this penalty: the cartoon in the left panel indicates a jump discontinuity we would like to penalize.…”
Section: Results
Mentioning confidence: 99%
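The temporal difference loss defined in this statement (the Euclidean distance between consecutive predictions, in pixels) can be written as a short sketch. The function name and the reduction to a per-body-part mean are assumptions; the actual training code may weight or threshold these distances differently:

```python
import numpy as np

def temporal_difference_loss(preds):
    """Mean Euclidean distance (pixels) between consecutive predictions,
    returned per body part.

    preds : (T, K, 2) array of (x, y) keypoints for K body parts
    """
    diffs = np.diff(preds, axis=0)           # (T-1, K, 2) frame-to-frame steps
    dists = np.linalg.norm(diffs, axis=-1)   # (T-1, K) Euclidean distances
    return dists.mean(axis=0)                # (K,) average jump per body part
```

Used post hoc, large values of `dists` flag outlier frames; incorporated as a training penalty, as the statement describes, the same quantity discourages jump discontinuities in the network's output.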
“…A locomotion time series was then created by averaging the movement time series of the left and right paws, which was then smoothed with a 0.5 s (15-frame) sliding window. Videographic recordings during optogenetic stimulation were analyzed using Facemap [51]. The motion energy was extracted from regions of interest (ROIs) placed over the whisker pad of the animal and over the wheel (without any body part visible), to obtain proxies of whisking and locomotion, respectively.…”
Section: Methods
Mentioning confidence: 99%
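The paw-averaging and sliding-window smoothing step described above can be sketched as below. The function name and the moving-average edge handling (`mode="same"`) are assumptions; the 15-frame window corresponds to 0.5 s at 30 fps per the statement:

```python
import numpy as np

def locomotion_trace(left_paw, right_paw, win=15):
    """Average left/right paw movement traces and smooth with a
    `win`-frame moving-average sliding window (15 frames = 0.5 s at 30 fps).
    Edge handling is an illustrative choice, not from the original analysis.
    """
    loco = 0.5 * (np.asarray(left_paw) + np.asarray(right_paw))
    kernel = np.ones(win) / win               # uniform smoothing kernel
    return np.convolve(loco, kernel, mode="same")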
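The paw-averaging and sliding-window smoothing step described above can be sketched as below. The function name and the moving-average edge handling (`mode="same"`) are assumptions; the 15-frame window corresponds to 0.5 s at 30 fps per the statement:

```python
import numpy as np

def locomotion_trace(left_paw, right_paw, win=15):
    """Average left/right paw movement traces and smooth with a
    `win`-frame moving-average sliding window (15 frames = 0.5 s at 30 fps).
    Edge handling is an illustrative choice, not from the original analysis.
    """
    loco = 0.5 * (np.asarray(left_paw) + np.asarray(right_paw))
    kernel = np.ones(win) / win               # uniform smoothing kernel
    return np.convolve(loco, kernel, mode="same")
```

A uniform (boxcar) kernel is the simplest reading of "sliding window"; a Gaussian kernel of matched width would be an equally plausible choice.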
“…Whisker pad movement was tracked using the python framework Facemap (Syeda et al, 2022) (https://github.com/MouseLand/facemap). Whisker-pad movement was measured by selecting an ROI over the whisker pad and calculating the motion energy index (MEI) across the video.…”
Section: Calcium Imaging Analysis
Mentioning confidence: 99%
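The motion energy index (MEI) over a whisker-pad ROI, as described in this statement, is conventionally the summed absolute pixel difference between consecutive frames. The sketch below is a generic MEI computation under that assumption, not Facemap's exact implementation:

```python
import numpy as np

def motion_energy(frames, roi):
    """Motion energy index: summed absolute pixel difference between
    consecutive frames within a rectangular ROI.

    frames : (T, H, W) grayscale video array
    roi    : (y0, y1, x0, x1) bounding box, e.g. over the whisker pad
    Returns a (T-1,) motion energy trace.
    """
    y0, y1, x0, x1 = roi
    patch = frames[:, y0:y1, x0:x1].astype(float)
    return np.abs(np.diff(patch, axis=0)).sum(axis=(1, 2))
```

The resulting trace rises whenever pixels inside the ROI change between frames, serving as the proxy for whisker-pad movement described above.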