A large-scale neural network training framework for generalized estimation of single-trial population dynamics
Nature Methods, 2022. DOI: 10.1038/s41592-022-01675-0

Cited by 30 publications (37 citation statements). References 26 publications.
Citation statement breakdown: 1 supporting, 29 mentioning, 0 contrasting.
“…In Figure 3, inferred firing rates from the KubeFlow-trained AutoLFADS model are shown along with conventional firing rate estimation strategies. Qualitatively, these example inferences are similar to those described in Keshtkaran et al. (2022), showing similar consistency across trials and resemblance to peristimulus time histograms (PSTH). In Figure 2, we plot the hyperparameter and associated loss values for the KubeFlow-based implementation of AutoLFADS to provide a visualization of the PBT-based optimization process on these data.…”
Section: Discussion (supporting)
confidence: 68%
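The PSTH that the statement above compares against is a standard baseline: bin spikes, average over trials, and lightly smooth. A minimal sketch follows; the function name, bin width, and Gaussian smoothing width are illustrative assumptions, not details taken from the cited work:

```python
import numpy as np

def psth(spike_counts, bin_width_s, sigma_bins=2.0):
    """Trial-averaged firing rate (PSTH) with Gaussian smoothing.

    spike_counts: array of shape (n_trials, n_bins) of binned spike counts.
    Returns the smoothed rate in spikes/s, shape (n_bins,).
    """
    mean_counts = spike_counts.mean(axis=0)  # average over trials
    # Build a small normalized Gaussian kernel for light smoothing.
    half = int(4 * sigma_bins)
    x = np.arange(-half, half + 1)
    kernel = np.exp(-0.5 * (x / sigma_bins) ** 2)
    kernel /= kernel.sum()
    smoothed = np.convolve(mean_counts, kernel, mode="same")
    return smoothed / bin_width_s  # convert counts per bin to spikes/s
```

Unlike a PSTH, which needs many repeated trials to average over, latent-variable models such as AutoLFADS aim to recover smooth rates on single trials.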
“…One model was trained with the Ray solution and the other with the KubeFlow solution using matching PBT hyperparameters and model configurations to ensure that models of comparable quality can be learned across both solutions. A comprehensive description of the AutoLFADS algorithm and results applying the algorithm to neural data using Ray can be found in Keshtkaran et al. (2022). We demonstrate similar converged model performances on metrics relevant to the quality of inferred firing rates in Table 2 (Pei et al., 2021).…”
Section: Discussion (mentioning)
confidence: 64%
“…We also introduce a metric, state R², which measures the fraction of inferred latent state variance explained by an affine transformation of the true latent states. Despite their success in reconstructing neural activity patterns [17,18,13], we find that RNN-based SAEs require many more latent dimensions than the synthetic systems they are attempting to model. Moreover, we find that the dynamics learned by the RNNs are a poor match to the synthetic systems, in that a large fraction of the models' variance reflects activity not seen in the synthetic system.…”
Section: (mentioning)
confidence: 86%
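The state R² metric described above has a direct least-squares reading: fit an affine map from the true latents to the inferred latents and report the variance explained. A minimal sketch, under the assumption that both latent trajectories are given as time-by-dimension arrays (the function name and exact normalization are illustrative, not taken from the cited paper):

```python
import numpy as np

def state_r2(true_latents, inferred_latents):
    """Fraction of inferred-latent variance explained by an affine
    transformation of the true latents.

    true_latents: (T, d_true); inferred_latents: (T, d_inf).
    Returns a scalar R^2 pooled across inferred dimensions.
    """
    T = true_latents.shape[0]
    # Affine map: append a constant column to absorb the offset.
    X = np.hstack([true_latents, np.ones((T, 1))])
    W, *_ = np.linalg.lstsq(X, inferred_latents, rcond=None)
    pred = X @ W
    ss_res = np.sum((inferred_latents - pred) ** 2)
    ss_tot = np.sum((inferred_latents - inferred_latents.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot
```

When the inferred latents are exactly an affine image of the true ones, this returns 1; extra variance in the inferred latents that no affine map of the true states can account for pushes it below 1.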
“…With the resurgence of deep learning over the past decade, a powerful class of methods has emerged that use RNNs to approximate f [17,18,30,13]. In head-to-head comparisons, RNN-based methods replicate neural activity patterns with substantially higher accuracy than LDSs on datasets from a variety of brain areas and behaviors, suggesting that linear dynamics may not adequately model the dynamics of neural systems [23].…”
Section: Related Work (mentioning)
confidence: 99%
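The LDS baseline that the RNN-based methods are compared against in the statement above can be stated compactly: model the latent update as x_{t+1} ≈ A x_t and fit A by least squares. A minimal sketch, with an illustrative function name and a noiseless fit (the cited comparisons use richer probabilistic LDS variants):

```python
import numpy as np

def fit_lds(states):
    """Fit linear dynamics x_{t+1} ~= A @ x_t by least squares.

    states: (T, d) latent trajectory; returns the (d, d) matrix A.
    """
    X, Y = states[:-1], states[1:]          # pairs (x_t, x_{t+1})
    M, *_ = np.linalg.lstsq(X, Y, rcond=None)  # Y ~= X @ M, so M = A.T
    return M.T
```

The suggestion in the quoted passage is that when the true f is nonlinear, no single matrix A fits the transitions well, which is where RNN-based approximations of f gain their accuracy advantage.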