Preprint, 2021
DOI: 10.1101/2021.06.03.446788

Scalable Bayesian GPFA with automatic relevance determination and discrete noise models

Abstract: Latent variable models are ubiquitous in the exploratory analysis of neural population recordings, where they allow researchers to summarize the activity of large populations of neurons in lower dimensional 'latent' spaces. Existing methods can generally be categorized into (i) Bayesian methods that facilitate flexible incorporation of prior knowledge and uncertainty estimation, but which typically do not scale to large datasets; and (ii) highly parameterized methods without explicit priors that scale better b…
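For orientation, here is a minimal sketch of the kind of generative model the abstract describes: smooth Gaussian-process latents, a linear readout, and a discrete (Poisson) noise model. The dimensions, RBF kernel, length-scale, and exponential link below are illustrative assumptions, not the paper's exact specification.

```python
# Sketch of a GPFA-style generative model: smooth GP latents, a linear
# readout, and discrete (Poisson) noise. All parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T, D, N = 200, 3, 50                          # time bins, latent dims, neurons

# An RBF kernel over time gives each latent dimension a smooth prior.
t = np.arange(T)[:, None]
K = np.exp(-0.5 * (t - t.T) ** 2 / 20.0 ** 2) + 1e-6 * np.eye(T)
L = np.linalg.cholesky(K)
X = L @ rng.standard_normal((T, D))           # latent trajectories (T x D)

# Linear mapping to neurons, then Poisson counts (a discrete noise model).
C = rng.standard_normal((D, N)) * 0.5         # loading matrix
rates = np.exp(X @ C - 2.0)                   # firing rates via exp link
Y = rng.poisson(rates)                        # observed spike counts (T x N)
```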

Cited by 10 publications (26 citation statements: 1 supporting, 25 mentioning, 0 contrasting)
References 49 publications
“…More recently, approaches based on projecting gradients into subspaces orthogonal to those that are important for previous tasks have been developed in both feedforward and recurrent neural networks. This is consistent with experimental findings that neural dynamics often occupy orthogonal subspaces across contexts in biological circuits (Kaufman et al., 2014; Ames and Churchland, 2019; Failor et al., 2021; Jensen et al., 2021). While these methods have been found to perform well in many continual learning settings, they also suffer from various shortcomings.…”
Section: Introduction (supporting)
confidence: 86%
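The projection idea in this statement can be made concrete with a short sketch: gradients for a new task are projected onto the complement of a subspace deemed important for earlier tasks. Building that subspace from an SVD of stored layer inputs is one common choice, assumed here for illustration; the cited methods differ in their details.

```python
# Sketch of orthogonal gradient projection for continual learning:
# new-task gradients are projected out of the subspace spanned by
# directions important on earlier tasks. The SVD-of-stored-inputs
# construction of that subspace is an illustrative choice.
import numpy as np

def important_subspace(old_inputs, energy=0.99):
    """Orthonormal basis capturing most variance of earlier-task inputs."""
    U, s, _ = np.linalg.svd(old_inputs.T, full_matrices=False)
    keep = np.cumsum(s ** 2) / np.sum(s ** 2) <= energy
    return U[:, keep]                          # columns span the 'protected' space

def project_gradient(grad, basis):
    """Remove the gradient component lying in the protected subspace."""
    return grad - basis @ (basis.T @ grad)

# Toy usage: a weight gradient for a layer with 64 inputs.
rng = np.random.default_rng(1)
old_acts = rng.standard_normal((1000, 64))     # inputs seen on previous tasks
B = important_subspace(old_acts)
g = rng.standard_normal((64, 32))              # gradient w.r.t. weights (in x out)
g_safe = project_gradient(g, B)                # update avoids the protected subspace
assert np.allclose(B.T @ g_safe, 0.0, atol=1e-8)
```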
“…To further investigate how the RNNs solve the continual learning problems and how this relates to the neuroscience literature, we dissected the dynamics of networks trained on the SMNIST task set using the NCL algorithm (see Section H.4 for an equivalent analysis with DOWM). To do this, we analyzed latent representations of the RNN activity trajectories, as is commonly done to study the collective dynamics of artificial and biological networks (Yu et al., 2009; Gallego et al., 2020; Jensen et al., 2020; Mante et al., 2013; Jensen et al., 2021). We considered two consecutive classification tasks, namely classifying 4's vs 5's (k = 2) and classifying 1's vs 7's (k = 3).…”
Section: Dissecting the Dynamics of Network Trained on the SMNIST Tas... (mentioning)
confidence: 99%
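A minimal sketch of this kind of latent-trajectory analysis follows, with PCA standing in for the dimensionality-reduction step and a subspace-overlap score for comparing the two tasks; the specific method and dimensionalities used in the cited work may differ.

```python
# Sketch of the latent-trajectory analysis described above: project RNN
# activity onto a few principal components and compare the subspaces
# used by two consecutive tasks. PCA is an illustrative stand-in.
import numpy as np

def top_pcs(activity, k=3):
    """Return the top-k principal axes of (time x units) activity."""
    centered = activity - activity.mean(axis=0)
    _, _, Vt = np.linalg.svd(centered, full_matrices=False)
    return Vt[:k]                              # (k x units), orthonormal rows

def subspace_overlap(V1, V2):
    """Fraction of task-1 axes captured by task-2 axes, in [0, 1]."""
    return np.linalg.norm(V1 @ V2.T) ** 2 / V1.shape[0]

rng = np.random.default_rng(2)
act_task1 = rng.standard_normal((500, 100))    # activity on task k=2 (4's vs 5's)
act_task2 = rng.standard_normal((500, 100))    # activity on task k=3 (1's vs 7's)
V1, V2 = top_pcs(act_task1), top_pcs(act_task2)
print("overlap:", subspace_overlap(V1, V2))    # near 0 for orthogonal subspaces
```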
“…This was revealed by binning the angular space into 8 reach directions, temporally segmenting and grouping the inferred firing rates according to the momentary reach direction, and aligning these segments to the time of target onset (Figure 4B). Moreover, hand kinematics could be linearly decoded from the inferred firing rates with high accuracy (Figure 4C; R² = 0.75 ± 0.01 over 5 random seeds), on par with AutoLFADS (R² = 0.76; Keshtkaran et al., 2021), and considerably higher than GPFA and related approaches (R² = 0.6; Jensen et al., 2021).…”
Section: Experiments and Results (mentioning)
confidence: 91%
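A minimal sketch of the linear decoding protocol behind these R² numbers, assuming ridge regression as the linear decoder and synthetic data in place of inferred firing rates; the cited papers may use a different linear readout or cross-validation scheme.

```python
# Sketch of linear kinematic decoding as reported above: fit a ridge
# regression from inferred firing rates to hand kinematics and score
# it with R². Ridge and alpha are illustrative choices.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
rates = rng.standard_normal((2000, 150))       # inferred rates (bins x neurons)
# Toy 2D kinematics driven by a couple of neurons plus noise.
kinematics = rates[:, :2] @ rng.standard_normal((2, 2)) \
             + 0.1 * rng.standard_normal((2000, 2))

X_tr, X_te, y_tr, y_te = train_test_split(
    rates, kinematics, test_size=0.25, random_state=0)
decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("decoding R²:", r2_score(y_te, decoder.predict(X_te)))
```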
“…bGPFA learns smoother trajectories. On the other hand, fitting iLQR-VAE with a Gaussian prior with no temporal structure allows it to capture more variance in the firing rates, which in turn leads to better decoding of the kinematics. As a further way of understanding the relative benefits and disadvantages of iLQR-VAE, we compared its performance with bGPFA, a fully Bayesian extension of GPFA (Yu et al., 2009) that enables the use of non-Gaussian likelihoods, scales to very large datasets, and was recently shown to outperform standard GPFA on this same continuous reaching dataset (Jensen et al., 2021). Importantly, bGPFA makes different assumptions from iLQR-VAE, as it places a smooth prior directly on the latents with no explicit notion of dynamics.…”
Section: Additional Related Work (mentioning)
confidence: 99%
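The distinction drawn here, a smooth prior placed directly on the latents versus an explicit dynamics model, can be illustrated with a short sketch; the kernel length-scale and transition parameters below are arbitrary choices for illustration.

```python
# Sketch of the modeling difference noted above: (i) a GP prior directly
# on a latent trajectory, with smoothness fixed by a kernel and no
# dynamics model, versus (ii) an explicit linear-dynamics prior where
# the latent evolves by a transition rule. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(4)
T = 300
t = np.arange(T)[:, None]

# (i) GP prior on the latent: covariance fixed by an RBF kernel.
K = np.exp(-0.5 * (t - t.T) ** 2 / 15.0 ** 2) + 1e-6 * np.eye(T)
x_gp = np.linalg.cholesky(K) @ rng.standard_normal(T)

# (ii) Dynamics prior: latent evolves by an explicit transition rule.
x_dyn = np.zeros(T)
for k in range(1, T):
    x_dyn[k] = 0.95 * x_dyn[k - 1] + 0.3 * rng.standard_normal()
```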