2021
DOI: 10.48550/arxiv.2111.00070
Preprint
Deep inference of latent dynamics with spatio-temporal super-resolution using selective backpropagation through time

Abstract: Modern neural interfaces allow access to the activity of up to a million neurons within brain circuits. However, bandwidth limits often create a trade-off between greater spatial sampling (more channels or pixels) and the temporal frequency of sampling. Here we demonstrate that it is possible to obtain spatio-temporal super-resolution in neuronal time series by exploiting relationships among neurons, embedded in latent low-dimensional population dynamics. Our novel neural network training strategy, selective b…

Cited by 2 publications (5 citation statements)
References 28 publications
“…Given models trained on fully observed neural observations, we inferred latent factors in the test set that had missing observations. SAE did so by imputing missing observations in the test set to zero as done previously 33,34 , whereas DFINE did so through its new flexible inference method. We then used the inferred factors in the test set to predict behavior variables.…”
Section: Author Contributions (citation type: mentioning)
confidence: 99%
“…Because SAE's decoder network is modeled with an RNN, it is structured to take neural observation inputs at every time-step. To handle missing observations for SAE at inference, we imputed them with zeros in the test set as previously done 33,34 , extracted the latent factors, and computed the associated behavior prediction accuracy. DFINE outperformed SAE and, interestingly, this improvement grew larger at lower observed datapoint ratios (Supplementary Fig.…”
Section: DFINE Can Perform Flexible Inference With Missing Observations (citation type: mentioning)
confidence: 99%
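The zero-imputation step described in the quoted passages (setting missing neural observations to zero before feeding them to a model whose RNN decoder expects input at every time step) can be sketched as follows. This is a minimal illustration, not the cited papers' implementation; the array shapes and the mask convention (True = observed) are assumptions for the example.

```python
import numpy as np

def zero_impute(observations: np.ndarray, observed_mask: np.ndarray) -> np.ndarray:
    """Return a copy of a (time x channels) observation array with
    unobserved entries set to zero, leaving the input untouched."""
    imputed = observations.copy()
    imputed[~observed_mask] = 0.0
    return imputed

# Example: 4 time steps, 3 channels, one missing sample.
x = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0],
              [1.5, 2.5, 3.5]])
mask = np.ones_like(x, dtype=bool)
mask[1, 2] = False          # channel 2 unobserved at t = 1

x_imp = zero_impute(x, mask)
print(x_imp[1])             # → [4. 5. 0.]
```

After imputation the model is run on `x_imp` as if the series were fully observed; as the second quote notes, this simple strategy degrades as the fraction of observed datapoints shrinks, which is the regime where DFINE's flexible inference is reported to help.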