iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data

Preprint, 2021. DOI: 10.1101/2021.10.07.463540

Abstract: Understanding how neural dynamics give rise to behaviour is one of the most fundamental questions in systems neuroscience. To achieve this, a common approach is to record neural populations in behaving animals, and model these data as emanating from a latent dynamical system whose state trajectories can then be related back to behavioural observations via some form of decoding. As recordings are typically performed in localized circuits that form only a part of the wider implicated network, it is important to …

Cited by 12 publications (18 citation statements). References 70 publications.
“…Here, we address the challenge of preferential modeling of neural-behavioral data with measured inputs, which has been unresolved. For non-preferential modeling of neural data on its own, and when inputs are not measured, prior studies have looked at the distinct problem of separating the recorded neural dynamics into intrinsic dynamics and a dynamic input that is inferred 12,56,57 . This decomposition is typically done by making certain a priori assumptions about the input such that it can be inferred, for example that the input is constrained to be considerably less dynamic than the intrinsic neural dynamics, or that the input is sparse or spatiotemporally independent 12,56 .…”
Section: Discussion
confidence: 99%
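The decomposition described in this quote can be illustrated with a minimal numpy sketch. All matrices below are synthetic toy values, not from the cited studies: given a known dynamics matrix A and input matrix B, the inputs can be read off the one-step residuals of the latent trajectory. Real methods must infer the inputs jointly with the unknown dynamics, which is exactly why the a priori constraints (slowness, sparsity, independence) mentioned in the quote are needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy LDS: x_{t+1} = A x_t + B u_t (illustrative values only).
n, m, T = 4, 2, 200
A = 0.9 * np.linalg.qr(rng.standard_normal((n, n)))[0]  # stable dynamics
B = rng.standard_normal((n, m))                         # input matrix

# Ground-truth sparse input: active only in a brief window, as in the
# "input is sparse" assumption described in the quote.
u_true = np.zeros((T, m))
u_true[50:60] = rng.standard_normal((10, m))

# Simulate the latent trajectory.
x = np.zeros((T + 1, n))
for t in range(T):
    x[t + 1] = A @ x[t] + B @ u_true[t]

# Infer inputs from the one-step residuals: solve B u_t ≈ x_{t+1} - A x_t.
resid = x[1:] - x[:-1] @ A.T
u_hat = np.linalg.lstsq(B, resid.T, rcond=None)[0].T

print(np.allclose(u_hat, u_true, atol=1e-6))  # → True
```

In this noiseless toy setting the recovery is exact because A and B are known; the hard problem the quote refers to arises when they must be learned from data at the same time.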
“…A previous application of such models has led to the discovery of line attractor dynamics in the hypothalamus of mice during decisions underlying aggression 43 . In combination with methods from control theory, LDS can also be used to infer inputs that are optimal for a given task, like bringing brain activity into healthy regimes in biomedical applications 44 or optimally configuring cortical dynamics during movement preparation 37–39 . Here, we found that our fitted LDS models are fully controllable 37 (data not shown), and applied methods from control theory to identify the most amplifying dimensions of the dynamics 22 , but an exhaustive analysis of this type is beyond the scope of our study.…”
Section: Discussion
confidence: 99%
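The controllability check and the "most amplifying dimensions" mentioned in this quote can be sketched for a toy discrete-time LDS. The matrices below are illustrative assumptions, not the fitted models from the study; the system is deliberately non-normal, so it is stable yet transiently amplifying.

```python
import numpy as np

# Hypothetical 2-D non-normal LDS x_{t+1} = A x_t + B u_t (toy values):
# both eigenvalues are 0.9, so the system is stable, but the large
# off-diagonal term produces transient amplification.
A = np.array([[0.9, 2.0],
              [0.0, 0.9]])
B = np.array([[0.0],
              [1.0]])
n = A.shape[0]

# Kalman controllability matrix [B, AB, ..., A^{n-1}B]:
# the system is fully controllable iff it has rank n.
blocks = [B]
for _ in range(n - 1):
    blocks.append(A @ blocks[-1])
C = np.hstack(blocks)
print("fully controllable:", np.linalg.matrix_rank(C) == n)  # → True

# Most amplifying direction over k steps: the top right singular vector
# of A^k, i.e. the unit initial state whose norm grows the most.
k = 5
_, s, Vt = np.linalg.svd(np.linalg.matrix_power(A, k))
print("k-step gain:", s[0], "along direction", Vt[0])
```

Even though every eigenvalue is below 1, the k-step gain here exceeds 1, which is the signature of amplifying dimensions that the quoted analysis looks for.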
“…We also introduce a metric, state R², which measures the fraction of inferred latent state variance explained by an affine transformation of the true latent states. Despite their success in reconstructing neural activity patterns [17,18,13], we find that RNN-based SAEs require many more latent dimensions than the synthetic systems they are attempting to model. Moreover, we find that the dynamics learned by the RNNs are a poor match to the synthetic systems, in that a large fraction of the models' variance reflects activity not seen in the synthetic system.…”
confidence: 86%
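A hypothetical re-implementation of a metric matching the quoted description of state R² might look as follows. This is an assumption about the exact definition (fit an affine map from true to inferred latents, then measure variance explained), not the cited authors' code.

```python
import numpy as np

rng = np.random.default_rng(2)

def state_r2(z_true, z_hat):
    """Fraction of inferred latent variance explained by an affine
    transformation of the true latents (illustrative definition)."""
    T = z_true.shape[0]
    X = np.hstack([z_true, np.ones((T, 1))])       # affine design matrix
    W, *_ = np.linalg.lstsq(X, z_hat, rcond=None)  # fit affine map
    resid = z_hat - X @ W
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((z_hat - z_hat.mean(axis=0)) ** 2)
    return 1.0 - ss_res / ss_tot

# Inferred latents that are an exact affine transform of the true ones
# should score (numerically) 1.0 under this definition.
z_true = rng.standard_normal((300, 3))
z_hat = z_true @ rng.standard_normal((3, 3)) + 0.5
print(state_r2(z_true, z_hat))
```

Under this definition, extra latent dimensions in the model that carry activity unrelated to the true system drive state R² down, which matches how the quote uses the metric.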
“…With the resurgence of deep learning over the past decade, a powerful class of methods has emerged that use RNNs to approximate f [17,18,30,13]. In head-to-head comparisons, RNN-based methods replicate neural activity patterns with substantially higher accuracy than LDSs on datasets from a variety of brain areas and behaviors, suggesting that linear dynamics may not adequately model the dynamics of neural systems [23].…”
Section: Related Work
confidence: 99%
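To make concrete what "use RNNs to approximate f" means structurally, here is a minimal vanilla-RNN transition function with random, untrained weights. This is purely illustrative: the methods cited in the quote learn such weights from data (e.g. inside sequential autoencoders), and typically use gated architectures rather than this plain tanh form.

```python
import numpy as np

rng = np.random.default_rng(3)

# A vanilla RNN as a parametric stand-in for the flow f in
# x_{t+1} = f(x_t, u_t); weight values here are random placeholders.
n, m = 8, 2
W = rng.standard_normal((n, n)) / np.sqrt(n)   # recurrent weights
U = rng.standard_normal((n, m)) / np.sqrt(m)   # input weights
b = np.zeros(n)                                # bias

def f(x, u):
    """One step of the latent dynamics."""
    return np.tanh(W @ x + U @ u + b)

# Roll out a latent trajectory from rest under random inputs.
T = 50
x = np.zeros(n)
traj = []
for t in range(T):
    x = f(x, rng.standard_normal(m))
    traj.append(x)
traj = np.array(traj)
print(traj.shape)  # → (50, 8)
```

The contrast the quote draws is between this nonlinear parametric f and a fixed linear map x_{t+1} = A x_t + B u_t: the RNN can express the curved flow fields that linear dynamics cannot.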