2017
DOI: 10.1101/152884
Preprint

Inferring single-trial neural population dynamics using sequential auto-encoders

Abstract: Neuroscience is experiencing a data revolution in which simultaneous recording of many hundreds or thousands of neurons is revealing structure in population activity that is not apparent from single-neuron responses. This structure is typically extracted from trial-averaged data. Single-trial analyses are challenging due to incomplete sampling of the neural population, trial-to-trial variability, and fluctuations in action potential timing. Here we introduce Latent Factor Analysis via Dynamical Systems (LFADS)…
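The abstract describes a sequential auto-encoder that infers denoised single-trial firing rates from spiking data. As a minimal sketch of that idea, assuming PyTorch, the toy model below encodes binned spike counts into a sampled initial condition, unrolls a generator RNN into low-dimensional factors, and trains against a Poisson reconstruction loss plus a KL penalty. The layer sizes, variable names, and the omission of inferred inputs are simplifications for illustration, not the authors' implementation.

```python
# Minimal LFADS-style sequential auto-encoder sketch (assumes PyTorch).
import torch
import torch.nn as nn

class SequentialAutoencoder(nn.Module):
    def __init__(self, n_neurons, enc_dim=64, latent_dim=32, factor_dim=8):
        super().__init__()
        # Encoder: bidirectional RNN reads the whole spike-count sequence.
        self.encoder = nn.GRU(n_neurons, enc_dim, bidirectional=True, batch_first=True)
        self.to_mean = nn.Linear(2 * enc_dim, latent_dim)
        self.to_logvar = nn.Linear(2 * enc_dim, latent_dim)
        # Generator: an RNN unrolled forward from the inferred initial condition.
        self.generator = nn.GRUCell(1, latent_dim)  # no inferred inputs in this sketch
        self.to_factors = nn.Linear(latent_dim, factor_dim)
        self.to_logrates = nn.Linear(factor_dim, n_neurons)

    def forward(self, spikes):
        # spikes: (batch, time, n_neurons) binned spike counts
        B, T, _ = spikes.shape
        _, h = self.encoder(spikes)              # h: (2, batch, enc_dim)
        h = torch.cat([h[0], h[1]], dim=-1)
        mean, logvar = self.to_mean(h), self.to_logvar(h)
        # Reparameterized sample of the generator initial state g0.
        g = mean + torch.randn_like(mean) * (0.5 * logvar).exp()
        dummy = spikes.new_zeros(B, 1)
        lograte_seq = []
        for _ in range(T):
            g = self.generator(dummy, g)
            lograte_seq.append(self.to_logrates(self.to_factors(g)))
        logrates = torch.stack(lograte_seq, dim=1)
        # Poisson reconstruction loss plus KL penalty on the initial condition.
        rec = nn.functional.poisson_nll_loss(logrates, spikes, log_input=True)
        kl = -0.5 * torch.mean(1 + logvar - mean.pow(2) - logvar.exp())
        return rec + kl, logrates.exp()  # loss, inferred single-trial rates
```

Fitting this objective to binned spike counts yields per-trial rate estimates (the second return value), which is the sense in which the model "denoises" single trials.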

Cited by 194 publications (421 citation statements).
References 35 publications (41 reference statements).
“…We evaluate the performance of our DRNN on 12 neural features: High-frequency, Mid-frequency, and Low-frequency Wavelet features (HWT, MWT, LWT); High-frequency, Mid-frequency, and Low-frequency Fourier powers (HFT, MFT, LFT); Latent Factor Analysis via Dynamical Systems (LFADS) features [23]; High-Pass and Low-Pass Filtered (HPF, LPF) data; Threshold Crossings (TCs); Multi-Unit Activity (MUA); and combined MWT and TCs (MWT + TCs) (Table 1).…”
Section: Deep Multi-state Dynamic Recurrent Neural Network (mentioning; confidence: 99%)
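Several of the features listed in the quoted passage are simple to compute directly from the broadband recording. A rough sketch of two of them, threshold crossings (TCs) and low-pass filtered (LPF) data, assuming NumPy/SciPy; the −4.5×RMS threshold, 300 Hz high-pass cutoff, and 10 ms bins are illustrative choices, not parameters taken from the paper.

```python
# Illustrative TC and LPF feature extraction (assumes NumPy/SciPy).
import numpy as np
from scipy.signal import butter, filtfilt

def threshold_crossings(broadband, fs, bin_ms=10, thresh_rms=-4.5):
    # High-pass the broadband signal to isolate spiking activity.
    b, a = butter(4, 300 / (fs / 2), btype="high")
    hp = filtfilt(b, a, broadband)
    thresh = thresh_rms * np.sqrt(np.mean(hp ** 2))
    # Detect downward crossings of the threshold.
    crossings = (hp[1:] < thresh) & (hp[:-1] >= thresh)
    # Bin crossing counts to form the TC feature vector.
    bin_len = int(fs * bin_ms / 1000)
    n_bins = crossings.size // bin_len
    return crossings[: n_bins * bin_len].reshape(n_bins, bin_len).sum(axis=1)

def low_pass_feature(broadband, fs, cutoff=10):
    # LPF feature: the slow component of the broadband signal.
    b, a = butter(4, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, broadband)
```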
“…LFADS is a generalization of variational auto-encoders that can be used to model time-varying aspects of neural signals. Pandarinath et al. [23] show that decoding performance improves when LFADS is used to infer smoothed and denoised firing rates. We used LFADS to generate features based on the trial-by-trial threshold crossings from each center-out task.…”
Section: Deep Multi-state Dynamic Recurrent Neural Network (mentioning; confidence: 99%)
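The quoted statement uses LFADS-inferred smoothed, denoised rates as decoding features. For context, the usual non-model-based alternative is Gaussian-kernel smoothing of binned threshold crossings; a minimal sketch assuming SciPy, with an illustrative 30 ms kernel width:

```python
# Gaussian-smoothed firing-rate baseline (assumes NumPy/SciPy).
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smoothed_rates(binned_counts, bin_ms=10, kernel_ms=30):
    # binned_counts: (time_bins, n_channels) threshold-crossing counts
    sigma_bins = kernel_ms / bin_ms
    rates = gaussian_filter1d(binned_counts.astype(float), sigma_bins, axis=0)
    return rates * (1000.0 / bin_ms)  # counts per bin -> spikes per second
```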
“…For example, motor coordination patterns across muscles (e.g. muscle synergies (31)) and population recordings of M1 neurons in motor cortex (32) all consider how populations of units encode movement through spike rate. Alternatively, spike timing codes may play a role in the coordination of muscles in motor systems.…”
Section: Significance Statement (mentioning; confidence: 99%)
“…We previously devised a metric for quantifying the relevance of neural signal features to the stimulus from which they were evoked, which we termed feature-learnability (Loutit et al., 2019). This approach uses a simple feedforward back-propagation supervised artificial neural network (ANN), which has several advantages over the use of state-of-the-art deep neural network (DNN) architectures (Pandarinath et al., 2018): an ANN enables the quantification of information content from signal features of interest that may be selected, for example, on the basis of i) their neurophysiological relevance, ii) their capacity to mimic or inform electrical stimulation in sensory applications, and/or iii) their common use for decoding neural signals. DNNs would not permit us to directly test specific features of interest, but rather would identify abstract features that may have little identifiable relevance.…”
Section: Introduction (mentioning; confidence: 99%)
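The quoted passage measures how well a simple feedforward ANN can recover the evoking stimulus from a chosen feature set ("feature-learnability"). A minimal sketch of that procedure, assuming scikit-learn; the network size, train/test split, and accuracy scoring are illustrative choices, not the configuration of Loutit et al. (2019).

```python
# Feature-learnability sketch: held-out accuracy of a small feedforward
# ANN classifying the stimulus from a feature vector (assumes scikit-learn).
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

def feature_learnability(features, stimulus_labels, seed=0):
    # features: (n_trials, n_features); stimulus_labels: (n_trials,)
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, stimulus_labels, test_size=0.25, random_state=seed)
    ann = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=seed)
    ann.fit(X_tr, y_tr)
    # Held-out classification accuracy serves as the learnability score.
    return ann.score(X_te, y_te)
```

Because the ANN is fed hand-chosen feature vectors rather than raw signals, the resulting score is attributable to the specific feature under test, which is the advantage over end-to-end DNNs that the quoted authors describe.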