2022
DOI: 10.1101/2022.01.19.476851
Preprint

Heterogeneity in Neuronal Dynamics is Learned by Gradient Descent for Temporal Processing Tasks

Abstract: Individual neurons in the brain have complex intrinsic dynamics that are highly diverse. We hypothesize that the complex dynamics produced by networks of complex and heterogeneous neurons may contribute to the brain's ability to process and respond to temporally complex data. To study the role of complex and heterogeneous neuronal dynamics in network computation, we develop a rate-based neuronal model, the generalized-leaky-integrate-and-firing-rate (GLIFR) model, which is a rate-equivalent of the generalized-…
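The abstract gives only a high-level description of the GLIFR model. As a rough illustration of the general idea it names (a rate-based neuron whose intrinsic parameters, such as membrane time constants and thresholds, are trained by gradient descent alongside the synaptic weights, so that heterogeneity can emerge even from homogeneous initialization), a minimal PyTorch sketch follows. The class name RateUnitLayer, the sigmoid rate nonlinearity, and all hyperparameters are illustrative assumptions, not the authors' exact GLIFR equations:

    # Minimal sketch, not the authors' equations: a leaky-integrator rate unit
    # with per-neuron trainable intrinsic parameters (membrane time constant,
    # soft threshold), optimized end to end by gradient descent.
    import torch
    import torch.nn as nn

    class RateUnitLayer(nn.Module):
        def __init__(self, n_in, n_units, dt=1.0):
            super().__init__()
            self.dt = dt
            self.w_in = nn.Linear(n_in, n_units, bias=False)
            self.w_rec = nn.Linear(n_units, n_units, bias=False)
            # Intrinsic parameters are learned per neuron; identical
            # (homogeneous) initialization can become heterogeneous with training.
            self.log_tau = nn.Parameter(torch.zeros(n_units))    # membrane time constant (log-space)
            self.threshold = nn.Parameter(torch.zeros(n_units))  # soft firing threshold

        def forward(self, x):
            # x: (batch, time, n_in) -> rates: (batch, time, n_units)
            batch, T, _ = x.shape
            v = x.new_zeros(batch, self.threshold.numel())  # membrane potential
            r = torch.zeros_like(v)                         # firing rate
            tau = torch.exp(self.log_tau)
            rates = []
            for t in range(T):
                drive = self.w_in(x[:, t]) + self.w_rec(r)
                v = v + (self.dt / tau) * (-v + drive)      # leaky integration
                r = torch.sigmoid(v - self.threshold)       # smooth rate nonlinearity
                rates.append(r)
            return torch.stack(rates, dim=1)

    # Toy usage: all parameters, intrinsic ones included, are trained together.
    layer = RateUnitLayer(n_in=1, n_units=32)
    readout = nn.Linear(32, 1)
    opt = torch.optim.Adam(list(layer.parameters()) + list(readout.parameters()), lr=1e-3)
    x = torch.randn(8, 50, 1)  # toy temporal input
    target = torch.sin(torch.linspace(0, 3.14, 50)).expand(8, 50).unsqueeze(-1)
    loss = nn.functional.mse_loss(readout(layer(x)), target)
    loss.backward()
    opt.step()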

Cited by 2 publications (5 citation statements)
References 25 publications (30 reference statements)
“…Furthermore, we observe that when the activation function γ is initialized homogeneously, the optimization procedure leads to heterogeneity in the activation functions across the network (Fig. 3a top). See Winston et al (2022a) for similar results when AFs are parametrized following known relations between ionic currents and f-I curves. Further experiments (details included in Appendix §B.2) consider trained RNN+γ networks reading psMNIST digits rotated by π/4 rad.…”
Section: Results (mentioning)
confidence: 73%
“…3a top). See Winston et al (2022a) for similar results when AFs are parametrized following known relations between ionic currents and f-I curves. Further experiments (details included in Appendix §B.2) consider trained RNN+γ networks reading psMNIST digits rotated by π/4 rad.…”
Section: Top-down Optimization of Adaptive RNNs Recovers Biological D... (mentioning)
confidence: 73%
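The statements above describe recurrent networks whose activation function γ carries per-neuron learnable parameters, initialized identically and drifting toward heterogeneity during training. Below is a minimal sketch of that setup, assuming a simple gain/offset softplus parametrization rather than the cited paper's f-I-curve-based form; GammaRNNCell and all other names are hypothetical:

    # Hedged sketch: an RNN cell whose per-neuron activation function gamma
    # has learnable shape parameters, so homogeneous initialization can become
    # heterogeneous under gradient descent.
    import torch
    import torch.nn as nn

    class GammaRNNCell(nn.Module):
        def __init__(self, n_in, n_hidden):
            super().__init__()
            self.w_in = nn.Linear(n_in, n_hidden)
            self.w_rec = nn.Linear(n_hidden, n_hidden, bias=False)
            # Homogeneous initialization: every neuron starts with the same
            # activation-function shape.
            self.gain = nn.Parameter(torch.ones(n_hidden))
            self.offset = nn.Parameter(torch.zeros(n_hidden))

        def gamma(self, z):
            # Per-neuron activation: softplus with learnable gain and offset.
            return nn.functional.softplus(self.gain * (z - self.offset))

        def forward(self, x, h):
            # x: (batch, n_in), h: (batch, n_hidden) -> new hidden state
            return self.gamma(self.w_in(x) + self.w_rec(h))

    # Toy usage on a pixel-sequence input in the spirit of psMNIST.
    cell = GammaRNNCell(n_in=1, n_hidden=64)
    h = torch.zeros(16, 64)
    for t in range(10):
        h = cell(torch.randn(16, 1), h)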
“…To address our main question, this study focuses on the effect of various learning rules while holding data, objective function and architecture constant (see [10]). However, these different components can interact, and more sophisticated architecture can facilitate task learning [154][155][156][157][158][159][160][161]. Given the exploding parameter space resulting from such interactions, we believe it requires careful future analysis and is outside of the scope for this one paper.…”
Section: Deep Learning Theory and Its Implications (mentioning)
confidence: 99%