2021
DOI: 10.48550/arxiv.2105.05382
Preprint
Current State and Future Directions for Learning in Biological Recurrent Neural Networks: A Perspective Piece

Cited by 3 publications (3 citation statements, 2023–2024) · References 0 publications

Citation statements:
“…Altogether, reservoir computing, and recurrent neural network models in general, open a world of possibilities to provide mechanistic explanations for the way computations take place in real brain networks. By avoiding biologically implausible credit assignments derived from backpropagation training [34], reservoir computing becomes ideal for this purpose. The general conn2res workflow requires the following parameters to be provided by the user: i) a task name or a supervised learning dataset; ii) a connectome or connectivity matrix; iii) a set of input nodes; iv) a set of readout nodes or modules; v) the type of local dynamics, which can be either spiking neurons, artificial neurons (with a variety of activation functions), or memristive devices (for the simulation of physical reservoirs); and vi) the linear model to be trained in the readout module.…”
Section: The Brain As a Reservoir
confidence: 99%
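The six-ingredient workflow quoted above maps naturally onto a generic reservoir-computing pipeline. The sketch below is not the actual conn2res API; the random "connectome", node selections, toy task, and every name in it are illustrative assumptions, with scikit-learn's Ridge standing in for the trained linear readout.

# Illustrative reservoir-computing pipeline mirroring the six user-supplied
# ingredients in the quoted workflow. This is NOT the conn2res API; every
# name and the random "connectome" below are hypothetical stand-ins.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# ii) connectome / connectivity matrix (random surrogate here), rescaled so
#     its spectral radius is below 1 and the reservoir dynamics stay stable
n_nodes = 100
W = rng.standard_normal((n_nodes, n_nodes)) / np.sqrt(n_nodes)
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

# iii) input nodes and iv) readout nodes: arbitrary disjoint subsets
input_nodes = np.arange(0, 10)
readout_nodes = np.arange(50, 100)

# i) task / supervised dataset: reproduce a delayed copy of a random signal
T, delay = 1000, 5
u = rng.uniform(-1, 1, size=T)
y_target = np.roll(u, delay)

# v) local dynamics: tanh artificial neurons (spiking or memristive units
#    would replace this update rule)
x = np.zeros(n_nodes)
states = np.zeros((T, n_nodes))
for t in range(T):
    drive = np.zeros(n_nodes)
    drive[input_nodes] = u[t]
    x = np.tanh(W @ x + drive)
    states[t] = x

# vi) linear model fit only on the readout nodes; the reservoir weights W
#     are never updated, so no backpropagation through the network occurs
washout = 100  # discard the initial transient
readout = Ridge(alpha=1e-3)
readout.fit(states[washout:, readout_nodes], y_target[washout:])
print("readout R^2:", readout.score(states[washout:, readout_nodes], y_target[washout:]))

Because only the ridge regression is fit, swapping in a real connectome, different input/readout node sets, or another dynamical rule changes nothing downstream of the collected state matrix.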
“…In reservoir networks learning occurs only at the readout connections, and hence the main architecture of the reservoir does not require specific weight calibration, remaining fixed throughout training. This eliminates a confounder while avoiding biologically implausible credit assignment problems such as the use of backpropagation training [34]. These reasons make reservoir…”
Section: Introduction
confidence: 99%
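The same point, that gradient-based credit assignment never touches the reservoir, can be made concrete by looking at which parameters an optimizer receives. A hypothetical PyTorch fragment (all names and the toy task are assumptions, not from the cited papers):

# Hypothetical illustration: only the linear readout is registered with the
# optimizer; the recurrent weight matrix W is frozen, so backpropagation
# can never adjust the reservoir itself.
import torch

n_nodes = 100
W = torch.randn(n_nodes, n_nodes) / n_nodes ** 0.5  # fixed "connectome"
W.requires_grad_(False)                             # never trained

readout = torch.nn.Linear(n_nodes, 1)               # the only trainable part
opt = torch.optim.Adam(readout.parameters(), lr=1e-3)

x = torch.zeros(n_nodes)
u = torch.rand(200)                                 # toy input stream
for t in range(200):
    drive = torch.zeros(n_nodes)
    drive[0] = u[t]
    x = torch.tanh(W @ x + drive)                   # reservoir update, no grad
    loss = ((readout(x) - u[t]) ** 2).mean()        # e.g. reconstruct the input
    opt.zero_grad()
    loss.backward()                                 # gradient reaches readout only
    opt.step()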
“…It is known that RNNs are of importance in understanding the brain (Prince et al, 2021). It also is known that RNNs are, in theory, Turing Complete (Chung and Siegelmann 2021), and therefore, each point of an RNN's weight-space potentially represents a different possible algorithm.…”
Section: Introduction
confidence: 99%