2022
DOI: 10.1101/2022.11.17.516914
Preprint

Spatially-embedded recurrent neural networks reveal widespread links between structural and functional neuroscience findings

Abstract: Brain networks exist within the confines of resource limitations. As a result, a brain network must overcome the metabolic costs of growing and sustaining the network within its physical space, while simultaneously implementing its required information processing. To observe the effect of these processes, we introduce the spatially-embedded recurrent neural network (seRNN). seRNNs learn basic task-related inferences while existing within a 3D Euclidean space, where the communication of constituent neurons is const…


Cited by 8 publications (13 citation statements). References 107 publications.
“…One possibility is that specialized neural clusters shall emerge when agents are trained over many tasks with some shared task variables instead of a single task [42], and then these clusters could be flexibly combined in the face of novel tasks reusing these variables, leading to the potential for holistic agents on par with architecturally modular agents in generalization. It is also suggested that imposing more realistic constraints in optimizing artificial neural networks, e.g., embedding neurons in physical and topological spaces where longer connections are more expensive, can lead to modular clusters [43]. Together, these prompt future investigations into the reconciliation of these two levels of modularity, e.g., studying the generalization of modular neural architectures after being exposed to a diverse task set and more biological constraints.…”
Section: Discussion
confidence: 99%
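
The constraint mentioned in [43] can be made concrete with a small sketch. Below is a minimal, hypothetical example in PyTorch, assuming randomly placed neurons in a unit cube and a toy regression objective (the positions, the 0.01 trade-off weight, and the task are illustrative assumptions, not the cited setup): scaling an L1 penalty by Euclidean distance makes long-range connections expensive, which is the pressure argued to produce modular clusters.

import torch

n = 64
pos = torch.rand(n, 3)                           # assumed neuron coordinates in a unit cube
D = torch.cdist(pos, pos)                        # pairwise Euclidean wiring lengths

W = torch.nn.Parameter(0.1 * torch.randn(n, n))
x, y = torch.randn(32, n), torch.randn(32, n)    # toy data standing in for a real task

opt = torch.optim.Adam([W], lr=1e-3)
for _ in range(200):
    task_loss = ((x @ W - y) ** 2).mean()        # placeholder task objective
    wiring_cost = (W.abs() * D).sum()            # long connections cost more
    loss = task_loss + 0.01 * wiring_cost        # assumed trade-off weight
    opt.zero_grad()
    loss.backward()
    opt.step()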
“…One potential remedy is to introduce more realistic constraints during optimization. For example, embedding neurons in physical and topological spaces where longer connections incur higher costs could encourage clusters to form [49]. In the future, it would be valuable to investigate how to induce networks to learn modular clusters that are comparable to architectural modules.…”
Section: Discussion
confidence: 99%
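
The same cost pressure can also be read as a pruning rule rather than a penalty. A rough NumPy sketch, removing the connections with the worst weight-to-wiring-length ratio (the ratio criterion and the 50% pruning target are assumptions for illustration, not the procedure of [49]):

import numpy as np

rng = np.random.default_rng(0)
n = 64
pos = rng.random((n, 3))                          # assumed neuron positions
D = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
W = rng.normal(scale=0.1, size=(n, n))            # stand-in for learned weights

mask = ~np.eye(n, dtype=bool)                     # ignore self-connections
ratio = np.abs(W) / np.where(D > 0, D, np.inf)    # benefit per unit wiring length
cutoff = np.quantile(ratio[mask], 0.5)            # prune the weaker half (assumed)
W[(ratio < cutoff) & mask] = 0.0                  # long, weak connections go first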
“…All data shown in the figures are based on simulations, which are described in the 'Code availability' section. The data files generated with simulations that underlie the figures are available in the CodeOcean capsule belonging to this paper (https://doi.org/10.24433/CO.3539394.v2) 72 .…”
Section: Reporting Summary
confidence: 99%
“…These enforced structural priors - or pre-optimized reservoir hyper-parameters - facilitate subsequent learning in multi-task learning contexts [45,112]. More recently, the concept of spatially-embedded recurrent neural networks was introduced [113]. These are recurrent networks with adaptive weights, confined within a 3D Euclidean space, and whose learning is constrained by biological optimization processes, like the minimization of wiring costs or the optimization of interregional communicability, in addition to the maximization of computational performance.…”
Section: The Brain as a Reservoir
confidence: 99%
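
One plausible reading of such a combined objective, sketched in PyTorch: an L1 wiring cost scaled by both Euclidean distance and weighted communicability (Crofts-Higham normalization). The exact functional form used in [113] may differ, and gamma and the distance matrix D here are assumptions.

import torch

def weighted_communicability(W):
    # C = exp(S^-1/2 |W| S^-1/2), with S the diagonal matrix of node strengths
    A = W.abs()
    s = A.sum(dim=1).clamp(min=1e-8)
    S_inv_sqrt = torch.diag(s.rsqrt())
    return torch.linalg.matrix_exp(S_inv_sqrt @ A @ S_inv_sqrt)

def se_penalty(W, D, gamma=0.01):
    # each connection is penalized by its magnitude, its wiring length,
    # and how much global communication it supports
    return gamma * (W.abs() * D * weighted_communicability(W)).sum()

# usage (assumed): loss = task_loss + se_penalty(W, D) at every training step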
“…These are recurrent networks with adaptive weights, confined within a 3D Euclidean space, and whose learning is constrained by biological optimization processes, like the minimization of wiring costs or the optimization of interregional communicability, in addition to the maximization of computational performance. When the pruning of the network is guided by these biological optimization principles, the resulting network architecture displays characteristic features of biological brain networks, including a modular structure with a small-world topology, and the emergence of functionally specialized regions that are spatially co-localized and implement an energetically-efficient, mixed-selective code [80,113]. Altogether, reservoir computing, and recurrent neural network models in general, open a world of possibilities to provide mechanistic explanations for the way computations take place in real brain networks.…”
Section: The Brain as a Reservoir
confidence: 99%
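
The features listed above can be quantified on a trained network by thresholding the learned weights into a graph. A minimal networkx sketch on a random stand-in matrix (the 50-node size and the 90th-percentile threshold are arbitrary assumptions):

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
W = np.abs(rng.normal(size=(50, 50)))              # stand-in for learned |weights|
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

A = (W >= np.quantile(W[W > 0], 0.9)).astype(int)  # keep the strongest ~10% of edges
G = nx.from_numpy_array(A)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

communities = nx.community.greedy_modularity_communities(G)
Q = nx.community.modularity(G, communities)        # high Q -> modular structure
sigma = nx.sigma(G, niter=5, nrand=3)              # sigma > 1 -> small-world topology
print(f"modularity Q = {Q:.2f}, small-world sigma = {sigma:.2f}")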