2022
DOI: 10.1101/2022.08.15.503870
Preprint

Flexible multitask computation in recurrent networks utilizes shared dynamical motifs

Abstract: Flexible computation is a hallmark of intelligent behavior. Yet, little is known about how neural networks contextually reconfigure for different computations. Humans are able to perform a new task without extensive training, presumably through the composition of elementary processes that were previously learned. Cognitive scientists have long hypothesized the possibility of a compositional neural code, where complex neural computations are made up of constituent components; however, the neural substrate under…

Cited by 41 publications (26 citation statements: 0 supporting, 26 mentioning, 0 contrasting)
References 51 publications

Citation statements:
“…Indeed, in a companion paper, we use the same constraints, along with formalising structure/path-integration using group and representation theory, to mathematically understand why grid cells look like grid cells (Dorrell et al, 2022). Similarly, our current understanding is limited to the optimal solution for factorised representations, but we anticipate similar ideas will be applicable to neural dynamics (Driscoll et al, 2022).…”
Section: Discussion (mentioning)
confidence: 99%
“…Non-negativity. Recent work on multitask learning in recurrent neural networks (RNNs) (Yang et al, 2019; Driscoll et al, 2022) demonstrated that neural populations with a non-negative activation function partition themselves into task-specific modules (Driscoll et al, 2022). Non-negativity is also important in obtaining hexagonal, not square, grid cells (Dordek et al, 2016; Whittington et al, 2021b; Sorscher et al, 2019; …).…”
Section: Disentangling In Brains (mentioning)
confidence: 99%
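The module-partitioning result quoted above hinges on the activation function being non-negative. As a rough illustration (not the cited papers' code; all names here are hypothetical), a minimal Python sketch of a ReLU-activated RNN step, plus a normalized task-variance measure in the spirit of Yang et al. (2019) that is typically used to group units into task-specific clusters:

import numpy as np

def rnn_step(h, x, W_rec, W_in, b):
    # One step of a vanilla RNN with a non-negative (ReLU) activation,
    # so unit activity can never go below zero.
    return np.maximum(0.0, W_rec @ h + W_in @ x + b)

def normalized_task_variance(rates_by_task):
    # rates_by_task: {task_name: array of shape (time, trials, units)}.
    # Per-unit activity variance within each task, normalized across tasks;
    # units whose variance concentrates in one task suggest a task module.
    tv = np.stack([r.var(axis=(0, 1)) for r in rates_by_task.values()])
    return tv / (tv.sum(axis=0, keepdims=True) + 1e-12)  # (tasks, units)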
“…While a non-rotational solution may also be possible, constraints relevant to neurobiology (e.g. metabolic efficiency or the dimensionality of outputs) may favor rotational solutions [103, 128, 117, 129]. Notably, we used the generalized idea of rotational transformation during the delay to identify an additional neural prediction distinguishing RNN variants (mean angle change, Fig.…”
Section: Discussion (mentioning)
confidence: 99%
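One plausible way to quantify the "mean angle change" mentioned in this excerpt (an assumed reading of the measure, not the paper's exact definition) is to project the delay-period population trajectory onto a 2-D plane, e.g. its top two principal components, and average the per-step rotation:

import numpy as np

def mean_angle_change(traj_2d):
    # traj_2d: delay-period trajectory projected onto a 2-D plane,
    # shape (time, 2). Returns the mean per-step rotation in radians;
    # a rotational solution yields a consistently non-zero value.
    ang = np.unwrap(np.arctan2(traj_2d[:, 1], traj_2d[:, 0]))
    return np.diff(ang).mean()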
“…Given these clues, we next sought to identify the underlying implementation explicitly. Importantly, prior work has found that, in trained RNNs, analysis of network-level neural dynamics with respect to fixed points (FPs) can identify dynamical components that have specific functional roles [92, 115, 116, 117] and are jointly sufficient to perform cognitive tasks.…”
Section: A Simple Dynamic and Geometry Implementing Transitive Compar... (mentioning)
confidence: 99%
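The fixed-point analysis referenced here is commonly carried out by minimizing the network's "speed", the approach popularized by Sussillo and Barak (2013). A minimal sketch, assuming a vanilla tanh RNN (the cited works' architectures may differ):

import numpy as np

def find_fixed_point(h0, W_rec, b, lr=0.05, steps=10_000, tol=1e-10):
    # Gradient descent on the speed q(h) = 0.5 * ||F(h) - h||^2 for
    # F(h) = tanh(W_rec @ h + b); points with q ~ 0 are fixed points.
    h = h0.astype(float)
    I = np.eye(h.size)
    for _ in range(steps):
        F = np.tanh(W_rec @ h + b)
        r = F - h                             # residual F(h) - h
        if 0.5 * (r @ r) < tol:               # slow enough: call it fixed
            break
        J = (1.0 - F**2)[:, None] * W_rec     # Jacobian dF/dh
        h -= lr * (J - I).T @ r               # gradient of q w.r.t. h
    return h

Linearizing the dynamics around each recovered point (eigenvalues of J - I) then indicates whether it behaves as an attractor, a saddle, or part of a line attractor, which is how such points are assigned functional roles.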
“…On each trial, the model receives natural language instructions for the present task embedded through a transformer architecture pre-trained on one of several natural language processing objectives. Both the type of tasks we use and the neural network modeling of such tasks have a rich history in the experimental and computational neuroscience literature respectively [19, 20, 21, 22, 23, 24, 25].…”
Section: Introduction (mentioning)
confidence: 99%
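The instruction-embedding setup described in this excerpt can be sketched as follows; the specific checkpoint ("bert-base-uncased") and mean-pooling are illustrative assumptions, not the cited model's exact configuration:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def embed_instruction(text):
    # Encode a natural-language task instruction and mean-pool the token
    # embeddings into one vector, usable as contextual input to an RNN.
    batch = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**batch).last_hidden_state  # (1, n_tokens, d)
    return hidden.mean(dim=1).squeeze(0)             # (d,)

# e.g. context = embed_instruction("respond in the direction opposite the stimulus")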