Preprint (2021)
DOI: 10.1101/2021.04.23.441128

Rich and lazy learning of task representations in brains and neural networks

Abstract: How do neural populations code for multiple, potentially conflicting tasks? Here, we used computational simulations involving neural networks to define “lazy” and “rich” coding solutions to this multitasking problem, which trade off learning speed for robustness. During lazy learning the input dimensionality is expanded by random projections to the network hidden layer, whereas in rich learning hidden units acquire structured representations that privilege relevant over irrelevant features. For context-depende…


Cited by 25 publications (35 citation statements)
References 44 publications

Citation statements (ordered by relevance):
“…Synaptic weights were initialised from a zero-centered Gaussian distribution with standard deviation σ = g · √(1/fan_in), where g = 0.025 and g = 1 in the hidden and readout layers respectively. The hidden layer weights were initialised to small values to encourage a low-dimensional ("rich") solution [32]. We employed a training procedure very similar to that used for human subjects.…”
Section: Neural Network Simulations
confidence: 99%
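As a rough illustration of the initialization scheme described in this excerpt, the sketch below draws each layer's weights from a zero-centered Gaussian with σ = g · √(1/fan_in), using g = 0.025 for the hidden layer and g = 1 for the readout layer. The layer sizes and the NumPy implementation are assumptions for illustration, not details taken from the cited paper.

```python
import numpy as np

def init_weights(fan_in, fan_out, gain, rng):
    """Zero-centered Gaussian init with sigma = gain * sqrt(1 / fan_in)."""
    sigma = gain * np.sqrt(1.0 / fan_in)
    return rng.normal(loc=0.0, scale=sigma, size=(fan_in, fan_out))

rng = np.random.default_rng(0)

# Hypothetical layer sizes; the excerpt does not specify them.
n_inputs, n_hidden, n_outputs = 10, 100, 2

# Small gain in the hidden layer encourages a low-dimensional ("rich") solution.
W_hidden = init_weights(n_inputs, n_hidden, gain=0.025, rng=rng)
# Unit gain in the readout layer.
W_readout = init_weights(n_hidden, n_outputs, gain=1.0, rng=rng)
```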
“…The representations learned by an ANN can depend strongly on the training regime. Prior research has shown that small alterations to weight initialization parameters can greatly impact the structure of the learned hidden representations in ANNs 21,22. Specifically, those studies found that during a "rich" training regime (in which network initializations had small weight variances), ANNs learned lower-dimensional and structured representations.…”
Section: Compression-then-expansion of Task Representations in a Feedforward ANN Emerges During Rich Training
confidence: 99%
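The excerpt does not say how the dimensionality of the learned hidden representations was measured; one common choice is the participation ratio of the eigenvalues of the activation covariance matrix, sketched below purely as a hypothetical illustration (the activation matrix here is random placeholder data, not output from any of the cited networks).

```python
import numpy as np

def participation_ratio(activations):
    """Effective dimensionality of hidden activations:
    PR = (sum of eigenvalues)^2 / (sum of squared eigenvalues)
    of the activation covariance matrix."""
    centered = activations - activations.mean(axis=0, keepdims=True)
    cov = np.cov(centered, rowvar=False)
    eigvals = np.linalg.eigvalsh(cov)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negative values
    return eigvals.sum() ** 2 / np.sum(eigvals ** 2)

# Placeholder activations (trials x hidden units); in practice these would come
# from a trained network under the rich or lazy regime.
rng = np.random.default_rng(1)
acts = rng.normal(size=(500, 100))
print(participation_ratio(acts))
```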
“…Next, we characterized properties of the learned ANN weights. In line with previous work, we first calculated the Frobenius norm of weights under different weight initializations 21.…”
Section: Richly Trained ANNs Learn Hierarchical Representational Transformations
confidence: 99%
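For reference, the Frobenius norm mentioned here is simply the square root of the sum of squared weight entries. A minimal sketch, with an arbitrary random matrix standing in for learned weights:

```python
import numpy as np

rng = np.random.default_rng(2)
W = rng.normal(scale=0.025, size=(10, 100))  # stand-in for learned hidden weights

# Frobenius norm: sqrt of the sum of squared entries.
frob = np.linalg.norm(W, ord='fro')
print(frob)
```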
“…Thus, the identity and number of high-variance dimensions will likely vary across tasks, as will the basic response features captured by those dimensions. This possibility is supported by network models that perform multiple tasks (Duncker et al., 2020; Flesch et al., 2021; Logiaco et al., 2019) or subtasks (Zimnik and Churchland, 2021). When different tasks require very different dynamics, a very natural way to 'switch' dynamics is to alter the occupied subspace.…”
Section: Task-specific Subspaces
confidence: 99%