2022
DOI: 10.1038/s41467-022-35659-7
Efficient neural codes naturally emerge through gradient descent learning

Abstract: Human sensory systems are more sensitive to common features in the environment than uncommon features. For example, small deviations from the more frequently encountered horizontal orientations can be more easily detected than small deviations from the less frequent diagonal ones. Here we find that artificial neural networks trained to recognize objects also have patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradi…

Cited by 18 publications (14 citation statements)
References 64 publications (85 reference statements)
“…This explains why networks trained with different random initializations end up with similar high-variance principal components but different low-variance components, and it explains why measures of network similarity appear to be more meaningful when they are weighted by variance [51, 56]. Furthermore, these empirical observations are consistent with theories of deep learning which argue that networks progressively learn the principal components of their training data and tasks, with the number of learned components increasing as a function of task and data complexity [41, 42, 36, 7, 76, 56].…”
Section: Results
confidence: 56%
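The claim above, that gradient descent progressively picks out the high-variance principal components of the training data, can be illustrated with a minimal linear autoencoder. The sketch below is not code from the cited works; the data, dimensions, learning rate, and variable names are all illustrative assumptions:

```python
# Illustrative sketch (not the paper's code): gradient descent on a rank-k
# linear autoencoder recovers the top-k principal components of its data.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: k high-variance directions embedded in d dimensions, plus noise.
n, d, k = 2000, 20, 3
basis = np.linalg.qr(rng.normal(size=(d, k)))[0]      # k orthonormal directions
scales = np.array([5.0, 3.0, 2.0])                    # std dev along each direction
X = (rng.normal(size=(n, k)) * scales) @ basis.T
X += 0.1 * rng.normal(size=(n, d))
X -= X.mean(axis=0)

C = X.T @ X / n                                       # data covariance

# Gradient descent on L(W) = 0.5/n * ||X - X W W^T||_F^2.
W = 0.01 * rng.normal(size=(d, k))
lr = 5e-4
for _ in range(5000):
    G = -2 * C @ W + C @ W @ (W.T @ W) + W @ (W.T @ C @ W)
    W -= lr * G

# Principal angles between the learned subspace and the top-k PCA subspace:
# singular values near 1 mean the two subspaces coincide.
evecs = np.linalg.eigh(C)[1][:, ::-1][:, :k]          # leading eigenvectors of C
overlap = np.linalg.svd(evecs.T @ np.linalg.qr(W)[0], compute_uv=False)
print(np.round(overlap, 3))
```

In this toy setting the learned weight matrix spans (essentially) the same subspace as the leading eigenvectors of the covariance, and directions with larger variance are learned earlier, which is the dynamic the cited theories describe.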
“…For instance, if the visual system more frequently takes horizontal and vertical orientations as input, it could become more sensitive to those orientations. Thus, the visual system could learn to 'efficiently encode' information in a way that would result in the exact sorts of oblique biases we have described here (see, e.g., Benjamin et al, 2022). If this is true, that these effects arise from the input of the natural environment, it remains unclear why these biases would manifest in modalities separate from that input.…”
Section: Interim Discussion
confidence: 86%
“…Previous studies have shown that deep neural networks implicitly learn to encode stimulus features as predicted by efficient coding (Benjamin et al, 2022). Here we use "PredNet", a recurrent neural network designed and trained to predict the next frame in a video sequence (Lotter et al, 2016), to test to what degree sensory representations can dynamically change depending on the temporal input statistics.…”
Section: Predictive Neural Network
confidence: 99%