Preprint (2020)
DOI: 10.1101/2020.10.23.352443
Learning sparse codes from compressed representations with biologically plausible local wiring constraints

Abstract: Sparse coding is an important method for unsupervised learning of task-independent features in theoretical neuroscience models of neural coding. While a number of algorithms exist to learn these representations from the statistics of a dataset, they largely ignore the information bottlenecks present in fiber pathways connecting cortical areas. For example, the visual pathway has many fewer neurons transmitting visual information to cortex than the number of photoreceptors. Both empirical and analytic results h…

Cited by 5 publications (9 citation statements); references 64 publications.
“…Random compression matrices are known to have optimal properties; however, in many cases structured randomness is more realistic. Recent work has shown that structured random projections with local wiring constraints (in one dimension) were compatible with dictionary learning [25], supporting previous empirical results [5]. Our work shows that structured random receptive fields are equivalent to employing a wavelet dictionary and dense Gaussian projection.…”
Section: Connections To Compressive Sensing (supporting)
confidence: 88%
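The statement above contrasts dense Gaussian compression with structured random projections that obey a local wiring constraint. The following is a minimal illustrative sketch, not the authors' implementation: it builds a dense Gaussian measurement matrix and a banded random matrix whose rows only contact a contiguous 1-D neighborhood of inputs; the dimensions and bandwidth are assumptions chosen for clarity.

```python
# Illustrative sketch (not the cited implementation): a dense Gaussian
# measurement matrix versus a banded, locally constrained random matrix.
# n_in, n_out, and the bandwidth are assumed values for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_in = 256   # e.g. number of photoreceptors / input pixels
n_out = 64   # fewer fibers available in the bottleneck

# Dense Gaussian projection: every output unit sees every input.
phi_dense = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_out, n_in))

# Locally constrained projection: each output unit samples only a
# contiguous 1-D neighborhood of inputs (nonzero entries form a band).
bandwidth = 32
phi_local = np.zeros((n_out, n_in))
centers = np.linspace(0, n_in, n_out, endpoint=False).astype(int)
for i, c in enumerate(centers):
    lo = max(0, c - bandwidth // 2)
    hi = min(n_in, c + bandwidth // 2)
    phi_local[i, lo:hi] = rng.normal(0.0, 1.0 / np.sqrt(hi - lo), size=hi - lo)

x = rng.normal(size=n_in)            # a toy input signal
y_dense = phi_dense @ x              # compressed measurements, dense wiring
y_local = phi_local @ x              # compressed measurements, local wiring
print(y_dense.shape, y_local.shape)  # both (64,)
```

Both matrices map a high-dimensional signal to far fewer measurements; the banded variant simply restricts which inputs each output fiber may contact.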
“…We see similar results across diverse hidden layer widths and learning rates (Figs. 25-28), with the benefits most evident for wider networks and smaller learning rates. Furthermore, the structured weights show similar results when trained for 10,000 epochs (rate 0.1; 1,000 neurons; not shown) and with other optimizers like minibatch Stochastic Gradient Descent (SGD) and ADAM (batch size 256, rate 0.1; 1,000 neurons; not shown).…”
Section: Networks Train Faster When Initialized With Structured Weights (supporting)
confidence: 58%
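As a rough illustration of the comparison described above, the sketch below initializes a hidden layer either with unstructured Gaussian weights or with spatially localized ("structured") random receptive fields. The input size, hidden width, and Gaussian-envelope receptive-field model are assumptions for illustration and are not taken from the cited work.

```python
# Minimal sketch (assumptions throughout): structured, localized random
# initialization of a hidden layer versus an unstructured Gaussian baseline.
# The exact structured ensemble used in the citing work is not reproduced here.
import numpy as np

rng = np.random.default_rng(1)

n_in, n_hidden = 784, 1000   # e.g. flattened 28x28 images, 1,000 hidden units

# Unstructured baseline: i.i.d. Gaussian weights.
W_unstructured = rng.normal(0.0, 1.0 / np.sqrt(n_in), size=(n_hidden, n_in))

# Structured initialization: each hidden unit gets a spatially localized
# random receptive field (Gaussian envelope over the 28x28 input grid).
side = 28
yy, xx = np.mgrid[0:side, 0:side]
W_structured = np.empty((n_hidden, n_in))
for i in range(n_hidden):
    cy, cx = rng.uniform(0, side, size=2)   # random receptive-field center
    envelope = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 3.0 ** 2))
    W_structured[i] = (envelope * rng.normal(size=(side, side))).ravel()
    W_structured[i] /= np.linalg.norm(W_structured[i]) + 1e-12

# Either matrix would then serve as the first-layer weights of a network
# trained with minibatch SGD or Adam, and the learning curves compared.
```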
“…, of the inferred codes [15,13,7]. This quantity is minimized when the features have high joint entropy while being independent of one another, denoting a reduction in redundancy.…”
Section: Linear Sparse Coding Performance (mentioning)
confidence: 99%
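The excerpt above describes an information-theoretic redundancy measure over the inferred codes without giving its formula. One standard quantity consistent with the description is the total correlation (multi-information), TC(z) = Σ_i H(z_i) − H(z), which vanishes when the code features are independent. The sketch below estimates it for quantized codes and is offered only as an assumed, illustrative stand-in for whatever measure the cited works actually use.

```python
# Hedged sketch: total correlation as one possible redundancy measure for a
# set of inferred code features. Whether this is the exact quantity used in
# the cited works is not established by the excerpt.
import numpy as np

def entropy(counts):
    """Shannon entropy (in bits) of an empirical distribution given by counts."""
    p = counts / counts.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def total_correlation(codes):
    """codes: (n_samples, n_features) array of small non-negative integer codes."""
    n_samples, n_features = codes.shape
    # Sum of marginal entropies, one per feature.
    h_marginals = sum(
        entropy(np.bincount(codes[:, j])) for j in range(n_features)
    )
    # Joint entropy via counting unique rows (feasible only for few features).
    _, joint_counts = np.unique(codes, axis=0, return_counts=True)
    h_joint = entropy(joint_counts)
    return h_marginals - h_joint

rng = np.random.default_rng(2)
codes = (rng.random((5000, 4)) < 0.1).astype(int)  # sparse, independent binary codes
print(total_correlation(codes))                    # near 0 for independent features
```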