2018
DOI: 10.1088/1751-8121/aaa631

Role of zero synapses in unsupervised feature learning

Abstract: Synapses in real neural circuits can take discrete values, including zero (silent or potential synapses). The computational role of zero synapses in unsupervised feature learning of unlabeled noisy data is still unclear; it is therefore important to understand how the sparseness of synaptic activity is shaped during learning and how it relates to receptive field formation. Here, we formulate this kind of sparse feature learning by a statistical mechanics approach. We find that learning decreases the fraction of …
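For concreteness, the sparseness of synaptic activity that the abstract refers to can be quantified as the fraction of zero synapses. A minimal formalization follows; the symbols below are illustrative and not necessarily the paper's own notation:

\[
\rho_0 \;=\; \frac{1}{N}\sum_{i=1}^{N}\mathbb{1}\!\left[w_i = 0\right], \qquad w_i \in \{-1, 0, +1\},
\]

where the w_i are the ternary synaptic weights of a single hidden neuron and N is the input dimension. The citing statements below report that this proportion of diluted (zero) weights tends to vanish as learning proceeds.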

Cited by 9 publications (6 citation statements)
References 30 publications
“…This work revealed a continuous spontaneous symmetry breaking (SSB) transition separating a random-guess phase from a concept-formation phase at a critical value of the amount of provided samples (data size) [16], which is similar to the retarded learning phase transition observed in a generalized Hopfield model of pattern learning [17]. This conclusion is later generalized to RBM with generic priors [18,19], and synapses of ternary values [20]. However, it is still challenging to handle the case of multiple hidden neurons from the perspective of understanding the learning process as a phase transition.…”
Section: Introduction (supporting)
confidence: 56%
“…It would therefore be of great interest to characterize the learning curve theoretically in order to understand how this phase is reached. It is also interesting to mention a recent work investigating the role of the diluted weights [66] during the learning in a RBM with one hidden node. In this article, it is shown that the proportion of diluted weights tends to vanish during the learning procedure.…”
Section: Phase Diagram of the Bernoulli-Bernoulli RBM (mentioning)
confidence: 99%
“…It would therefore be of great interest to characterize the learning curve theoretically in order to understand how this phase is reached. It is also interesting to mention a recent work investigating the role of the diluted weights [65] during the learning in a RBM with one hidden node. In this article, it is shown that the proportion of diluted weights tends to vanish during the learning procedure.…”
Section: Mean-Field Approach: The Random-RBM (mentioning)
confidence: 99%
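The two statements above concern the proportion of diluted (zero) weights shrinking during learning in an RBM with a single hidden unit. As a rough illustration only, and not the cited papers' actual model or training procedure, the sketch below trains a Bernoulli-Bernoulli RBM with one hidden unit by CD-1 on noisy samples of a planted binary feature, quantizes the weights to ternary values after each epoch, and tracks the fraction of zero synapses. All names, sizes, the flip probability, and the zero-threshold `tau` are assumptions made for the example.

```python
# Hedged sketch: single-hidden-unit RBM trained with CD-1 on noisy +-1 data.
# Biases are omitted for brevity; visibles take values in {-1, +1}.
import numpy as np

rng = np.random.default_rng(0)

N, M = 100, 2000                            # input dimension, number of samples
feature = rng.choice([-1.0, 1.0], size=N)   # planted feature (hypothetical)
noise = 0.2                                 # per-pixel flip probability

# Noisy data: each sample is the feature with pixels flipped w.p. `noise`
flips = rng.random((M, N)) < noise
data = np.where(flips, -feature, feature)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

w = 0.01 * rng.standard_normal(N)   # real-valued weights of the hidden unit
lr, epochs, tau = 0.05, 30, 0.05    # learning rate, epochs, zero-threshold

for epoch in range(epochs):
    # Positive phase: P(h=1 | v) for each sample, then a sampled hidden state
    h_prob = sigmoid(data @ w)
    h_samp = (rng.random(M) < h_prob).astype(float)
    # Negative phase: reconstruct +-1 visibles, then hidden probabilities again
    v_prob = sigmoid(2.0 * np.outer(h_samp, w))          # P(v=+1 | h)
    v_samp = np.where(rng.random((M, N)) < v_prob, 1.0, -1.0)
    h_neg = sigmoid(v_samp @ w)
    # CD-1 approximation to the log-likelihood gradient: <v h>_data - <v h>_model
    grad = (data.T @ h_prob - v_samp.T @ h_neg) / M
    w += lr * grad
    # Ternary projection: weights below threshold become zero ("silent" synapses)
    w_ternary = np.where(np.abs(w) < tau, 0.0, np.sign(w))
    zero_frac = np.mean(w_ternary == 0.0)
    print(f"epoch {epoch:2d}  fraction of zero synapses = {zero_frac:.2f}")
```

Run as-is, the fraction of zero synapses starts near one (the small random initialization sits below the threshold) and falls as the weights align with the planted feature, which is the qualitative behavior the citing statements describe.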