2010
DOI: 10.1162/neco.2010.05-08-795

Role of Homeostasis in Learning Sparse Representations

Abstract: Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, where a relatively small number of neurons are simultaneously active. In…

Cited by 44 publications (53 citation statements)
References 52 publications (106 reference statements)
“…However, resulting dictionaries vary qualitatively among these schemes and it was unclear which algorithm is the most efficient and what was the individual role of the different mechanisms that constitute SHL schemes. At the learning level, we have shown that the homeostasis mechanism had a great influence on the qualitative distribution of learned filters (Perrinet, 2010).…”
Section: Results: Efficiency of Different Learning Strategies
confidence: 96%
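The homeostatic regulation discussed in this excerpt can be illustrated with a minimal sketch. The mechanism studied in Perrinet (2010) equalizes how often each coefficient is selected across filters; the simpler multiplicative-gain rule below is only an illustrative stand-in, and the names gains, usage, and eta_homeo are hypothetical.

import numpy as np

def update_homeostatic_gains(gains, coeffs, eta_homeo=0.01):
    """Equalize how often each dictionary element is selected by the
    sparse coder, so that no filter dominates learning (a simplified
    stand-in for the homeostasis mechanism studied in Perrinet, 2010).

    gains  : (n_atoms,) current gain applied to each element's response
    coeffs : (n_atoms, n_samples) sparse coefficients from the coder
    """
    # Fraction of samples in which each element was active.
    usage = np.mean(np.abs(coeffs) > 0, axis=1)
    target = usage.mean()  # equal usage across all elements
    # Over-used elements get their gain lowered, rarely used ones get
    # boosted, keeping the competition during sparse coding fair.
    gains *= np.exp(-eta_homeo * (usage - target))
    return gains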
“…Indeed, given a sparse coding strategy that optimizes any representation efficiency cost as defined above, we may derive an unsupervised learning model by optimizing the dictionary Φ over natural scenes. On the one hand, the flexibility in the definition of the sparseness cost leads to a wide variety of proposed sparse coding solutions (for a review, see Pece, 2002) such as numerical optimization (Olshausen and Field, 1997), non-negative matrix factorization (Lee and Seung, 1999; Ranzato et al., 2007) or Matching Pursuit (Perrinet et al., 2004; Smith and Lewicki, 2006; Rehn and Sommer, 2007; Perrinet, 2010). They are all derived from correlation-based inhibition since this is necessary to remove redundancies from the linear representation.…”
Section: Learning To Be Sparse: The SparseNet Algorithm
confidence: 99%
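As a companion to this excerpt, here is a minimal sketch of the sparse coding plus dictionary learning loop it describes: code the input sparsely, then take a Hebbian-style gradient step on the dictionary Φ. This is not the SparseNet implementation of Olshausen and Field (1997); the crude soft-threshold coder, the threshold, the learning rate, and the function names are illustrative assumptions.

import numpy as np

def sparse_code(Phi, X, threshold=0.1):
    """Crude one-step sparse coder: project onto the dictionary and
    soft-threshold, keeping only strongly responding atoms."""
    A = Phi.T @ X                       # linear responses (n_atoms, n_samples)
    return np.sign(A) * np.maximum(np.abs(A) - threshold, 0.0)

def learn_dictionary(X, n_atoms=64, n_iter=500, eta=0.05, seed=None):
    """Alternate sparse coding and a gradient step on Phi so that the
    dictionary adapts to the statistics of the inputs X (e.g., image
    patches of shape (n_pixels, n_samples))."""
    rng = np.random.default_rng(seed)
    Phi = rng.standard_normal((X.shape[0], n_atoms))
    Phi /= np.linalg.norm(Phi, axis=0)
    for _ in range(n_iter):
        A = sparse_code(Phi, X)
        residual = X - Phi @ A                    # reconstruction error
        Phi += eta * residual @ A.T / X.shape[1]  # Hebbian-style update
        Phi /= np.linalg.norm(Phi, axis=0)        # keep atoms unit-norm
    return Phi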
“…When the PC/BC algorithm (with appropriate learning rules) is trained on natural images, it learns a dictionary of basis vectors (i.e., synaptic weights) that resemble the RFs of V1 cells (Spratling, 2012b). Many other algorithms, when trained on natural images, have also been shown to be able to learn basis sets that resemble the RFs of cells in primary visual cortex (e.g., Bell and Sejnowski, 1997; Falconbridge et al., 2006; Hamker and Wiltschut, 2007; Harpur, 1997; Hoyer, 2003, 2004; Hoyer and Hyvärinen, 2000; Jehee and Ballard, 2009; Lücke, 2009; Olshausen and Field, 1996; Perrinet, 2010; Ranzato et al., 2007; Rehn and Sommer, 2007; van Hateren and van der Schaaf, 1998; Weber and Triesch, 2008; Wiltschut and Hamker, 2009). A common feature of all these algorithms is that the learnt representation is sparse.…”
Section: Discussion
confidence: 99%
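The "sparse representation" shared by the algorithms listed above can be quantified directly; the sketch below computes two common proxies, the fraction of near-zero coefficients and the excess kurtosis of the coefficient distribution. The tolerance value and the function name are arbitrary assumptions.

import numpy as np

def sparseness_summary(coeffs, tol=1e-3):
    """Two simple indicators of how sparse a coefficient matrix is:
    a high fraction of (near-)zero entries, and a strongly peaked,
    heavy-tailed distribution (large positive excess kurtosis)."""
    a = np.ravel(coeffs)
    frac_zero = np.mean(np.abs(a) < tol)
    z = (a - a.mean()) / a.std()
    excess_kurtosis = np.mean(z**4) - 3.0
    return frac_zero, excess_kurtosis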
“…Globally, this procedure gives us a sequential algorithm for reconstructing the signal using the list of sources (filters with coefficients), which greedily optimizes the ℓ0 pseudo-norm (i.e., achieves a relatively sparse representation given the stopping criterion). The procedure is known as the Matching Pursuit (MP) algorithm [9], which has been shown to generate good approximations for natural images [14]. For this work we made two minor improvements to this method: First, we took advantage of the response of the filters as complex numbers.…”
Section: Supplementary Material: Sparse Coding Algorithm
confidence: 99%
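A minimal sketch of the standard Matching Pursuit loop described in this excerpt: greedily pick the best-matching filter, subtract its contribution from the residual, and repeat until a stopping criterion is met. Real-valued filters are assumed here; the complex-valued refinement mentioned in the quote is not included, and the fixed number of selected atoms used as a stopping rule is an assumption.

import numpy as np

def matching_pursuit(Phi, x, n_atoms_max=10):
    """Greedy sparse approximation of signal x over dictionary Phi
    (columns assumed unit-norm). Returns the coefficient vector."""
    residual = x.astype(float).copy()
    coeffs = np.zeros(Phi.shape[1])
    for _ in range(n_atoms_max):          # stopping criterion: fixed atom budget
        responses = Phi.T @ residual      # correlation of each filter with residual
        best = np.argmax(np.abs(responses))
        coeffs[best] += responses[best]   # accumulate the chosen coefficient
        residual -= responses[best] * Phi[:, best]  # remove its contribution
    return coeffs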