Abstract: Neurons in the input layer of primary visual cortex in primates develop edge-like receptive fields. One approach to understanding the emergence of this response is to state that neural activity has to efficiently represent sensory data with respect to the statistics of natural scenes. Furthermore, it is believed that such an efficient coding is achieved using a competition across neurons so as to generate a sparse representation, that is, one in which a relatively small number of neurons are simultaneously active. In…
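As a toy illustration of the notion of sparseness used throughout the excerpts below (not code from the paper itself), the ℓ0 pseudo-norm of an activity vector simply counts how many units are simultaneously active; the hard threshold standing in for competition is an arbitrary choice:

```python
# Minimal sketch: measuring the sparseness of a population response.
import numpy as np

rng = np.random.default_rng(0)
activity = rng.standard_normal(1000)      # hypothetical neural activations
activity[np.abs(activity) < 2.0] = 0.0    # competition silences weak responses

l0 = np.count_nonzero(activity)           # ℓ0 pseudo-norm: number of active units
print(f"{l0} of {activity.size} units active ({100 * l0 / activity.size:.1f}%)")
```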
“…However, the resulting dictionaries vary qualitatively among these schemes, and it was unclear which algorithm is the most efficient and what the individual role of the different mechanisms that constitute SHL schemes was. At the learning level, we have shown that the homeostasis mechanism had a great influence on the qualitative distribution of learned filters (Perrinet, 2010).…”
Section: Results: Efficiency Of Different Learning Strategies (mentioning)
confidence: 96%
“…Indeed, given a sparse coding strategy that optimizes any representation efficiency cost as defined above, we may derive an unsupervised learning model by optimizing the dictionary Φ over natural scenes. On the one hand, the flexibility in the definition of the sparseness cost leads to a wide variety of proposed sparse coding solutions (for a review, see Pece, 2002), such as numerical optimization (Olshausen and Field, 1997), non-negative matrix factorization (Lee and Seung, 1999; Ranzato et al., 2007), or Matching Pursuit (Perrinet et al., 2004; Smith and Lewicki, 2006; Rehn and Sommer, 2007; Perrinet, 2010). They are all derived from correlation-based inhibition, since this is necessary to remove redundancies from the linear representation.…”
Section: Learning To Be Sparse: The SparseNet Algorithm (mentioning)
confidence: 99%
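The excerpt above describes the structure shared by these sparse coding solutions: alternate a coding step that uses correlation-based inhibition with an update of the dictionary Φ over natural image patches. Below is a minimal sketch of such a loop, with a greedy coder standing in for any of the cited methods; the parameter values and the plain renormalization step are illustrative assumptions, not any of the cited implementations:

```python
import numpy as np

def sparse_code(x, Phi, n_active):
    """Greedy coder: winner-take-all selection followed by
    correlation-based inhibition of the residual."""
    a = np.zeros(Phi.shape[1])
    r = x.astype(float).copy()
    for _ in range(n_active):
        c = Phi.T @ r                          # correlations with the residual
        k = int(np.argmax(np.abs(c)))          # competition across neurons
        a[k] += c[k]
        r -= c[k] * Phi[:, k]                  # remove redundancy with atom k
    return a

def learn_dictionary(patches, n_atoms=64, n_active=5, eta=0.05, n_epochs=10):
    """Unsupervised learning: sparse coding step + Hebbian dictionary update."""
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((patches.shape[1], n_atoms))
    Phi /= np.linalg.norm(Phi, axis=0)         # keep atoms unit-norm
    for _ in range(n_epochs):
        for x in patches:
            a = sparse_code(x, Phi, n_active)
            residual = x - Phi @ a
            Phi += eta * np.outer(residual, a) # descend the reconstruction error
            Phi /= np.linalg.norm(Phi, axis=0)
    return Phi
```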
“…The parameters of this homeostatic rule are of great importance for the convergence of the global algorithm. In Perrinet (2010), we derived a more general homeostasis mechanism from the optimization of representation efficiency through histogram equalization, which we will describe later (see Section 14.4.1).…”
Section: Learning To Be Sparse: The SparseNet Algorithm (mentioning)
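The histogram-equalization mechanism mentioned here can be sketched as follows: each atom maintains a running histogram of its own activation values, and competition during coding operates on the quantile (empirical CDF value) of each activation rather than its raw amplitude, so that all atoms end up being selected with comparable probability. The class below is an illustrative sketch; the bin layout, decay constant, and the rescaling of activations to [0, 1] are assumptions, not the settings of Perrinet (2010):

```python
import numpy as np

class HistogramEqualizer:
    """Running per-atom histograms used to equalize atom selection."""
    def __init__(self, n_atoms, n_bins=100, decay=0.99):
        self.edges = np.linspace(0.0, 1.0, n_bins + 1)  # assumes activations rescaled to [0, 1]
        self.counts = np.ones((n_atoms, n_bins))        # flat prior, one histogram per atom
        self.decay = decay

    def update(self, k, value):
        # Exponentially decaying histogram of atom k's activation values.
        self.counts[k] *= self.decay
        b = np.clip(np.searchsorted(self.edges, value) - 1, 0, self.counts.shape[1] - 1)
        self.counts[k, b] += 1.0

    def quantiles(self, values):
        # Map each atom's raw activation to its empirical quantile, so that
        # the winner-take-all step compares like with like across atoms.
        cdf = np.cumsum(self.counts, axis=1)
        cdf /= cdf[:, -1:]
        bins = np.clip(np.searchsorted(self.edges, values) - 1, 0, cdf.shape[1] - 1)
        return cdf[np.arange(len(values)), bins]
```

In a coding loop, the argmax would be taken over `quantiles(np.abs(Phi.T @ r))` instead of the raw correlations, and `update` would be called for the winning atom after each selection.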
“…When the PC/BC algorithm (with appropriate learning rules) is trained on natural images, it learns a dictionary of basis vectors (i.e., synaptic weights) that resemble the RFs of V1 cells (Spratling, 2012b). Many other algorithms, when trained on natural images, have also been shown to be able to learn basis sets that resemble the RFs of cells in primary visual cortex (e.g., Bell and Sejnowski, 1997; Falconbridge et al., 2006; Hamker and Wiltschut, 2007; Harpur, 1997; Hoyer, 2003, 2004; Hoyer and Hyvärinen, 2000; Jehee and Ballard, 2009; Lücke, 2009; Olshausen and Field, 1996; Perrinet, 2010; Ranzato et al., 2007; Rehn and Sommer, 2007; van Hateren and van der Schaaf, 1998; Weber and Triesch, 2008; Wiltschut and Hamker, 2009). A common feature of all these algorithms is that the learnt representation is sparse.…”
Algorithms that encode images using a sparse set of basis functions have previously been shown to explain aspects of the physiology of primary visual cortex (V1), and have been used for applications such as image compression, restoration, and classification. Here, a sparse coding algorithm that has previously been used to account for the response properties of orientation-tuned cells in primary visual cortex is applied to the task of perceptually salient boundary detection. The proposed algorithm is currently limited to using only intensity information at a single scale. However, it is shown to outperform the current state-of-the-art image segmentation method (Pb) when this method is also restricted to using the same information.
“…Globally, this procedure gives us a sequential algorithm for reconstructing the signal using the list of sources (filters with coefficients), which greedily optimizes the ℓ0 pseudo-norm (i.e., achieves a relatively sparse representation given the stopping criterion). The procedure is known as the Matching Pursuit (MP) algorithm [9], which has been shown to generate good approximations for natural images [14]. For this work we made two minor improvements to this method: first, we took advantage of the response of the filters as complex numbers.…”
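For concreteness, a bare-bones version of the greedy scheme described in this excerpt can be written as follows (the complex-valued refinement is omitted; Φ is assumed to have unit-norm columns, and a fixed number of iterations stands in for the stopping criterion):

```python
import numpy as np

def matching_pursuit(x, Phi, n_active=10):
    """Greedy MP: repeatedly pick the best-matching filter and subtract
    its contribution, yielding x ≈ Phi @ a + r with ||a||_0 <= n_active."""
    a = np.zeros(Phi.shape[1])        # sparse coefficient vector
    r = x.astype(float).copy()        # residual signal
    for _ in range(n_active):
        c = Phi.T @ r                 # response of every filter to the residual
        k = int(np.argmax(np.abs(c))) # greedy choice of the best source
        a[k] += c[k]                  # record the (filter, coefficient) pair
        r -= c[k] * Phi[:, k]         # remove its contribution
    return a, r
```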
Natural images follow statistics inherited from the structure of our physical (visual) environment. In particular, a prominent facet of this structure is that images can be described by a relatively small number of features. We designed a sparse coding algorithm biologically inspired by the architecture of the primary visual cortex. We show here that the coefficients of this representation exhibit a heavy-tailed distribution. For each image, the parameters of this distribution characterize its sparseness and vary from image to image. To investigate the role of this sparseness, we designed a new class of random textured stimuli with a controlled sparseness value, inspired by our measurements on natural images. We then provide a method to synthesize random texture images whose sparseness statistics match those of a given class of natural images, and we offer perspectives for their use in neurophysiology.
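A hedged sketch of the generative idea in this abstract: draw element coefficients from a heavy-tailed law whose shape parameter controls sparseness, and sum randomly placed oriented elements into a texture. The Gabor parameterization and the generalized-Gaussian coefficient law are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def ggd_sample(beta, rng):
    """Zero-mean generalized Gaussian, p(x) ∝ exp(-|x|**beta):
    if G ~ Gamma(1/beta, 1) then |X| = G**(1/beta)."""
    g = rng.gamma(1.0 / beta, 1.0)
    return np.sign(rng.standard_normal()) * g ** (1.0 / beta)

def gabor(size, x0, y0, theta, freq=0.15, sigma=4.0):
    """One oriented, localized element (hypothetical parameterization)."""
    y, x = np.mgrid[:size, :size].astype(float)
    xr = (x - x0) * np.cos(theta) + (y - y0) * np.sin(theta)
    yr = -(x - x0) * np.sin(theta) + (y - y0) * np.cos(theta)
    return np.exp(-(xr**2 + yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def sparse_texture(size=128, n_elements=300, beta=0.5, seed=None):
    """Random texture as a sum of oriented elements with heavy-tailed
    coefficients; beta is the knob controlling sparseness."""
    rng = np.random.default_rng(seed)
    img = np.zeros((size, size))
    for _ in range(n_elements):
        img += ggd_sample(beta, rng) * gabor(
            size, rng.uniform(0, size), rng.uniform(0, size), rng.uniform(0, np.pi))
    return img
```

Smaller values of beta yield heavier tails, so a few high-contrast elements dominate and the texture looks sparser; beta = 2 recovers Gaussian coefficients and a denser, noise-like texture.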