2008
DOI: 10.1162/neco.2008.03-07-486
Sparse Coding via Thresholding and Local Competition in Neural Circuits

Abstract: While evidence indicates that neural systems may employ sparse approximations to represent sensed stimuli, the mechanisms underlying this ability are not understood. We describe locally competitive algorithms (LCAs) that solve a family of sparse coding problems by minimizing a weighted combination of mean-squared reconstruction error and a coefficient cost function. LCAs are designed to be implemented in a dynamical system composed of many neuron-like elements operating in parallel. These algorithms use thresholding…
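The dynamics summarized in the abstract admit a compact sketch. Below is a minimal NumPy implementation of the LCA with a soft-thresholding activation, the activation corresponding to an ℓ1 coefficient cost; the dictionary Phi, the threshold lam, the time constant tau, and the toy signal are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(u, lam):
    """Soft-thresholding activation: zero below lam, shrink toward zero above."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(s, Phi, lam=0.1, tau=10.0, dt=1.0, n_steps=200):
    """Integrate the LCA dynamics
        du/dt = (1/tau) * (b - u - (Phi^T Phi - I) a),   a = T_lam(u),
    where b = Phi^T s drives each node and the (Phi^T Phi - I) term
    implements lateral inhibition between nodes with overlapping fields."""
    b = Phi.T @ s
    G = Phi.T @ Phi - np.eye(Phi.shape[1])   # lateral inhibition weights
    u = np.zeros(Phi.shape[1])               # internal (membrane) states
    for _ in range(n_steps):
        a = soft_threshold(u, lam)           # only above-threshold nodes inhibit others
        u += (dt / tau) * (b - u - G @ a)
    return soft_threshold(u, lam)

# Toy usage (assumed data): overcomplete random dictionary, 3-sparse signal.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((64, 128))
Phi /= np.linalg.norm(Phi, axis=0)           # unit-norm dictionary elements
a_true = np.zeros(128)
a_true[[3, 40, 99]] = [1.0, -0.5, 0.8]
a_hat = lca(Phi @ a_true, Phi)
print("active nodes:", np.count_nonzero(a_hat))
```

With the soft threshold, the fixed points of these dynamics coincide with minimizers of the objective (1/2)‖s − Φa‖² + λ‖a‖₁, the ℓ1 instance of the mean-squared-error-plus-cost family the abstract describes.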

Cited by 358 publications (388 citation statements)
References 28 publications
“…This method can be interpreted as a special case of PSD [12,13] and sparse coding [14] when inference of features is computed in just one step. Unlike the RBM objective, which is intractable, this method can be optimized very efficiently, and it benefits from ReLU units because they naturally produce sparse features.…”
Section: Unsupervised Learning
confidence: 99%
“…as t varies from 0 to ∞, the minimizer of the above cost function traces the central path, which contains the unique optimal solution (x*, y*) of (10). So, by gradually increasing the value of t, one can update the values of (x, y) to get closer to the optimal solution.…”
Section: Interior Point Methods with Barrier Function
confidence: 99%
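The central-path idea in this quote is easy to illustrate, though the cited cost function and problem (10) are not reproduced here. The sketch below applies a generic log-barrier method to a small linear program, not the cited formulation; the problem data (c, A, b), the schedule (t, mu), and the plain backtracking gradient loop are all assumptions.

```python
import numpy as np

def central_path_lp(c, A, b, x, t=1.0, mu=10.0, n_outer=8, n_inner=500):
    """Log-barrier sketch for  min c^T x  subject to  A x <= b.
    For each t we minimize  f_t(x) = t * c^T x - sum(log(b - A x));
    as t grows, the minimizer x*(t) traces the central path toward
    the constrained optimum, mirroring the quoted description."""
    for _ in range(n_outer):
        for _ in range(n_inner):
            slack = b - A @ x                    # strictly positive while feasible
            grad = t * c + A.T @ (1.0 / slack)   # gradient of f_t
            f = t * (c @ x) - np.sum(np.log(slack))
            step = 1.0
            while True:                          # backtrack: stay feasible, descend
                x_new = x - step * grad
                s_new = b - A @ x_new
                if np.all(s_new > 0) and t * (c @ x_new) - np.sum(np.log(s_new)) < f:
                    break
                step *= 0.5
                if step < 1e-12:                 # no descent step found; keep x
                    x_new = x
                    break
            x = x_new
        t *= mu                                  # tighten the barrier, advance on the path
    return x

# Toy usage (assumed LP):  min x1 + x2  s.t.  x >= 0,  x1 + x2 >= 1.
c = np.array([1.0, 1.0])
A = np.array([[-1.0, 0.0], [0.0, -1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, -1.0])
x_opt = central_path_lp(c, A, b, x=np.array([1.0, 1.0]))  # strictly feasible start
print(np.round(x_opt, 3), "objective:", round(float(c @ x_opt), 4))
```

The standard duality-gap bound m/t (with m constraints) shrinks as t grows, which is why gradually increasing t brings x*(t) arbitrarily close to the optimum.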
“…In models for sparse coding in the primary visual cortex (V1), the receptive fields of simple cells in the input layer of V1 serve as the basis from which a sparse representation of the visual environment is constructed. Recently, a number of neurally plausible mechanisms for sparse coding were developed [1], yielding testable predictions of cortico-cortical interactions of cells within a population by inducing local competition to minimize the total number of active cells in the network. Although this algorithmic framework provides an explicit mechanism for simple cell sparse coding, many still contend that current models for sparse coding are not plausible due to the overhead required to transmit both the locations and spike rates of all active cells within the population to higher areas in the cortex.…”
confidence: 99%
“…The function of this network is still not known; however, studies of information flow in the cortex suggest that cells in these upper layers may provide a means for competition amongst neighboring columns. To obtain a sketch from our model of this hypercolumn, we employed locally competitive algorithms for sparse coding [1] within each orientation minicolumn; each of these approximations is then fed forward to the fusion network, which solves a sparse approximation problem to determine the minimum number of minicolumns needed to effectively represent the stimulus. We call the sparse weight vector that emerges from this computation our population sketch.…”
confidence: 99%
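The two-stage architecture described in this quote (per-minicolumn sparse coding, then a fusion-stage sparse approximation over the minicolumn outputs) can be sketched loosely. This is not the cited model: ISTA stands in for the LCA dynamics within each minicolumn, the fusion network is modeled as a second sparse coding problem over per-minicolumn reconstructions, and every dictionary, size, and threshold (minicolumns, lam, lam_fuse) is an assumption.

```python
import numpy as np

def soft_threshold(u, lam):
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def ista(s, Phi, lam=0.1, n_steps=300):
    """Plain iterative soft-thresholding; a stand-in for per-minicolumn LCA."""
    L = np.linalg.norm(Phi, 2) ** 2          # Lipschitz constant of the gradient
    a = np.zeros(Phi.shape[1])
    for _ in range(n_steps):
        a = soft_threshold(a + Phi.T @ (s - Phi @ a) / L, lam / L)
    return a

def population_sketch(s, minicolumns, lam=0.1, lam_fuse=0.05):
    """Stage 1: sparse-code the stimulus within each orientation minicolumn.
    Stage 2: solve a second sparse problem over the per-minicolumn
    reconstructions to pick the fewest minicolumns that explain s."""
    recons = [Phi @ ista(s, Phi, lam) for Phi in minicolumns]
    R = np.stack(recons, axis=1)             # columns = minicolumn reconstructions
    return ista(s, R, lam_fuse)              # sparse weights over minicolumns

# Toy usage (assumed data): 8 minicolumns; stimulus lives in minicolumn 2.
rng = np.random.default_rng(1)
cols = [rng.standard_normal((32, 16)) for _ in range(8)]
cols = [P / np.linalg.norm(P, axis=0) for P in cols]
s = cols[2] @ (np.eye(16)[0] * 2.0)
print(np.round(population_sketch(s, cols), 2))
```

The nonzero entries of the returned weight vector play the role of the population sketch: they mark which minicolumns the fusion stage keeps.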