Advances in Neural Information Processing Systems 19 (2007)
DOI: 10.7551/mitpress/7503.003.0105

Efficient sparse coding algorithms

Abstract: Sparse coding provides a class of algorithms for finding succinct representations of stimuli; given only unlabeled input data, it discovers basis functions that capture higher-level features in the data. However, finding sparse codes remains a very difficult computational problem. In this paper, we present efficient sparse coding algorithms that are based on iteratively solving two convex optimization problems: an L1-regularized least squares problem and an L2-constrained least squares problem. We propose novel algorithms to solve both of these optimization problems. Our algorithms result in a significant speedup for sparse coding, allowing us to learn larger sparse codes than possible with previously known algorithms. We apply these new algorithms to natural images and demonstrate that the inferred sparse codes exhibit end-stopping and non-classical receptive field surround suppression and, therefore, may provide a better model of V1 simple cells.
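To make the alternating scheme concrete, here is a minimal NumPy sketch of the two-step loop the abstract describes. It is not the paper's feature-sign search or Lagrange-dual solver: the L1 step is solved here by plain iterative soft-thresholding (ISTA), and the L2 constraint is enforced by projecting basis columns onto the unit ball. All function and variable names are illustrative.

```python
import numpy as np

def sparse_coding(X, k, lam=0.1, outer_iters=30, ista_iters=50, seed=0):
    """Alternate between (1) an L1-regularized least-squares update of the
    codes S and (2) an L2-constrained least-squares update of the bases B,
    for data X of shape (p, n) and k basis vectors."""
    rng = np.random.default_rng(seed)
    p, n = X.shape
    B = rng.standard_normal((p, k))
    B /= np.linalg.norm(B, axis=0)                 # start with unit-norm bases
    S = np.zeros((k, n))
    for _ in range(outer_iters):
        # (1) L1 step: ISTA on 0.5*||X - B S||_F^2 + lam*||S||_1
        L = max(np.linalg.norm(B, 2) ** 2, 1e-12)  # Lipschitz const. (sigma_max^2)
        for _ in range(ista_iters):
            S -= (B.T @ (B @ S - X)) / L           # gradient step
            S = np.sign(S) * np.maximum(np.abs(S) - lam / L, 0.0)  # shrinkage
        # (2) L2 step: least squares in B, then project columns onto unit ball
        B = X @ S.T @ np.linalg.pinv(S @ S.T)
        norms = np.linalg.norm(B, axis=0)
        B /= np.maximum(norms, 1.0)                # rescale only columns with norm > 1
    return B, S

# Toy usage: 64-dimensional data, 128 (over-complete) bases.
B, S = sparse_coding(np.random.default_rng(1).standard_normal((64, 500)), k=128)
```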

Cited by 1,046 publications (102 citation statements). References 14 publications.
“…Classical dictionary learning techniques (Olshausen and Field, 1997; Lee et al., 2007) consider a finite training set of feature maps, X = (x_1, x_2, …, x_n) in R^{p×n}. In our study, X is the set of MMS features from n surface patches of all the samples.…”
Section: Surface Feature Dimensionality Reduction
confidence: 99%
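For reference, the classical dictionary-learning objective these citing papers allude to is usually written as follows (a standard textbook formulation, not text quoted from the citing paper):

```latex
\min_{D,\,\alpha}\;
\frac{1}{2}\,\lVert X - D\alpha \rVert_F^2
+ \lambda \sum_{i=1}^{n} \lVert \alpha_i \rVert_1
\qquad \text{s.t.}\quad \lVert d_j \rVert_2 \le 1,\; j = 1,\dots,k
```

Here X ∈ R^{p×n} is the training set, D ∈ R^{p×k} the dictionary, α ∈ R^{k×n} the sparse codes, and the unit-norm constraint on each column d_j prevents the L1 penalty from being defeated by rescaling.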
“…By defining a better lower-dimensional subspace, this information loss can be limited. Sparse coding (Lee et al., 2007; Mairal et al., 2009) has been previously proposed to learn an over-complete set of basis vectors (also called a dictionary) to represent input vectors efficiently and concisely (Donoho and Elad, 2003). Sparse coding has been shown to be effective for many tasks such as image inpainting (Moody et al., 2012), image deblurring (Yin et al., 2008), super-resolution (Yang et al., 2008), classification (Mairal et al., 2009), functional brain connectivity (Lv et al., 2015, 2017), and structural morphometry analysis (Zhang et al., 2017a).…”
Section: Comparative Analysis of PASCS-MP, PASS-MP and SPHARM
confidence: 99%
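As a concrete illustration of the over-complete dictionary idea in the excerpt above, the following is a minimal scikit-learn sketch; the data are synthetic and the hyperparameters (n_components, alpha) are illustrative assumptions, not values from any of the cited studies.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning, sparse_encode

# Toy data: 500 samples of 64-dimensional features (e.g., flattened 8x8 patches).
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 64))

# Learn an over-complete dictionary: 128 atoms > 64 input dimensions.
dl = DictionaryLearning(n_components=128, alpha=1.0, max_iter=50,
                        transform_algorithm='lasso_lars', random_state=0)
codes = dl.fit_transform(X)          # sparse codes, shape (500, 128)
D = dl.components_                   # dictionary atoms, shape (128, 64)

# Encode new data against the learned dictionary.
codes_new = sparse_encode(X[:10], D, algorithm='lasso_lars', alpha=1.0)
print(codes.shape, D.shape, (codes != 0).mean())  # fraction of nonzero coefficients
```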
“…PCA and factor analysis aim to find a projection that maximizes variance across data points and among the target dimensions, respectively (Ramashini et al., 2019). Sparse coding, by contrast, learns a dictionary such that each input can be decomposed into a linear combination of dictionary elements weighted by a sparse code (Lee et al., 2007). When projecting to a lower number of dimensions, random projection and multidimensional scaling aim at preserving the geometric structure of the data points (Dasgupta & Freund, 2008).…”
Section: Unsupervised Representation Learning
confidence: 99%
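The contrast drawn above between variance-maximizing and geometry-preserving projections can be seen in a small experiment. This sketch uses scikit-learn and SciPy on synthetic isotropic data, so the exact numbers are illustrative only.

```python
import numpy as np
from scipy.spatial.distance import pdist
from sklearn.decomposition import PCA
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1000))               # high-dimensional points

X_pca = PCA(n_components=50).fit_transform(X)      # variance-maximizing axes
X_rp = GaussianRandomProjection(n_components=50,   # distance-preserving (Johnson-Lindenstrauss)
                                random_state=0).fit_transform(X)

def distortion(A, B):
    """Mean relative change in pairwise distances after projection."""
    da, db = pdist(A), pdist(B)
    return np.abs(db / da - 1.0).mean()

# On isotropic data, PCA to 50 of 1000 dims shrinks distances sharply,
# while random projection keeps them approximately intact.
print("PCA distortion:", distortion(X, X_pca))
print("Random projection distortion:", distortion(X, X_rp))
```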
“…A similar effect can also be achieved by penalizing the sensitivity of the representation-layer outputs to variations in the input, which is referred to as a contractive penalty (Rifai et al., 2011). Furthermore, self-supervised training has been applied effectively for representation learning (Raina et al., 2007). In a self-supervised setting, the auto-encoder is trained to map input data to a transformed version of it at the output, with the transformed target computed through manual processing techniques applied to the input data.…”
Section: Unsupervised Representation Learning
confidence: 99%
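To ground the self-supervised auto-encoder recipe described in this last excerpt, here is a minimal PyTorch sketch using additive Gaussian noise as the manual transformation; here the network receives the transformed (corrupted) copy and reconstructs the original, the standard denoising variant of this idea. The architecture, noise level, and training length are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal denoising auto-encoder: the self-supervised target is the clean
# input, while the network sees a noise-corrupted version of it.
class DenoisingAE(nn.Module):
    def __init__(self, dim_in=64, dim_code=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(dim_in, dim_code), nn.ReLU())
        self.dec = nn.Linear(dim_code, dim_in)

    def forward(self, x):
        return self.dec(self.enc(x))

model = DenoisingAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(512, 64)                   # toy unlabeled data
for _ in range(100):
    noisy = X + 0.3 * torch.randn_like(X)  # the manual "transform": added noise
    loss = loss_fn(model(noisy), X)        # reconstruct the clean input
    opt.zero_grad()
    loss.backward()
    opt.step()
```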