2013 IEEE International Conference on Acoustics, Speech and Signal Processing
DOI: 10.1109/icassp.2013.6639343

Building high-level features using large scale unsupervised learning

Abstract: We consider the problem of building high-level, class-specific feature detectors from only unlabeled data. For example, is it possible to learn a face detector using only unlabeled images? To answer this, we train a 9-layered locally connected sparse autoencoder with pooling and local contrast normalization on a large dataset of images (the model has 1 billion connections, the dataset has 10 million 200x200 pixel images downloaded from the Internet). We train this network using model parallelism and asynchronous…
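
The network described in the abstract is far larger than anything that can be shown in a few lines, but its basic building block, an autoencoder whose hidden activations are encouraged to be sparse and which is trained by stochastic gradient descent, can be sketched compactly. The sketch below is an illustrative toy version only: the layer sizes, the L1 sparsity penalty, the learning rate, and the random stand-in data are assumptions, not the paper's formulation (which also uses local receptive fields, pooling, and local contrast normalization).

    # Minimal sketch (not the paper's implementation): a single sparse
    # autoencoder trained by SGD. All sizes and constants are illustrative.
    import torch
    import torch.nn as nn

    class SparseAutoencoder(nn.Module):
        def __init__(self, n_visible=1024, n_hidden=256, sparsity_weight=1e-3):
            super().__init__()
            self.encoder = nn.Linear(n_visible, n_hidden)
            self.decoder = nn.Linear(n_hidden, n_visible)
            self.sparsity_weight = sparsity_weight

        def forward(self, x):
            h = torch.sigmoid(self.encoder(x))   # hidden feature activations
            x_hat = self.decoder(h)              # linear reconstruction
            return x_hat, h

    model = SparseAutoencoder()
    opt = torch.optim.SGD(model.parameters(), lr=0.1)
    x = torch.rand(64, 1024)                     # stand-in for flattened image patches

    for step in range(100):
        opt.zero_grad()
        x_hat, h = model(x)
        # reconstruction error plus an L1 penalty that pushes the code to be sparse
        loss = ((x_hat - x) ** 2).mean() + model.sparsity_weight * h.abs().mean()
        loss.backward()
        opt.step()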

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
2

Citation Types

7
846
3
10

Year Published

2014
2014
2023
2023

Publication Types

Select...
4
2

Relationship

0
6

Authors

Journals

Cited by 1,226 publications (866 citation statements). References 27 publications.
“…ANNs of many hidden layers) through a pretraining phase in which a stack of autoencoders is trained in sequence, each one from the previous (Bengio et al., 2007; Hinton et al., 2006; Le et al., 2012; Marc'Aurelio et al., 2007), leading to a hierarchy of increasingly high-level features. Because it was thought that backpropagation struggles to train networks of many layers directly, pre-training a stack of such autoencoders and then later completing training through e.g.…”
Section: Autoencoders in Deep Learning (citation type: mentioning, confidence: 99%)
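
The greedy layer-wise scheme described in the statement above can be illustrated with a short, self-contained sketch: each autoencoder is trained to reconstruct the codes produced by the layer below it, and its encoder is then kept as one layer of the deep network. The layer sizes, step counts, and random stand-in data are illustrative assumptions, not taken from any of the cited works.

    # Sketch of greedy layer-wise pretraining of a stack of autoencoders.
    import torch
    import torch.nn as nn

    def train_autoencoder(data, n_hidden, steps=200, lr=0.1):
        """Train one autoencoder on `data`; return its encoder and the codes it produces."""
        n_visible = data.shape[1]
        enc = nn.Linear(n_visible, n_hidden)
        dec = nn.Linear(n_hidden, n_visible)
        opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            h = torch.sigmoid(enc(data))
            loss = ((dec(h) - data) ** 2).mean()   # reconstruction error
            loss.backward()
            opt.step()
        with torch.no_grad():
            codes = torch.sigmoid(enc(data))       # features fed to the next layer
        return enc, codes

    x = torch.rand(256, 784)                       # stand-in for input vectors
    layer_sizes = [256, 64, 16]                    # increasingly high-level features
    encoders, h = [], x
    for n_hidden in layer_sizes:
        enc, h = train_autoencoder(h, n_hidden)    # each layer trained on the previous codes
        encoders.append(enc)                       # the stack can later be fine-tuned end to end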
“…Another appeal of autoencoders is that there are many ways to train them and many tricks to encourage them to produce meaningful features (Ranzato et al., 2006; Le et al., 2012). While RBMs (a kind of probabilistic model) can play a similar role to autoencoders, classic autoencoders in deep learning are generally trained through some form of stochastic gradient descent (Le et al., 2012; Bengio et al., 2007) (like backpropagation), as is the case in this paper.…”
Section: Autoencoders in Deep Learning (citation type: mentioning, confidence: 99%)
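
As one concrete example of a trick used to push autoencoders toward meaningful features, the sketch below uses a denoising objective: the network must reconstruct the clean input from a randomly corrupted copy. This particular trick is chosen purely for illustration and is not claimed to be the method of the works cited in the statement above; the corruption rate, sizes, and data are assumptions.

    # Sketch of a denoising objective for an autoencoder (illustrative only).
    import torch
    import torch.nn as nn

    enc = nn.Linear(784, 128)
    dec = nn.Linear(128, 784)
    opt = torch.optim.SGD(list(enc.parameters()) + list(dec.parameters()), lr=0.1)

    x = torch.rand(64, 784)                        # stand-in for input vectors
    for _ in range(100):
        noisy = x * (torch.rand_like(x) > 0.3).float()  # randomly zero out ~30% of inputs
        opt.zero_grad()
        x_hat = dec(torch.sigmoid(enc(noisy)))
        loss = ((x_hat - x) ** 2).mean()           # reconstruct the *clean* input
        loss.backward()
        opt.step()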