Proceedings of the 25th International Conference on Machine Learning - ICML '08 2008
DOI: 10.1145/1390156.1390294
Extracting and composing robust features with denoising autoencoders

Abstract: Previous work has shown that the difficulties in learning deep generative or discriminative models can be overcome by an initial unsupervised learning step that maps inputs to useful intermediate representations. We introduce and motivate a new training principle for unsupervised learning of a representation based on the idea of making the learned representations robust to partial corruption of the input pattern. This approach can be used to train autoencoders, and these denoising autoencoders can be stacked t…
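The training principle the abstract describes (encode a corrupted version of the input, but reconstruct the clean one) can be sketched in a few lines of NumPy. This is an illustrative toy, not the paper's implementation: the synthetic data, layer sizes, masking-noise level, and learning rate are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 200 points on a 3-dimensional manifold embedded in 20 dimensions.
Z = rng.random((200, 3))
A = rng.normal(size=(3, 20))
X = sigmoid(Z @ A)

n_in, n_hidden = 20, 8
W = rng.normal(0.0, 0.1, (n_in, n_hidden))  # tied weights: W encodes, W.T decodes
b_h = np.zeros(n_hidden)
b_v = np.zeros(n_in)

def reconstruct(X):
    h = sigmoid(X @ W + b_h)
    return sigmoid(h @ W.T + b_v)

def corrupt(x, p=0.3):
    """Masking noise: zero each input component with probability p."""
    return x * (rng.random(x.shape) >= p)

err0 = np.mean((reconstruct(X) - X) ** 2)   # reconstruction error before training

lr = 0.5
for epoch in range(200):
    x_tilde = corrupt(X)                    # encode the *corrupted* input ...
    h = sigmoid(x_tilde @ W + b_h)
    x_hat = sigmoid(h @ W.T + b_v)
    # ... but penalise reconstruction error against the *clean* input
    # (gradient of sigmoid cross-entropy, tied-weight backprop).
    d_out = (x_hat - X) / len(X)
    d_hid = (d_out @ W) * h * (1 - h)
    W -= lr * (x_tilde.T @ d_hid + d_out.T @ h)
    b_v -= lr * d_out.sum(axis=0)
    b_h -= lr * d_hid.sum(axis=0)

err1 = np.mean((reconstruct(X) - X) ** 2)   # error after training
```

Stacking then repeats this step: the learned hidden codes become the input to the next denoising autoencoder, and the resulting deep network can be fine-tuned with backpropagation.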

Cited by 5,878 publications (3,910 citation statements); References 22 publications
“…5.6.1) them through BP (Sec. 5.5) (Bengio et al, 2007;Vincent et al, 2008;Erhan et al, 2010). Sparse coding (Sec.…”
Section: /7: UL For Deep Belief Network / AE Stacks Fine-tuned By BP
confidence: 99%
“…It creates synthetic entities of the minority class during the model training phase to regularize the prediction models to avoid overfitting and to learn structures representing minority entities. In many ways, SMOTE resembles distortion-based model regularization techniques [34,35]. In this section, we will shortly study the adaptation of the algorithm for LTV prediction.…”
Section: Imbalance In Behavioral Datasets and Synthetic Minority Oversampling
confidence: 99%
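The interpolation step the statement above describes can be sketched as follows; the function name, neighbour count, and brute-force distance computation are illustrative choices, not taken from the cited work:

```python
import numpy as np

def smote(X_min, n_new, k=5, rng=None):
    """Create n_new synthetic minority samples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate a
    random fraction of the way between the two."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(X_min)
    # Brute-force pairwise distances within the minority class.
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)                # exclude self-matches
    nn = np.argsort(d, axis=1)[:, :k]          # k nearest-neighbour indices
    out = np.empty((n_new, X_min.shape[1]))
    for i in range(n_new):
        a = rng.integers(n)                    # a random minority sample
        b = nn[a, rng.integers(min(k, n - 1))] # one of its neighbours
        gap = rng.random()                     # interpolation factor in [0, 1)
        out[i] = X_min[a] + gap * (X_min[b] - X_min[a])
    return out
```

Because each synthetic point lies on a segment between two real minority samples, the new points stay inside the convex hull of the minority class, which is the distortion-like regularizing effect the statement refers to.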
“…The conventional autoencoder with a mere reconstruction cost can hardly learn any useful latent representation of the data. To allow the autoencoder to discover better representations, many autoencoder regularization schemes, such as denoising autoencoders [23] and contractive autoencoders [24], have been proposed. However, these autoencoder variants use only static and temporally uncorrelated object images to learn features for object recognition tasks.…”
Section: Autoencoder With Temporal Slowness Constraint
confidence: 99%
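For contrast with the denoising approach, the contractive regularizer the statement cites as [24] penalises the encoder's sensitivity to input perturbations directly, via the squared Frobenius norm of the Jacobian of the hidden representation with respect to the input. For a sigmoid encoder this has a closed form; a minimal sketch, where the function name and array shapes are assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_penalty(X, W, b):
    """Squared Frobenius norm of dh/dx for h = sigmoid(X @ W + b),
    averaged over the rows of X.

    For a sigmoid encoder the Jacobian factorises as
    J[i, j] = h_j * (1 - h_j) * W[i, j], so
    ||J||_F^2 = sum_j (h_j * (1 - h_j))^2 * sum_i W[i, j]^2.
    """
    h = sigmoid(X @ W + b)                 # (n_samples, n_hidden)
    dh2 = (h * (1.0 - h)) ** 2             # squared sigmoid derivatives
    col_norms = (W ** 2).sum(axis=0)       # (n_hidden,) column norms of W
    return float((dh2 * col_norms).sum(axis=1).mean())
```

During training this term would be added, with a weight, to the reconstruction cost; denoising autoencoders instead apply the robustness pressure stochastically, through the corrupted inputs.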