2016
DOI: 10.1007/978-3-319-46128-1_43
Composite Denoising Autoencoders

Abstract: In representation learning, it is often desirable to learn features at different levels of scale. For example, in image data, some edges will span only a few pixels, whereas others will span a large portion of the image. We introduce an unsupervised representation learning method called a composite denoising autoencoder (CDA) to address this. We exploit the observation from previous work that in a denoising autoencoder, training with lower levels of noise results in more specific, fine-grained features…
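The mechanism the abstract alludes to can be made concrete with a small sketch. What follows is a hypothetical illustration, not the authors' code: the class name CompositeDAE, the group sizes, and the noise levels are all assumptions. The hidden representation is split into groups, each group encodes a copy of the input corrupted at its own noise level, and a shared decoder reconstructs the clean input, so high-noise groups tend toward coarse features and low-noise groups toward fine ones.

```python
import torch
import torch.nn as nn

class CompositeDAE(nn.Module):
    def __init__(self, n_in=784, group_sizes=(256, 256), noise_levels=(0.5, 0.1)):
        super().__init__()
        assert len(group_sizes) == len(noise_levels)
        self.noise_levels = noise_levels
        # one encoder per hidden-unit group, each paired with its own noise level
        self.encoders = nn.ModuleList(nn.Linear(n_in, h) for h in group_sizes)
        # a single decoder reconstructs the clean input from all groups at once
        self.decoder = nn.Linear(sum(group_sizes), n_in)

    def forward(self, x):
        codes = []
        for enc, p in zip(self.encoders, self.noise_levels):
            # masking noise: zero each feature independently with probability p
            mask = (torch.rand_like(x) > p).float()
            codes.append(torch.sigmoid(enc(x * mask)))
        return torch.sigmoid(self.decoder(torch.cat(codes, dim=1)))

model = CompositeDAE()
x = torch.rand(32, 784)                                  # toy batch in [0, 1]
loss = nn.functional.binary_cross_entropy(model(x), x)   # reconstruct clean x
loss.backward()
```

Because the reconstruction target is always the uncorrupted input, each group is pushed to specialize at the scale its own corruption level permits.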

Cited by 10 publications (15 citation statements) · References 11 publications
“…Although the DA provides a representation that is supposedly more robust to noise, the learnt parameters are susceptible to the level of noise applied to the original vectors; a high level of noise will harm the learning of robust representations. To overcome this issue, a scheduled DA (ScheDA) has been introduced, in which the network is trained on a gradually decreasing level of noise used to corrupt the feature vectors. Initially, a high level of noise forces the network to learn global and coarse-grained features.…”
Section: Literature Review
confidence: 99%
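As a hedged illustration of the schedule this statement describes (the function name, the linear annealing, and all constants are assumptions, not ScheDA's published recipe), the corruption level can simply be annealed across epochs:

```python
import numpy as np

def noise_schedule(epoch, n_epochs, p_start=0.7, p_end=0.1):
    """Linearly anneal the masking probability from coarse to fine."""
    t = epoch / max(n_epochs - 1, 1)
    return p_start + t * (p_end - p_start)

for epoch in range(10):
    p = noise_schedule(epoch, 10)
    x = np.random.rand(32, 784)                   # toy batch
    x_noisy = x * (np.random.rand(*x.shape) > p)  # masking corruption at level p
    # a DAE training step on (x_noisy -> x) would go here
```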
“…Then, by decreasing the noise levels, finer representations are learnt. Alternatively, composite DAs have been proposed, in which, at each stage of training, data with a specific noise level is presented to the network and only a subset of hidden layers is tuned.…”
Section: Literature Review
confidence: 99%
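A sketch of the staged training this statement describes, reusing the hypothetical CompositeDAE class from the sketch under the abstract (the step count, learning rate, and freezing scheme are assumptions): only the encoder group assigned to the current noise level is updated at each stage, while the other groups are frozen.

```python
import torch

# assumes the CompositeDAE class sketched under the abstract is in scope
model = CompositeDAE(group_sizes=(256, 256), noise_levels=(0.5, 0.1))
for active in model.encoders:
    for enc in model.encoders:
        # freeze every encoder group except the one assigned to this stage;
        # the shared decoder stays trainable throughout
        enc.requires_grad_(enc is active)
    params = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(params, lr=0.01)
    for _ in range(100):                          # a few steps per stage (toy)
        x = torch.rand(32, 784)
        loss = torch.nn.functional.binary_cross_entropy(model(x), x)
        opt.zero_grad()
        loss.backward()
        opt.step()
```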
“…A DAE is an autoencoder trained to reconstruct a signal that has been corrupted with artificial noise. [34][35][36][37] Previously, Alain and Bengio 38 and Nguyen et al 39 used DAEs to construct generative models and pointed out that the output of an optimal DAE is a local mean of the true data density, and that the autoencoder error (the difference between its output and input) is a mean-shift vector, 40 where the expectation is over all images u and Gaussian noise with a given standard deviation. It should be pointed out that the formulation in Equation 2 resembles the nonlocal total variation and block-matching-and-3D-filtering (BM3D) priors.…”
Section: DAEP
confidence: 99%
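The mean-shift interpretation quoted above is usually written compactly. The following is a standard statement of the Alain and Bengio result in assumed notation, not a formula taken from the paper under discussion:

```latex
% r^*(x): output of the optimal DAE trained with Gaussian corruption of
% standard deviation \sigma; p(x): true data density.
% The autoencoder error approximates the score of the data distribution:
r^*(x) - x \;\approx\; \sigma^2 \, \nabla_x \log p(x) \quad \text{as } \sigma \to 0
```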
“…The denoising autoencoder (DAE) [1][2][3][4][5] is an extension of the classical autoencoder [6,7], in which feature denoising is key to the autoencoder generating better features. In contrast to the classic autoencoder, the input vector of a DAE is first corrupted by randomly setting some of its features to zero.…”
Section: Introduction
confidence: 99%
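A minimal illustration of the masking corruption described above (the probability value and array size are arbitrary choices for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(8)                    # toy input vector
p = 0.3                              # corruption probability
x_tilde = x * (rng.random(8) >= p)   # zero each feature independently w.p. p
print(x)
print(x_tilde)                       # the DAE is trained to map x_tilde -> x
```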