Advances in Independent Component Analysis and Learning Machines 2015
DOI: 10.1016/b978-0-12-802806-3.00008-7
From neural PCA to deep unsupervised learning

Abstract: A network supporting deep unsupervised learning is presented. The network is an autoencoder with lateral shortcut connections from the encoder to the decoder at each level of the hierarchy. The lateral shortcut connections allow the higher levels of the hierarchy to focus on abstract invariant features. While standard autoencoders are analogous to latent variable models with a single layer of stochastic variables, the proposed network is analogous to hierarchical latent variable models. Learning combines denoising…
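The architecture the abstract describes can be sketched in a few lines: an encoder, and a decoder in which each level combines a lateral shortcut from the corresponding encoder level with the top-down signal. This is a minimal illustrative sketch only — the layer sizes, the fixed 50/50 linear combinator, and the random weights are assumptions for demonstration; the actual network learns its combinator and weights via a denoising objective.

```python
import numpy as np

rng = np.random.default_rng(0)
relu = lambda a: np.maximum(a, 0.0)

# Hypothetical layer sizes, chosen for illustration only.
d_in, d_h1, d_h2 = 8, 6, 4
W1 = 0.1 * rng.normal(size=(d_h1, d_in))   # encoder weights
W2 = 0.1 * rng.normal(size=(d_h2, d_h1))
V2 = 0.1 * rng.normal(size=(d_h1, d_h2))   # top-down decoder weights
V1 = 0.1 * rng.normal(size=(d_in, d_h1))

def combinator(lateral, topdown):
    # Fixed 50/50 mix for illustration; the real model learns this function.
    return 0.5 * lateral + 0.5 * topdown

def reconstruct(x, noise_std=0.1):
    # Corrupted encoder pass (denoising objective).
    z0 = x + rng.normal(scale=noise_std, size=d_in)
    z1 = relu(W1 @ z0) + rng.normal(scale=noise_std, size=d_h1)
    z2 = relu(W2 @ z1) + rng.normal(scale=noise_std, size=d_h2)
    # Decoder: each level combines its lateral shortcut with the
    # top-down signal, so higher levels need not carry every detail.
    zhat2 = z2
    zhat1 = combinator(z1, V2 @ zhat2)
    xhat = combinator(z0, V1 @ zhat1)
    return xhat

x = rng.normal(size=d_in)
print(reconstruct(x).shape)  # prints (8,)
```

Because the decoder can copy details through the shortcuts, the top of the hierarchy is free to represent only the abstract, invariant features — the property the abstract highlights.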

Cited by 135 publications (86 citation statements) · References 21 publications
“…2). Our model bears some resemblance to the Ladder Network [9] which is also a hierarchical latent variable model where inference results in an autoencoder with skip connections. Our work differs substantially from that work in how inference is performed.…”
Section: Methods (citation type: mentioning; confidence: 99%)
“…The method bears some similarities to the so-called Ladder Networks, introduced in [28] and used in the context of semi-supervised learning in [29]. However, our method explicitly optimises the combined supervised and unsupervised criteria for each layer of the network independently, whereas in Ladder Networks the optimisation is performed for all layers conjunctively.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
“…In Kingma et al. [11], semi-supervised learning was used with deep generative models, showing state-of-the-art performance on the MNIST data set. In Rasmus et al. [12], unsupervised Ladder networks [13] were extended by adding a supervised learning component. Their resulting model reached state-of-the-art performance on both MNIST and CIFAR-10.…”
Section: Related Work (citation type: mentioning; confidence: 99%)