2019 IEEE Second International Conference on Artificial Intelligence and Knowledge Engineering (AIKE) 2019
DOI: 10.1109/aike.2019.00044
Empirical Comparison between Autoencoders and Traditional Dimensionality Reduction Methods

Abstract: In order to process ever-higher-dimensional data such as images, sentences, or audio recordings efficiently, one needs a proper way to reduce the dimensionality of such data. In this regard, SVD-based methods including PCA and Isomap have been extensively used. Recently, a neural-network alternative called the autoencoder has been proposed and is often preferred for its greater flexibility. This work aims to show that PCA is still a relevant technique for dimensionality reduction in the context of classific…
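The PCA baseline the abstract refers to can be sketched in a few lines of NumPy via SVD; `pca_reduce` is a hypothetical helper written for illustration, not code from the paper.

```python
import numpy as np

def pca_reduce(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components via SVD."""
    Xc = X - X.mean(axis=0)                            # center each feature
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)  # rows of Vt are the principal axes
    return Xc @ Vt[:k].T                               # coordinates in the k-dim subspace

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))   # toy data: 100 samples, 20 features
Z = pca_reduce(X, 3)
print(Z.shape)                   # (100, 3)
```

Because SVD orders singular values, the first returned coordinate carries the most variance, the second the next most, and so on.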

Cited by 43 publications (43 citation statements); References 4 publications (6 reference statements)
“…The processing period for PCA was two orders of magnitude faster than its equivalents in the neural network. The two auto-encoders had a sufficiently broad dimension [78].…”
Section: Review for PCA Algorithm
confidence: 99%
“…The encoder extracts the important functions hidden in the input data in a compressed form, and the decoder recovers it based on the compressed data from the encoder. In general, as the number of neurons in the hidden layer that extracts important features is smaller than the input, data can be compressed [29]. The DAE is a slight modification of the basic AE learning method to further strengthen the restoration process [30].…”
Section: Denoising Autoencoder (DAE)
confidence: 99%
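The encode-compress-decode scheme described in that statement, with the denoising twist of [30], can be illustrated by a minimal tied-weight linear DAE in NumPy. All names and hyperparameters here are illustrative, not taken from the cited works: the input is corrupted with Gaussian noise, but reconstruction error is measured against the clean input.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))            # clean data: 200 samples, 8 features
W = rng.normal(scale=0.1, size=(8, 3))   # encoder weights; decoder reuses W.T (tied)

lr, losses = 0.01, []
for _ in range(500):
    X_noisy = X + rng.normal(scale=0.3, size=X.shape)  # corrupt the input ...
    H = X_noisy @ W                                    # encode: compress 8 -> 3 dims
    X_hat = H @ W.T                                    # decode: reconstruct
    err = X_hat - X                                    # ... but score against clean X
    losses.append(np.mean(err ** 2))
    # gradient of the squared reconstruction loss w.r.t. the tied weight matrix W
    grad = (X_noisy.T @ err @ W + err.T @ X_noisy @ W) / len(X)
    W -= lr * grad

print(losses[0], losses[-1])   # reconstruction error drops as training proceeds
```

Training against the clean target is what "strengthens the restoration process": the network cannot simply copy its input, since the input it sees is noisy.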
“…However, by using PCA beforehand, this limitation can be overcome. A comparison of the respective advantages of AEs in contrast to PCA is given in [47]. The computational complexity depends on the target dimensionality d, the number of iterations i, and the number of weights w in a neural network. It is O(in²) for SM, O(inw) for AE, and O(imd³) for LLC and MC.…”
Section: Non-linear, Non-convex Feature Extraction
confidence: 99%
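One practical consequence of the O(inw) bound for AEs: running PCA first shrinks the input dimensionality, and with it the weight count w that the per-iteration cost scales with. A back-of-the-envelope sketch (the dimensions below are hypothetical, not taken from [47]):

```python
# Weight count of a one-hidden-layer autoencoder (encoder + decoder, biases ignored).
def ae_weight_count(d_in, d_code):
    return 2 * d_in * d_code        # d_in*d_code (encoder) + d_code*d_in (decoder)

d_raw, d_pca, d_code = 784, 50, 10  # e.g. raw 28x28 images vs. 50 PCA components
print(ae_weight_count(d_raw, d_code))   # 15680 weights on the raw input
print(ae_weight_count(d_pca, d_code))   # 1000 weights after PCA preprocessing
```

With w reduced roughly 16-fold in this toy setting, each of the i training iterations over n samples gets correspondingly cheaper.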