2019
DOI: 10.1007/978-981-32-9690-9_6

Reducing Dimensionality of Data Using Autoencoders

Cited by 5 publications (4 citation statements)
References 4 publications

“…Even with the reduced number of features, clustering the remaining high-dimensional dataset is problematic due to the lack of scalability of clustering methods. To address this problem, we explored dimension reduction methods (see Methods), focusing on the well-established method PCA and a more recent deep-learning approach based on autoencoders (AE) [14]. An important tuning parameter for these methods is the number of reduced dimensions (i.e., the number of PCs or the number of AE embeddings).…”
Section: Results
Citation type: mentioning
Confidence: 99%
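
The excerpt points to the number of retained dimensions as the key tuning parameter. A minimal sketch of how that parameter might be swept for PCA with scikit-learn (the synthetic data, the 90% variance threshold, and all variable names are illustrative assumptions, not taken from the cited work):

```python
# Choose the number of PCA components by cumulative explained variance.
# The threshold and the data are assumptions for illustration only.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 200))            # placeholder high-dimensional data

pca = PCA().fit(X)
cumvar = np.cumsum(pca.explained_variance_ratio_)
k = int(np.searchsorted(cumvar, 0.90) + 1)  # smallest k reaching 90% variance
X_reduced = PCA(n_components=k).fit_transform(X)
print(k, X_reduced.shape)                   # k reduced dimensions per sample
```

The same sweep applies to an autoencoder by varying the width of its latent layer, as in the sketch after the last excerpt below.
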
“…According to Fournier and Aloise [9], working with a large number of dimensions in the feature space causes the volume of the data space to grow exponentially with the dimension, and the data therefore becomes sparse. This problem, known as the curse of dimensionality, leads to overfitting, since the machine learning model can too easily fit an apparently accurate solution to the sparse data [10]. To tackle this problem, high-dimensional data can be projected into a lower-dimensional (latent) space using various linear or non-linear techniques.…”
Section: A. Autoencoder-Based Dimensionality Reduction
Citation type: mentioning
Confidence: 99%
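
The sparsity the excerpt describes can be made concrete with a short numerical illustration (entirely synthetic, not drawn from [9] or [10]): as the dimension grows, pairwise distances between uniformly sampled points concentrate, so the relative contrast between the nearest and farthest neighbor shrinks and the space effectively empties out.

```python
# Distance concentration as dimensionality grows (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, d))
    dist = np.linalg.norm(X[1:] - X[0], axis=1)      # distances to one point
    contrast = (dist.max() - dist.min()) / dist.min()
    print(f"d={d:4d}  relative contrast={contrast:.3f}")
```
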
“…AEs are neural networks with identical input and output dimensions, trained to compress the input into a lower-dimensional space (also called the latent space) in such a way that the input can be recovered from the latent space with minimal error [4]. In the literature, AEs have been used for several applications, including dimensionality reduction [5] and the detection of errors and anomalies [6]–[8].…”
Section: A. Autoencoders for SNR Estimation
Citation type: mentioning
Confidence: 99%
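
A minimal sketch of the architecture this excerpt describes, written in PyTorch (the layer sizes, latent dimension, learning rate, and training data are illustrative assumptions): the encoder compresses the input to a low-dimensional code, the decoder maps that code back to the input dimension, and training minimizes the reconstruction error between output and input.

```python
# Minimal autoencoder: identical input/output dimensions, a bottleneck latent
# space, trained to minimize reconstruction error. All sizes are assumptions.
import torch
import torch.nn as nn

input_dim, latent_dim = 200, 16           # illustrative dimensions

encoder = nn.Sequential(nn.Linear(input_dim, 64), nn.ReLU(),
                        nn.Linear(64, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                        nn.Linear(64, input_dim))
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(1000, input_dim)          # placeholder dataset
for _ in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(X), X)           # output must reconstruct the input
    loss.backward()
    optimizer.step()

Z = encoder(X).detach()                   # reduced representation, shape (1000, 16)
```

The latent width (16 here) plays the same role as the number of PCs in PCA: if the input can be reconstructed with small error, the latent space has preserved most of the structure of the data.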