2019 · DOI: 10.2991/ijcis.2018.125905686

AEkNN: An AutoEncoder kNN–Based Classifier With Built-in Dimensionality Reduction

Abstract: High dimensionality, i.e., data having a large number of variables, tends to be a challenge for most machine learning tasks, including classification. A classifier usually builds a model representing how a set of inputs explains the outputs. The larger the set of inputs and/or outputs, the more complex that model becomes. There is a family of classification algorithms, known as lazy learning methods, that does not build a model. One of the best-known members of this family is the kNN algorithm. Its strategy…
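The pipeline the abstract outlines (compress the input space with an autoencoder, then run kNN on the learned representation) can be sketched in a few lines. This is a minimal illustration rather than the paper's AEkNN implementation: the digits dataset, the single-hidden-layer topology, and the 8-dimensional code size are all assumptions chosen for the example.

```python
from tensorflow import keras
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the paper's benchmarks: 8x8 digit images, 64 features.
X, y = load_digits(return_X_y=True)
X = X / 16.0                                  # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Deliberately small autoencoder; the 8-dimensional code is an assumption,
# not the topology used in the paper.
inputs = keras.Input(shape=(64,))
code = keras.layers.Dense(8, activation="relu")(inputs)
outputs = keras.layers.Dense(64, activation="sigmoid")(code)
autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)           # encoder half only

autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_tr, X_tr, epochs=50, batch_size=32, verbose=0)

# kNN classifies in the learned low-dimensional space instead of the raw
# 64-dimensional input space.
knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(encoder.predict(X_tr, verbose=0), y_tr)
print("accuracy on encoded features:",
      knn.score(encoder.predict(X_te, verbose=0), y_te))
```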

Cited by 15 publications (10 citation statements)
References 62 publications
“…Table 2 provides the Bayesian autoencoder topology that is used in the experiments described in this section. The autoencoder is trained, and the compressed or reduced dataset is used with another machine learning model for training to get classification accuracy, analogously to [85,86,87,88,89,90]. Note that we consider two topologies for Swiss Roll: the starred variant does not pass the information through large layers but reduces it directly to two dimensions.…”
Section: Methods
confidence: 99%
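The evaluation protocol this statement describes (train an autoencoder, then hand the compressed dataset to a separate model and report its classification accuracy) can be sketched as follows. The citing paper's Bayesian autoencoder topology is not reproduced here; a plain deterministic autoencoder that maps the 3-D Swiss Roll directly to two dimensions stands in for its starred variant, and binning the roll's unrolled coordinate into four classes is an assumption made only to obtain labels for the accuracy measurement.

```python
import numpy as np
from tensorflow import keras
from sklearn.datasets import make_swiss_roll
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Swiss Roll: 3-D points X plus the 1-D position t along the roll.
X, t = make_swiss_roll(n_samples=2000, noise=0.1, random_state=0)
y = np.digitize(t, np.quantile(t, [0.25, 0.5, 0.75]))   # 4 synthetic classes
X = (X - X.mean(axis=0)) / X.std(axis=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# Deterministic stand-in for the Bayesian autoencoder: 3 -> 2 -> 3, reducing
# directly to two dimensions as in the starred Swiss Roll variant.
inputs = keras.Input(shape=(3,))
code = keras.layers.Dense(2)(inputs)
outputs = keras.layers.Dense(3)(code)
autoencoder = keras.Model(inputs, outputs)
encoder = keras.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_tr, X_tr, epochs=100, batch_size=64, verbose=0)

# Train another machine learning model on the compressed dataset and report
# its classification accuracy.
clf = LogisticRegression(max_iter=1000)
clf.fit(encoder.predict(X_tr, verbose=0), y_tr)
print("accuracy on 2-D codes:",
      clf.score(encoder.predict(X_te, verbose=0), y_te))
```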
“…AEs have been proposed for diverse tasks related to feature fusion [32] and dimensionality reduction to facilitate the learning of canonical classifiers [33]. In astronomy, AEs show great potential for the processing and storage of large datasets of images (e.g., https://www.kaggle.com/c/galaxy-zoo-the-galaxy-challenge).…”
Section: A. Autoencoders for Feature Extraction on Galaxy Images
confidence: 99%
“…Recent state-of-the-art CNNs are usually composed of a very large number of layers [30] when dealing with challenging image classification problems [31]. Alternatively, DL can also be used to extract features of an image by means of autoencoders (AEs) [32], which have also been proposed to ease the learning of standard classifiers [33]. Whereas CNNs often need to learn image features from scratch using a large amount of labelled data, AEs enable the encapsulation of the FE process for a particular problem without any need for labels, which can be advantageous for the classification of big collections of images and the use of other kinds of classifiers [33].…”
Section: Introduction
confidence: 99%
“…In other words, if the difference between the features is not large, the accuracy is lowered because the number of errors with TLS is small. K-NN is also susceptible to the curse of dimensionality: if dimensionality reduction is applied, higher accuracy will result [35,36].…”
Section: Performance Comparison of User Recognition Algorithms
confidence: 99%
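The effect this statement describes (kNN accuracy improving once the dimensionality is reduced) can be demonstrated with a short sketch. PCA is used here instead of an autoencoder, a deliberate substitution for brevity; the synthetic dataset, in which only 10 of 500 features carry signal, and the 20-component target are likewise assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

# Synthetic high-dimensional data: only 10 of 500 features carry signal, so
# distances in the raw space are dominated by the 490 noise dimensions.
X, y = make_classification(n_samples=2000, n_features=500, n_informative=10,
                           n_redundant=0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

# kNN on the raw 500-dimensional input.
raw = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("raw 500-D accuracy:", raw.score(X_te, y_te))

# The same kNN after PCA; the 20-component target is arbitrary.
reduced = make_pipeline(PCA(n_components=20),
                        KNeighborsClassifier(n_neighbors=5))
reduced.fit(X_tr, y_tr)
print("PCA 20-D accuracy:", reduced.score(X_te, y_te))
```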