2017 IEEE Second International Conference on Data Science in Cyberspace (DSC)
DOI: 10.1109/dsc.2017.32

Auto-Encoder Based for High Spectral Dimensional Data Classification and Visualization

Cited by 12 publications (6 citation statements) · References 13 publications
“…To learn high-level representations from data, the work [221] proposed a combination of multi-layer AEs with maximum noise fraction, which reduces the spectral dimensionality of HSI, while a softmax logistic-regression classifier is employed for HSIC. The study reported in [222] combined the multi-manifold learning framework proposed by [223] with the Contractive Autoencoder [224] for improved unsupervised HSIC.…”
Section: Autoencoders (AE)
mentioning, confidence: 99%
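The pipeline this statement describes — an autoencoder that reduces spectral dimensionality, followed by a softmax logistic-regression classifier — can be sketched in plain NumPy. Everything below (toy two-class data, layer sizes, learning rates) is an illustrative assumption, not code from the cited work, and the maximum-noise-fraction step is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for hyperspectral pixels: 200 samples x 50 spectral bands,
# two classes with different band means (all sizes are illustrative).
X = np.vstack([rng.normal(0.0, 1.0, (100, 50)),
               rng.normal(1.5, 1.0, (100, 50))])
y = np.array([0] * 100 + [1] * 100)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- single-hidden-layer autoencoder: 50 -> 8 -> 50 ---
d_in, d_hid = 50, 8
W1 = rng.normal(0, 0.1, (d_in, d_hid)); b1 = np.zeros(d_hid)
W2 = rng.normal(0, 0.1, (d_hid, d_in)); b2 = np.zeros(d_in)

lr = 0.05
for _ in range(500):                      # plain batch gradient descent
    H = sigmoid(X @ W1 + b1)              # encoder
    Xhat = H @ W2 + b2                    # linear decoder
    err = Xhat - X                        # reconstruction error
    gW2 = H.T @ err / len(X); gb2 = err.mean(0)
    dH = (err @ W2.T) * H * (1 - H)       # backprop through the sigmoid
    gW1 = X.T @ dH / len(X); gb1 = dH.mean(0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

Z = sigmoid(X @ W1 + b1)                  # reduced spectral features

# --- softmax logistic regression on the reduced features ---
K = 2
Wc = np.zeros((d_hid, K)); bc = np.zeros(K)
Y = np.eye(K)[y]                          # one-hot labels
for _ in range(500):
    logits = Z @ Wc + bc
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)          # softmax probabilities
    gW = Z.T @ (P - Y) / len(Z); gb = (P - Y).mean(0)
    Wc -= 0.5 * gW; bc -= 0.5 * gb

acc = (np.argmax(Z @ Wc + bc, 1) == y).mean()
print(f"training accuracy on AE features: {acc:.2f}")
```

A deeper, stacked encoder would simply repeat the encode step per layer; the single hidden layer keeps the sketch short.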
“…Studies show that none of the features individually fulfils the purpose, and they apparently produce varied data ranges with wide graphical representations (Table 1 and Figure 2). As it is difficult to visualize and interpret a space of more than 3 dimensions, the AE has been used to reduce the multidimensional space to 2 dimensions without any loss of information [19]. In the latent space provided by the hidden layers of the AE, the nonlinearly separable problem becomes a linearly separable one.…”
Section: Discussion
mentioning, confidence: 99%
“…From the input to the hidden layer, the decreasing number of nodes works as the encoder, i.e., it reduces the feature space; the subsequent increasing number of nodes works as the decoder. In this work the encoder part of the AE is used to reduce the dimension of the feature space [19,20] without any loss of information relevant to classification. The objective of the AE is to manipulate the features in the hidden layer in such a way that the decoder's reconstruction X̂ approximates the input X.…”
Section: Reduction of Feature Space by Auto-Encoder (AE)
mentioning, confidence: 99%
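The encoder-only reduction described here can be sketched with a 2-unit bottleneck, so the resulting latent codes would also suit the 2-D visualization mentioned in the preceding statement. The data, sizes, and training settings below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative feature table: 150 samples x 6 features (made-up data).
X = rng.normal(size=(150, 6))
X = (X - X.mean(0)) / X.std(0)            # standardise before encoding

# Funnel: 6 -> 2 (encoder) and 2 -> 6 (decoder); only the encoder is kept.
W_enc = rng.normal(0, 0.1, (6, 2)); b_enc = np.zeros(2)
W_dec = rng.normal(0, 0.1, (2, 6)); b_dec = np.zeros(6)

for _ in range(2000):
    H = np.tanh(X @ W_enc + b_enc)        # 2-D latent code
    Xhat = H @ W_dec + b_dec              # linear reconstruction
    err = Xhat - X
    g_dec = H.T @ err / len(X)
    dH = (err @ W_dec.T) * (1 - H ** 2)   # backprop through tanh
    g_enc = X.T @ dH / len(X)
    W_dec -= 0.1 * g_dec; b_dec -= 0.1 * err.mean(0)
    W_enc -= 0.1 * g_enc; b_enc -= 0.1 * dH.mean(0)

# The decoder is discarded after training; the encoder alone maps each
# sample to 2-D coordinates that can be fed to a classifier or plotted.
Z = np.tanh(X @ W_enc + b_enc)
print(Z.shape)                            # (150, 2)
```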
“…This ANN was fed a reduced set of features, tapped from a six-layer DAE. A special form of ANN, the DAE [40] can extract effective features from high-dimensional data in an unsupervised manner. The basic organization of a DAE resembles a funnel-shaped structure with a few intermediate layers.…”
Section: E. Feature Extraction Using Deep Auto-Encoder for ANN Training
mentioning, confidence: 99%
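The funnel-shaped, six-layer organization can be illustrated structurally with untrained weights; the layer widths below are assumptions for the sketch, not the cited DAE's actual configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Funnel-shaped widths for a six-layer DAE (widths are illustrative):
# the encoder half narrows 64 -> 32 -> 16 -> 8 and the decoder mirrors it.
widths = [64, 32, 16, 8, 16, 32, 64]
layers = [(rng.normal(0, 0.1, (m, n)), np.zeros(n))
          for m, n in zip(widths[:-1], widths[1:])]

def forward(x, upto=None):
    """Propagate through the DAE; `upto` stops after that many layers."""
    for W, b in layers[:upto]:
        x = np.tanh(x @ W + b)
    return x

x = rng.normal(size=(5, 64))          # batch of 5 high-dimensional samples
code = forward(x, upto=3)             # 8-D bottleneck features for the ANN
recon = forward(x)                    # full pass restores the input shape
print(code.shape, recon.shape)        # (5, 8) (5, 64)
```

Tapping the bottleneck (`upto=3`) after unsupervised reconstruction training is what supplies the reduced feature set to the downstream ANN.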