The 2nd International Electronic Conference on Geosciences 2019
DOI: 10.3390/iecg2019-06209
Comparison and Evaluation of Dimensionality Reduction Techniques for Hyperspectral Data Analysis

Abstract: Hyperspectral datasets provide explicit ground covers with hundreds of bands. Filtering contiguous hyperspectral datasets potentially discriminates surface features. Therefore, in this study, a number of spectral bands are minimized without losing original information through a process known as dimensionality reduction (DR). Redundant bands portray the fact that neighboring bands are highly correlated, sharing similar information. The benefits of utilizing dimensionality reduction include the ability to slacke…

Cited by 12 publications (9 citation statements) · References 6 publications (7 reference statements)
“…Each of these dimensionality reduction techniques works on a unique principle and has its own advantages and disadvantages. Researchers use single transformation techniques (PCA, MNF, ICA) or hybrid combinations of these techniques (e.g., PCA and ICA transforms performed on the MNF components) before classification, then compare the results [60], emphasizing that transformations such as MNF allow extraction of the most informative data, with a high signal-to-noise ratio (SNR), especially in the first MNF bands [60,61]. In the case of PCA, the bands are delivered as optimal data because total variance from the original images is maximized [62].…”
Section: Acronym
confidence: 99%
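The variance-maximizing behavior of PCA described above can be illustrated with a minimal sketch (not from the cited paper): scikit-learn's `PCA` applied to synthetic data mimicking highly correlated hyperspectral bands, where a handful of components captures nearly all of the original variance.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic "hyperspectral" data: 100 pixels x 50 highly correlated bands,
# generated from only 5 underlying signals plus a small amount of noise
rng = np.random.default_rng(0)
signals = rng.normal(size=(100, 5))
mixing = rng.normal(size=(5, 50))
cube = signals @ mixing + 0.01 * rng.normal(size=(100, 50))

# PCA orders components by explained variance, so the redundant bands
# collapse into a few components carrying almost all the information
pca = PCA(n_components=5)
reduced = pca.fit_transform(cube)
print(reduced.shape)                         # (100, 5)
print(pca.explained_variance_ratio_.sum())   # close to 1.0
```

Because neighboring bands share the same underlying signals, the first few principal components retain essentially all of the variance of the original 50 bands.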
“…In the case of PCA, the bands are delivered as optimal data because total variance from the original images is maximized [62]. According to researchers, MNF and PCA bands have a great impact on improving classification results [61,63], because PCA works on the data variance while MNF sorts the information based on the signal-to-noise ratio [60,[64][65][66]. For the AISA data, the first 15 MNF bands were highly decorrelated and used for classification; for the Sentinel-2 images, the first four MNF bands were used (Table 4), along with, respectively, four AISA PCA bands and three Sentinel-2 PCA bands (Table 4).…”
Section: Acronym
confidence: 99%
“…Dimensionality reduction was performed on the normalised dataset using the 't-Distributed Stochastic Neighbor Embedding (t-SNE)' algorithm (Van der Maaten et al., 2014; Van der Maaten and Hinton, 2008), which, like Principal Component Analysis (PCA) (Jolliffe, 2010), reduces high-dimensional objects to low-dimensional points through feature extraction. This simplified the dimensionality by reducing the 'noise' (Priyadarshini et al., 2019), i.e., the redundant similarities within the dataset, allowing the clustering algorithm to reveal the 'essence' therein and to cluster all observations on the basis of their significant differences. The t-SNE projected the high-dimensional data (the 24-hour features) into low-dimensional space, creating a two-dimensional output, by modelling the probability distribution of neighbouring data points in the dataset while preserving "as much of the significant structure of the high-dimensional data as possible in the low-dimensional map" (Van der Maaten and Hinton, 2008).…”
Section: Dimensionality Reduction
confidence: 99%
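The t-SNE projection described above can be sketched with scikit-learn's `TSNE`. This is an illustrative example on synthetic stand-ins for the 24-hour feature vectors, not the cited study's actual data or parameters:

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)
# Two well-separated high-dimensional clusters as stand-ins
# for 24-dimensional (hourly) feature vectors
cluster_a = rng.normal(loc=0.0, size=(50, 24))
cluster_b = rng.normal(loc=8.0, size=(50, 24))
features = np.vstack([cluster_a, cluster_b])

# t-SNE models the probability distribution of neighbouring points
# and projects to 2-D while preserving local structure, giving a
# compact map on which a clustering algorithm can then operate
tsne = TSNE(n_components=2, perplexity=30, random_state=0)
embedding = tsne.fit_transform(features)
print(embedding.shape)   # (100, 2)
```

The perplexity value here is the scikit-learn default-scale choice, not a parameter reported in the cited work; in practice it is tuned to the dataset size.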
“…Dimensionality reduction techniques make it possible to handle the multi-dimensional nature of hyperspectral data by removing the redundancy of noisy bands without losing the original information content, and in some cases they increase the final classification accuracy [74]. Many studies in the literature apply such techniques to HS data, or to the simultaneous exploitation of HS and LiDAR, often integrated with Machine Learning (ML), in particular Random Forest (RF) and/or Support Vector Machines (SVM) [62,63].…”
Section: Feature Selection and The Recursive Feature Elimination-rand
confidence: 99%
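The dimensionality-reduction-before-classification workflow mentioned above can be sketched as a scikit-learn pipeline. This is a minimal illustration on synthetic correlated-band data, assuming PCA as the reduction step and Random Forest as the classifier; it is not the cited studies' actual processing chain:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)
# Two spectral classes observed through 60 redundant, correlated bands
signals = rng.normal(size=(200, 4))
bands = signals @ rng.normal(size=(4, 60)) + 0.05 * rng.normal(size=(200, 60))
labels = (signals[:, 0] > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    bands, labels, test_size=0.25, random_state=0)

# Reduce the redundant bands first, then classify on the compact features
model = make_pipeline(PCA(n_components=4),
                      RandomForestClassifier(random_state=0))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))
```

Chaining the steps in a pipeline keeps the PCA fit restricted to the training split, which avoids leaking test-set statistics into the reduction.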