2020
DOI: 10.1109/tcyb.2019.2905793
Dimensionality Reduction of Hyperspectral Imagery Based on Spatial–Spectral Manifold Learning

Cited by 142 publications (64 citation statements)
References 41 publications
“…1) The first method uses SVM to classify the raw spectral features (SP-SVM). 2) The method called spatial-spectral manifold reconstruction preserving embedding (SSMRPE) for HSI classification [32]. The evaluation indicators used in this article include the overall classification accuracy (OA), average classification accuracy (AA), and Kappa coefficient (KC).…”
Section: B. Comparison Methods and Evaluation Indicators
confidence: 99%
“…Finally, the learned features were fed to an SVM for classification. Huang et al [32] first employed a weighted mean filter to smooth the image. Then, a spatial-spectral combined distance was used to fuse the spatial and spectral information when selecting the neighbors of each pixel.…”
Section: Introduction
confidence: 99%
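The weighted mean filtering step described in the excerpt above can be sketched as a band-wise spatial smoothing of the hyperspectral cube. This is a minimal illustration with a simple center-weighted 3x3 kernel and zero padding; the exact weights used by Huang et al. [32] are not given here, so the kernel choice is an assumption:

```python
import numpy as np

def weighted_mean_filter(cube, kernel=None):
    """Smooth each spectral band of an (H, W, B) cube with a 3x3 weighted mean.

    Assumes zero padding at the borders; the kernel defaults to a simple
    center-weighted 3x3 mask (an illustrative choice, not the paper's).
    """
    if kernel is None:
        kernel = np.array([[1, 2, 1],
                           [2, 4, 2],
                           [1, 2, 1]], dtype=np.float64)
    kernel = kernel / kernel.sum()          # normalize to a weighted mean
    h, w, b = cube.shape
    padded = np.pad(cube, ((1, 1), (1, 1), (0, 0)))
    out = np.zeros_like(cube, dtype=np.float64)
    # Accumulate the shifted, weighted copies of the cube (one per kernel cell).
    for di in range(3):
        for dj in range(3):
            out += kernel[di, dj] * padded[di:di + h, dj:dj + w]
    return out

# Synthetic 5x5 cube with 4 spectral bands.
rng = np.random.default_rng(2)
cube = rng.normal(size=(5, 5, 4))
smoothed = weighted_mean_filter(cube)
print(smoothed.shape)  # (5, 5, 4)
```

Smoothing each band this way suppresses per-pixel noise before the spatial-spectral neighbor selection, which is the role the filter plays in the cited pipeline.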
“…Basically, there are two kinds of dimensionality reduction methods for HSI: feature extraction and feature selection. Through a feature-space transform, feature extraction projects the original data into a lower-dimensional space using approaches such as principal component analysis (PCA) [15], [16], independent component analysis (ICA) [17], the wavelet transform [18], manifold learning [19], and the maximum noise fraction (MNF) [20]. The resulting data can be assumed to retain most of the spectral and spatial information of the original HSI data.…”
Section: Introduction
confidence: 99%
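As an illustration of the feature-extraction branch mentioned above, here is a minimal PCA sketch that projects an (H, W, B) hyperspectral cube onto its top principal components. The cube sizes and names are synthetic, for illustration only:

```python
import numpy as np

def pca_reduce(cube, n_components):
    """Project an (H, W, B) hyperspectral cube onto its top principal components.

    cube: array of shape (H, W, B) -- B spectral bands per pixel.
    Returns an (H, W, n_components) cube of projected features.
    """
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(np.float64)   # pixels as rows
    X -= X.mean(axis=0)                          # center each band
    # Eigen-decomposition of the band-by-band covariance matrix.
    cov = X.T @ X / (X.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalue order
    top = eigvecs[:, ::-1][:, :n_components]     # top components first
    return (X @ top).reshape(h, w, n_components)

# Synthetic 8x8 cube with 20 spectral bands, reduced to 3 features per pixel.
rng = np.random.default_rng(0)
cube = rng.normal(size=(8, 8, 20))
reduced = pca_reduce(cube, 3)
print(reduced.shape)  # (8, 8, 3)
```

The projected features are ordered by explained variance, which is why PCA-reduced bands can stand in for the original spectrum in downstream classification.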
“…Wang et al [19] proposed selecting representative features hierarchically by means of random projection in an end-to-end neural network, which has shown effectiveness on large-scale data. Very recently, Huang et al [20] addressed the drawbacks of existing spatial-spectral techniques by designing a new spatial-spectral combined distance to select the spatial-spectral neighbors of each HS pixel more effectively. In this combined distance, the pixel-to-pixel distance between two spectral signatures is replaced by a weighted summation of distances over the spatial neighborhoods of the two target pixels.…”
Section: Introduction
confidence: 99%
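The combined-distance idea in the excerpt above can be sketched as follows. This is only an illustrative weighting (equal-size square windows, nearest-spectrum averaging); the paper's actual weighted summation may differ:

```python
import numpy as np

def window(cube, i, j, r):
    """Spectra in the (2r+1)x(2r+1) spatial window around pixel (i, j)."""
    h, w, _ = cube.shape
    return cube[max(i - r, 0):min(i + r + 1, h),
                max(j - r, 0):min(j + r + 1, w)].reshape(-1, cube.shape[2])

def combined_distance(cube, p, q, r=1):
    """Spatial-spectral distance between pixels p and q.

    Instead of a single pixel-to-pixel spectral distance, average the
    distances from each spectrum in p's window to the closest spectrum in
    q's window, and symmetrize (an illustrative weighting scheme; the
    paper defines its own weighted summation).
    """
    wp, wq = window(cube, *p, r), window(cube, *q, r)
    # Pairwise Euclidean distances between the two windows' spectra.
    d = np.linalg.norm(wp[:, None, :] - wq[None, :, :], axis=2)
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

# Synthetic 6x6 cube with 10 bands.
rng = np.random.default_rng(1)
cube = rng.normal(size=(6, 6, 10))
d_near = combined_distance(cube, (2, 2), (2, 3))
d_self = combined_distance(cube, (2, 2), (2, 2))
print(d_self <= d_near)  # True: a pixel's window is closest to itself
```

Because the distance pools over whole neighborhoods rather than single spectra, it is less sensitive to per-pixel noise, which is the motivation the citing paper attributes to Huang et al. [20].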