2019
DOI: 10.3390/rs11091039
Dimensionality Reduction of Hyperspectral Image Using Spatial-Spectral Regularized Sparse Hypergraph Embedding

Abstract: Many graph embedding methods have been developed for dimensionality reduction (DR) of hyperspectral images (HSI), but they only use spectral features to reflect point-to-point intrinsic relations and ignore the complex spatial-spectral structure in HSI. A new DR method termed spatial-spectral regularized sparse hypergraph embedding (SSRHE) is proposed for HSI classification. SSRHE explores sparse coefficients to adaptively select neighbors for constructing the dual sparse hypergraph. Based on the spatial coherence prop…
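The abstract states that SSRHE selects neighbors through sparse coefficients. As a generic illustration only (an l1-regularized reconstruction, not necessarily the paper's exact optimization or solver), sparse-representation neighbor selection could be sketched as follows; the function name and parameters are hypothetical.

```python
# Illustrative only: select neighbors of a sample by sparsely reconstructing it
# from the remaining samples (l1-regularized least squares). This is a generic
# sparse-representation scheme, not the cited paper's exact formulation.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_neighbors(X, i, alpha=0.01, top=10):
    """X: (n, d) spectral samples; returns indices with the largest sparse coefficients for sample i."""
    others = np.delete(np.arange(X.shape[0]), i)
    model = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    model.fit(X[others].T, X[i])                 # dictionary: columns are the other samples
    coef = np.abs(model.coef_)                   # sparse reconstruction coefficients
    return others[np.argsort(coef)[::-1][:top]]  # samples contributing most to the reconstruction
```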

Cited by 17 publications (10 citation statements)
References 48 publications
“…Then, the nearest neighbor (NN) classifier [55] was used for classification. After that, the classification accuracy of each class (CA), overall classification accuracy (OA), average classification accuracy (AA) and kappa coefficient (KC) were adopted to evaluate the performance of the different DR methods [47,52]. Among them, CA is the classification accuracy for each land-cover class, AA is the mean of the per-class classification accuracies, OA is the number of correctly classified samples divided by the total number of test samples, and KC is a statistical measure of consistency between the ground-truth map and the final classification map.…”
Section: Methods
mentioning confidence: 99%
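As a minimal sketch of how the metrics named in the excerpt above (CA, OA, AA, KC) can be computed from classifier output, assuming integer-encoded class labels; the y_true and y_pred arrays are placeholder values, not data from the paper.

```python
# Minimal sketch: per-class accuracy (CA), overall accuracy (OA), average
# accuracy (AA) and kappa coefficient (KC) from predicted vs. true labels.
import numpy as np
from sklearn.metrics import confusion_matrix, cohen_kappa_score

y_true = np.array([0, 0, 1, 1, 2, 2, 2])    # hypothetical ground-truth test labels
y_pred = np.array([0, 1, 1, 1, 2, 2, 0])    # hypothetical labels predicted after DR + NN

cm = confusion_matrix(y_true, y_pred)        # rows: true classes, columns: predictions
ca = np.diag(cm) / cm.sum(axis=1)            # CA: accuracy for each land-cover class
oa = np.diag(cm).sum() / cm.sum()            # OA: correctly classified / total test samples
aa = ca.mean()                               # AA: mean of the per-class accuracies
kc = cohen_kappa_score(y_true, y_pred)       # KC: agreement between maps beyond chance

print(f"CA={np.round(ca, 3)}, OA={oa:.3f}, AA={aa:.3f}, KC={kc:.3f}")
```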
“…According to the spatial distribution consistency of HSI, pixels are generally distributed in blocks, such as Soil, Water, Building and Woods [51]. Therefore, neighboring pixels within a spatial window are more likely to belong to the same class and to lie on the same manifold [52]. To utilize the spatial information in HSI, a spatial-domain weighted intramanifold scatter matrix is designed to characterize the similarity relationship within each submanifold, and a spatial-domain weighted intermanifold scatter matrix is defined to represent the dissimilarity relationship between different submanifolds.…”
Section: Spatial-Domain Multi-Manifold Analysis Model
mentioning confidence: 99%
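The excerpt above describes a spatially weighted within-class scatter. A generic illustration of that idea (not the cited paper's exact weighting, which is not reproduced here) assumes a Gaussian kernel on pixel coordinates so that spatially close same-class pairs contribute more to the scatter:

```python
# Illustrative sketch: spatially weighted within-class ("intramanifold") scatter
# matrix. Pairs of same-class pixels that are close in the spatial domain get
# larger weights. Kernel choice and sigma are assumptions, not the paper's.
import numpy as np

def intramanifold_scatter(X, labels, coords, sigma=1.0):
    """X: (n, d) spectral features; labels: (n,) class ids; coords: (n, 2) pixel positions."""
    d = X.shape[1]
    S = np.zeros((d, d))
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        for i in idx:
            for j in idx:
                # spatial-domain weight: decays with squared pixel distance
                w = np.exp(-np.sum((coords[i] - coords[j]) ** 2) / (2 * sigma ** 2))
                diff = (X[i] - X[j])[:, None]
                S += w * diff @ diff.T           # weighted within-class scatter
    return S
```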
“…[56] Hypergraph learning methods have also been introduced to explore the multiple adjacency relationships in hyperspectral data and to discover the complex geometric structure between hyperspectral images [57]. Discriminant hyper-Laplacian projection [57], semisupervised hypergraph embedding [58], local pixel NPE [59], and spatial-spectral regularized sparse hypergraph embedding [57] are some of the graph embedding methods for dimensionality reduction of hyperspectral images.…”
Section: Dimensionality Processing
mentioning confidence: 99%
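To make the "multiple adjacency relationship" idea concrete, here is a sketch of a standard k-NN hypergraph construction and its normalized Laplacian (the common Zhou-style formulation), offered as background rather than the specific construction of any of the cited methods; the uniform hyperedge weights are an assumption.

```python
# Illustrative sketch: k-NN hypergraph incidence matrix and normalized Laplacian.
# Each sample spawns one hyperedge joining itself and its k nearest neighbors.
import numpy as np
from scipy.spatial.distance import cdist

def hypergraph_laplacian(X, k=5):
    """X: (n, d) samples; returns the normalized hypergraph Laplacian (n x n)."""
    n = X.shape[0]
    D = cdist(X, X)                              # pairwise spectral distances
    H = np.zeros((n, n))                         # incidence matrix: vertices x hyperedges
    for e in range(n):
        nbrs = np.argsort(D[e])[:k + 1]          # the sample itself plus its k neighbors
        H[nbrs, e] = 1.0
    w = np.ones(n)                               # uniform hyperedge weights (assumption)
    Dv = H @ w                                   # vertex degrees
    De = H.sum(axis=0)                           # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(Dv))
    Theta = Dv_inv_sqrt @ H @ np.diag(w / De) @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta                     # L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}
```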
“…In recent years, scholars have put forward many DR methods, which can be divided into two categories: linear dimensionality reduction (LDR) algorithms and manifold dimensionality reduction (MDR) algorithms [6]. The former includes principal component analysis (PCA) [7], linear discriminant analysis (LDA) [8], and independent component analysis (ICA) [9], among others. These methods project images into a low-dimensional space through a linear transformation and seek the optimal projection.…”
Section: Introduction
mentioning confidence: 99%
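As a minimal sketch of the linear-projection idea described above, applied to an HSI cube with PCA; the cube dimensions and the number of retained components are arbitrary placeholder values.

```python
# Minimal sketch: linear DR of a hyperspectral cube with PCA,
# assuming a (rows, cols, bands) array; the data here is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA

cube = np.random.rand(100, 100, 200)             # hypothetical HSI: 100x100 pixels, 200 bands
X = cube.reshape(-1, cube.shape[-1])             # flatten to (pixels, bands)

pca = PCA(n_components=30)                       # keep 30 components (arbitrary choice)
X_low = pca.fit_transform(X)                     # linear projection to the low-dimensional space
cube_low = X_low.reshape(100, 100, 30)

print(cube_low.shape, pca.explained_variance_ratio_.sum())
```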