2015
DOI: 10.1007/s11760-015-0808-y
Sparse matrix transform-based linear discriminant analysis for hyperspectral image classification

Cited by 13 publications (5 citation statements)
References 20 publications
“…In the dimensionality reduction process of principal component analysis, accurate estimation of the mutually orthogonal eigenvector matrices is required, and this step is particularly critical for high-dimensional data matrices. Assume that the high-dimensional matrix is represented by X, where X has m vectors, each of dimension p. The covariance matrix of X is A, and an unbiased estimate of A is S. To improve the accuracy of the maximum likelihood estimation of the orthogonal eigenvector matrix E after the covariance matrix eigendecomposition, Cao and Bouman [20] of Purdue University, USA, proposed a transformation [20,21] that represents the orthogonal transformation matrix as the product of a series of Givens rotations, E = G₁G₂⋯…”
Section: Sparse Matrix Transformationmentioning
confidence: 99%
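The statement above describes the sparse matrix transform's key idea: factoring the orthogonal eigenvector matrix E as a product of Givens rotations. A minimal sketch of that structure is below; the `givens` helper and the random rotation schedule are illustrative assumptions, not Cao and Bouman's actual estimation procedure, which selects each rotation greedily to maximize the likelihood of the sample covariance.

```python
import numpy as np

def givens(p, i, j, theta):
    """Return a p x p Givens rotation acting on coordinates i and j."""
    G = np.eye(p)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i] = c
    G[j, j] = c
    G[i, j] = -s
    G[j, i] = s
    return G

# Build E = G1 G2 ... GK from a few rotations (hypothetical schedule).
# Any such product is orthogonal, which is what makes the SMT a valid
# parameterization of an eigenvector matrix.
p = 4
rng = np.random.default_rng(0)
E = np.eye(p)
for _ in range(6):
    i, j = rng.choice(p, size=2, replace=False)
    E = E @ givens(p, int(i), int(j), rng.uniform(0.0, np.pi))
```

Because each factor is orthogonal, `E.T @ E` recovers the identity regardless of how the rotations are chosen; the SMT's sparsity comes from using only a small number K of rotations rather than a dense eigendecomposition.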
“…LDA is a commonly used method for classification [37][38][39]. Suppose a set of N samples {x₁, x₂, …”
Section: Ldamentioning
confidence: 99%
“…PCA has been widely used for HSI analysis to obtain principal component features. Linear Discriminant Analysis (LDA) is a classic supervised method for obtaining strongly discriminant features from HSI (Peng and Luo 2016). LDA uses a priori class labels to maximize the between-class variance and minimize the within-class variance, which separates inter-class samples and compacts intra-class samples.…”
Section: Introductionmentioning
confidence: 99%
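The between-class/within-class criterion described in the statement above can be sketched with the standard LDA scatter matrices. This is a generic illustration on hypothetical toy data, not the paper's SMT-based variant; the function name and the small ridge term are assumptions for this sketch.

```python
import numpy as np

def lda_scatter(X, y):
    """Within-class (Sw) and between-class (Sb) scatter matrices."""
    mu = X.mean(axis=0)
    d = X.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[y == c]
        mu_c = Xc.mean(axis=0)
        Sw += (Xc - mu_c).T @ (Xc - mu_c)      # scatter within class c
        diff = (mu_c - mu).reshape(-1, 1)
        Sb += Xc.shape[0] * (diff @ diff.T)    # scatter of class means
    return Sw, Sb

# Hypothetical toy data: two classes in 2-D
X = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y = np.array([0, 0, 1, 1])
Sw, Sb = lda_scatter(X, y)

# LDA direction: leading eigenvector of Sw^{-1} Sb
# (a small ridge keeps Sw invertible on this tiny example)
evals, evecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(2), Sb))
w = np.real(evecs[:, np.argmax(np.real(evals))])
z = X @ w  # 1-D projections; the two classes end up well separated
```

Maximizing the ratio of between-class to within-class scatter is exactly the criterion the quoted statement describes: the projection `z` pushes the class means apart while keeping each class's samples close together.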