2008
DOI: 10.1109/tgrs.2008.2001169

Regression Approaches to Small Sample Inverse Covariance Matrix Estimation for Hyperspectral Image Classification

Abstract: A key component of most parametric classifiers is the estimation of an inverse covariance matrix. In hyperspectral images the number of bands can be in the hundreds, leading to covariance matrices with tens of thousands of elements. Recently, general linear regression models for estimating the inverse covariance matrix have been introduced in the time-series literature. This paper adopts and extends these ideas to ill-posed hyperspectral image classification problems. The results indicate that…
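The abstract does not reproduce the estimator itself. Below is a minimal sketch of the sequential-regression (modified Cholesky) construction that this line of work builds on, in which band j is regressed on bands 1..j-1 and the precision matrix is assembled as Sigma^{-1} = T' D^{-1} T; the `ridge` parameter is a hypothetical regularizer added here so the regressions stay solvable when samples are fewer than bands, not a detail taken from the paper.

```python
import numpy as np

def regression_precision(X, ridge=1e-3):
    """Sketch: inverse covariance via sequential regressions (modified
    Cholesky).  Regressing band j on bands 0..j-1 stores the negated
    coefficients in a unit lower-triangular T and the residual variances
    in a diagonal D, so that Sigma^{-1} = T' D^{-1} T."""
    n, d = X.shape
    Xc = X - X.mean(axis=0)            # center each band
    T = np.eye(d)
    D = np.empty(d)
    D[0] = Xc[:, 0] @ Xc[:, 0] / n     # variance of the first band
    for j in range(1, d):
        Z, y = Xc[:, :j], Xc[:, j]
        # Ridge-regularized least squares keeps the normal equations
        # solvable in the small-sample (n < d) regime; `ridge` is a
        # hypothetical tuning constant, not from the paper.
        beta = np.linalg.solve(Z.T @ Z + ridge * np.eye(j), Z.T @ y)
        T[j, :j] = -beta
        resid = y - Z @ beta
        D[j] = resid @ resid / n       # residual variance of regression j
    return T.T @ (T / D[:, None])      # = T' D^{-1} T
```

With, say, 40 samples of 60 bands the sample covariance is singular, yet the sketch still returns a full-rank precision estimate, which is exactly the ill-posed regime the abstract targets.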

Cited by 16 publications (9 citation statements, published 2010–2021); references 19 publications. Citation statements, ordered by relevance:
“…It is impractical and too demanding, however, to model hyperspectral data directly by cokriging [28], since cokriging requires solving (n + 1) · d linear equations for n data points with d dimensions, and the system becomes sensitive to noise due to the greatly increased number of parameters. There is also a broad literature on estimating the covariance matrix when faced with inadequate amounts of training data [47], [48] that can be applied if one desires to learn full covariance matrices. Estimating nonstationary covariance functions using local estimation methods as in [27] could also be considered, but it would significantly increase the complexity of the proposed framework.…”
Section: A GP-ML Framework (citation type: mentioning; confidence: 99%)
“…Another class of approaches directly produces a regularized estimate of the precision matrix from training samples. Examples in this class include methods based on shrinkage [21]-[23], factor-model-based methods [1], [24], regression-analysis-based column-by-column methods [25]-[27], and penalized likelihood [28], [29].…”
Section: Introduction (citation type: mentioning; confidence: 99%)
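Two of the regularization families listed in this excerpt, shrinkage and penalized likelihood, have stock implementations in scikit-learn; a brief illustration follows (the alpha value below is arbitrary, chosen only for demonstration):

```python
import numpy as np
from sklearn.covariance import LedoitWolf, GraphicalLasso

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 20))   # 50 samples, 20 bands: a small-sample regime

# Shrinkage (Ledoit-Wolf): blends the sample covariance with a scaled
# identity, so the estimate is invertible even when n is small.
precision_shrink = LedoitWolf().fit(X).precision_

# Penalized likelihood (graphical lasso): an L1 penalty on the precision
# matrix yields a sparse conditional-dependence structure.
precision_sparse = GraphicalLasso(alpha=0.1).fit(X).precision_
```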
“…While shrinkage is the most common regularization scheme, sparse [15], [16] and sparse-transform [17]-[20] methods have also been proposed. To deal with the non-Gaussian nature of most data, both robust [21] and anti-robust [22] estimators have been proposed.…”
Section: Introduction (citation type: mentioning; confidence: 99%)
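The plain shrinkage scheme mentioned first in this excerpt can be written in a few lines; a sketch assuming a scaled-identity target, with `lam` a hypothetical mixing weight (data-driven choices such as Ledoit-Wolf exist):

```python
import numpy as np

def shrinkage_precision(X, lam=0.1):
    """Shrink the sample covariance toward a scaled identity and invert.
    The blend is full rank for any lam in (0, 1], so the inverse exists
    even when there are fewer samples than bands."""
    d = X.shape[1]
    S = np.cov(X, rowvar=False)               # d x d sample covariance
    target = (np.trace(S) / d) * np.eye(d)    # scaled-identity target
    return np.linalg.inv((1.0 - lam) * S + lam * target)
```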