2014
DOI: 10.1109/lgrs.2013.2290531

Structured Priors for Sparse-Representation-Based Hyperspectral Image Classification

Abstract: Pixel-wise classification, where each pixel is assigned to a predefined class, is one of the most important procedures in hyperspectral image (HSI) analysis. By representing a test pixel as a linear combination of a small subset of labeled pixels, a sparse representation classifier (SRC) gives rather plausible results compared with those of traditional classifiers such as the support vector machine (SVM). Recently, by incorporating additional structured sparsity priors, the second-generation SRCs have …



Cited by 92 publications (8 citation statements)
References 21 publications (24 reference statements)
“…A sparse modelling dictionary-based approach has been applied in previous HSI classification methods (Sun et al. 2014; Castrodad et al. 2011; Chen, Nasrabadi, and Tran 2011), where different types of sparsity constraints have been included in the corresponding dictionary modelling cost functions. The general idea behind these methods is to learn a separate dictionary for each class from labelled data (Sun et al. 2014; Castrodad et al. 2011), or to use the labelled data per se to form dictionaries (Chen, Nasrabadi, and Tran 2011), and to classify unknown pixels by determining which class-specific dictionary best describes the sample in terms of the minimum reconstruction error.…”
Section: Supervised Methods
confidence: 99%
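The class-wise reconstruction scheme described in the statement above can be sketched as follows. This is a minimal illustration, not the method of any cited paper: plain least squares stands in for the sparse coding step used in the SRC literature, and all names and dimensions are hypothetical.

```python
import numpy as np

def classify_pixel(x, class_dicts):
    """Assign pixel spectrum x to the class whose dictionary
    reconstructs it with the smallest residual error.

    class_dicts: one (bands x atoms) array per class. Least squares
    is a stand-in for the sparse solver (e.g. OMP) used in practice.
    """
    errors = []
    for D in class_dicts:
        alpha, *_ = np.linalg.lstsq(D, x, rcond=None)  # code x over D
        errors.append(np.linalg.norm(x - D @ alpha))   # residual error
    return int(np.argmin(errors))                      # best-fitting class

# Toy example: two classes with random spectral atoms (hypothetical data).
rng = np.random.default_rng(0)
D0 = rng.normal(size=(50, 5))   # dictionary for class 0
D1 = rng.normal(size=(50, 5))   # dictionary for class 1
x = D1 @ rng.normal(size=5)     # pixel lying in the span of class 1
print(classify_pixel(x, [D0, D1]))  # → 1
```

Because the test pixel lies exactly in the span of the class-1 atoms, its class-1 residual is essentially zero while the class-0 residual is not, so the minimum-error rule recovers the correct label.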
“…The general idea behind these methods is to learn a separate dictionary for each class from labelled data (Sun et al. 2014; Castrodad et al. 2011), or to use the labelled data per se to form dictionaries (Chen, Nasrabadi, and Tran 2011), and to classify unknown pixels by determining which class-specific dictionary best describes the sample in terms of the minimum reconstruction error. The aforementioned methods use labelled training samples (or at least assume that the classes present in the data set are known a priori (Castrodad et al. 2011)) to obtain class-specific dictionaries, whereas our method uses only unlabelled samples and no prior information to learn a single general dictionary.…”
Section: Supervised Methods
confidence: 99%