2015
DOI: 10.1109/tgrs.2014.2325067

Spatial-Aware Dictionary Learning for Hyperspectral Image Classification

Abstract: This paper presents a structured dictionary-based model for hyperspectral data that incorporates both spectral and contextual characteristics of a spectral sample, with the goal of hyperspectral image classification. The idea is to partition the pixels of a hyperspectral image into a number of spatial neighborhoods called contextual groups and to model each pixel with a linear combination of a few dictionary elements learned from the data. Since pixels inside a contextual group are often made up of the same ma…
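A minimal sketch of the abstract's core idea: representing each pixel spectrum as a sparse linear combination of a few learned dictionary atoms. It uses scikit-learn's generic DictionaryLearning on synthetic spectra rather than the paper's spatial-aware (SADL) formulation; the data, dimensions, and sparsity level are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)
n_pixels, n_bands, n_atoms = 200, 100, 30

# Stand-in for hyperspectral pixel spectra (rows = pixels, columns = bands).
X = rng.standard_normal((n_pixels, n_bands))

# Learn a dictionary and sparse codes: each pixel is approximated by a few atoms.
dico = DictionaryLearning(
    n_components=n_atoms,
    transform_algorithm="omp",     # sparse coding via orthogonal matching pursuit
    transform_n_nonzero_coefs=5,   # "a few dictionary elements" per pixel
    max_iter=20,
    random_state=0,
)
codes = dico.fit_transform(X)                  # shape (n_pixels, n_atoms), mostly zeros
reconstruction = codes @ dico.components_      # approximate spectra
print("mean nonzeros per pixel:", (codes != 0).sum(axis=1).mean())
```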

Cited by 119 publications (48 citation statements)
References 53 publications
“…In particular, sparsity based regularization has achieved great success, offering solutions that outperform classical approaches in various image and signal processing applications. Among the others, we can mention inverse problems such as denoising [35,36], reconstruction [22,37], classification [38], recognition [39,40], and compression [41,42]. The underlying assumption of methods based on sparse representation is that signals such as audio and images are naturally generated by a multivariate linear model, driven by a small number of basis or regressors.…”
Section: Sparse Image Representation (mentioning)
confidence: 99%
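A hedged illustration of the sparse-representation assumption quoted above: a signal y generated by a linear model y = D x with only a few active regressors, whose support can be recovered by orthogonal matching pursuit. The dictionary, dimensions, and sparsity level here are assumptions made for demonstration, not drawn from the cited works.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(1)
n_features, n_atoms, sparsity = 80, 120, 4

D = rng.standard_normal((n_features, n_atoms))   # overcomplete set of basis vectors / regressors
x_true = np.zeros(n_atoms)
support = rng.choice(n_atoms, sparsity, replace=False)
x_true[support] = rng.standard_normal(sparsity)
y = D @ x_true                                   # signal generated by a few regressors

omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity)
omp.fit(D, y)
print("true support:     ", np.sort(support))
print("recovered support:", np.flatnonzero(omp.coef_))
```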
“…The fixed training set is challenging because it is made up of small patches, and most of the patches in an HSI contain no training samples. The proposed method is compared with SVM with Composite Kernel (SVM-CK) [70], SADL [40], Simultaneous Orthogonal Matching Pursuit (SOMP) [70], Learning Sparse-Representation-based Classification with Kernel-smoothed regularization (LSRC-K) [26], MPM-LBP and SMLP-SpATV. The experimental results are given in Table 6, where some results come from the related references (parameters are given in the forth column of the Table 2).…”
Section: Experiments With the ROSIS University of Pavia Dataset (mentioning)
confidence: 99%
“…The parameters of SVM (linear-kernel parameter , d, and regularization parameter C) are obtained by cross-validation. The parameters of SADL are set in the same way as in [10]. The parameters of SLDL are set according to our experience.…”
Section: Experiments and Analysis (mentioning)
confidence: 99%
“…In order to solve this problem, a spatial-aware dictionary learning (SADL) algorithm was proposed in [6]. The algorithm divides hyperspectral images into groups of multiple rectangles, assuming that the same group of pixels consists of the same material.…”
Section: Introduction (mentioning)
confidence: 99%
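To make the grouping idea in the statement above concrete, here is a hedged sketch that tiles a hyperspectral cube into non-overlapping rectangular spatial groups and collects each group's pixel spectra. The tile size, image dimensions, and random data are assumptions for illustration, not the settings used in [6].

```python
import numpy as np

def contextual_groups(hsi, tile=(8, 8)):
    """Split an HSI cube (rows, cols, bands) into rectangular spatial groups of pixel spectra."""
    rows, cols, bands = hsi.shape
    th, tw = tile
    groups = []
    for r in range(0, rows, th):
        for c in range(0, cols, tw):
            block = hsi[r:r + th, c:c + tw, :]       # one spatial rectangle
            groups.append(block.reshape(-1, bands))  # pixels x bands within the group
    return groups

hsi = np.random.default_rng(2).random((32, 32, 103))  # synthetic cube, Pavia-like band count
groups = contextual_groups(hsi)
print(len(groups), "groups; first group shape:", groups[0].shape)
```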