2023
DOI: 10.1117/1.jrs.17.026509

HFC-SST: improved spatial-spectral transformer for hyperspectral few-shot classification

Abstract: Owing to the complex environment of hyperspectral image (HSI) collection areas, it is difficult to obtain a large number of labeled samples for HSI. Recently, many few-shot learning (FSL) algorithms based on convolutional neural networks (CNNs) have been employed for HSI classification in scenarios with small-scale training samples. However, a CNN-based model is unsuitable for modeling spatial-spectral information with long-range dependency. The transformer has proved its superiority in modeling long-range dependencies…
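To make the abstract's contrast concrete: a convolution only mixes information within its local receptive field, whereas self-attention lets every position in a patch attend to every other position in one step. The PyTorch sketch below shows that general token-sequence idea only; it is not the HFC-SST architecture, and the patch size, band count, and model width are arbitrary values chosen for illustration.

import torch
import torch.nn as nn

# Assumed toy dimensions: a 9x9 spatial patch with 100 spectral bands.
patch = torch.randn(1, 100, 9, 9)          # (batch, bands, height, width)

# Flatten each spatial position into a token whose features are its spectrum,
# so attention can relate any two pixels regardless of their distance.
tokens = patch.flatten(2).transpose(1, 2)  # (batch, 81 tokens, 100 features)

embed = nn.Linear(100, 64)                 # project each spectrum to the model width
layer = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
encoder = nn.TransformerEncoder(layer, num_layers=2)

out = encoder(embed(tokens))               # every token attends to every other token
print(out.shape)                           # torch.Size([1, 81, 64])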

Cited by 5 publications (4 citation statements) · References 30 publications (63 reference statements)
“…Their method achieved state-of-the-art results on few-shot hyperspectral tasks using public datasets, demonstrating the potential of transformers to advance few-shot learning in this domain. Huang et al. (2023) also recognized limitations of CNN-based models for few-shot hyperspectral image classification. They highlighted the inherent difficulty of CNNs in effectively capturing long-range spatial-spectral dependencies, especially in scenarios with limited training data.…”
Section: Few-shot Learning in Hyperspectral Image Classification
Confidence: 98%
“…The self-attention mechanism is a typical spatial attention module that has recently become well known in image processing. It captures long-range contextual information to obtain discriminative feature representations. 54 It works by establishing global dependencies through the relationships among elements of the input sequence.…”
Section: Related Work
Confidence: 99%
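The excerpt above describes the mechanism in words; the following minimal NumPy sketch shows the arithmetic of scaled dot-product self-attention. Using the input directly as queries, keys, and values is a simplifying assumption (real transformers learn those projections), and all names and sizes here are invented for the example.

import numpy as np

def self_attention(X):
    # X: (n, d) sequence of n tokens with d features each.
    n, d = X.shape
    Q, K, V = X, X, X                              # assumption: identity projections
    scores = Q @ K.T / np.sqrt(d)                  # similarity of every token pair
    scores -= scores.max(axis=1, keepdims=True)    # numerical stability for softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)  # row-wise softmax: attention weights
    return weights @ V                             # each output mixes the whole sequence

X = np.random.randn(5, 8)                          # 5 tokens, 8 features
print(self_attention(X).shape)                     # (5, 8): globally contextualized tokens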
“…The self-attention mechanism captures long-range contextual information to obtain discriminative feature representations. 54 It works by establishing global dependencies through the relationships among elements of the input sequence. Sun et al. 28 and Li et al. 29 used a self-attention mechanism to suppress the influence of interference pixels.…”
Section: Introduction
Confidence: 99%
“…When the feature types in the classification scene are very complex, the classification results often suffer from problems of spatial homogeneity and heterogeneity, for which many solutions incorporating spatial information into the classification have been proposed [42]-[44]. Su et al. [45] proposed a collaborative representation classification model with multifeature fusion dictionary learning (MCRC-DL), which simultaneously considers the spectral, local, global, and morphological features of the data and uses the resulting representation coefficients to determine the predicted class.…”
Section: Introduction
Confidence: 99%
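For context on the excerpt above: collaborative representation classification, the base technique that MCRC-DL extends, solves one ridge regression over the whole training dictionary and predicts the class whose samples reconstruct the test sample with the smallest residual. The sketch below is that plain base method under assumed toy data, not Su et al.'s multifeature fusion dictionary learning; the function name and dimensions are invented.

import numpy as np

def crc_predict(X, labels, y, lam=1e-2):
    # X: (d, n) dictionary whose columns are training samples; labels: (n,); y: (d,) test sample.
    n = X.shape[1]
    # Representation coefficients from ridge regression over ALL classes jointly.
    alpha = np.linalg.solve(X.T @ X + lam * np.eye(n), X.T @ y)
    best_cls, best_res = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        res = np.linalg.norm(y - X[:, mask] @ alpha[mask])  # class-wise reconstruction error
        if res < best_res:
            best_cls, best_res = c, res
    return best_cls

# Toy usage: two well-separated classes of 10-dimensional samples.
rng = np.random.default_rng(0)
X = np.hstack([rng.normal(0.0, 1.0, (10, 5)), rng.normal(3.0, 1.0, (10, 5))])
labels = np.array([0] * 5 + [1] * 5)
print(crc_predict(X, labels, X[:, 7]))  # a class-1 column, so this should print 1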