2021
DOI: 10.3390/app11188670

Contrastive Learning Based on Transformer for Hyperspectral Image Classification

Abstract: Recently, deep learning has achieved breakthroughs in hyperspectral image (HSI) classification. Deep-learning-based classifiers require a large number of labeled samples for training to provide excellent performance. However, the availability of labeled data is limited due to the significant human resources and time costs of labeling hyperspectral data. Unsupervised learning for hyperspectral image classification has thus received increasing attention. In this paper, we propose a novel unsupervised framework b…

Cited by 36 publications (15 citation statements)
References 16 publications
“…Furthermore, our proposed method incorporates spatial relations into the contrastive objective. As a result, the proposed ConGCN can produce more effective feature representations than ResNet-50 in [12] and the transformer model in [13]. The advantage of our ConGCN has also been empirically demonstrated in Section VII.…”
Section: Introduction
confidence: 81%
“…To the best of our knowledge, there have been two works [12], [13] employing contrastive learning for HSI classification. However, they simply follow the traditional paradigm of contrastive learning, using unlabeled examples for pretraining and then fine-tuning the model with a few labeled examples.…”
Section: Introduction
confidence: 99%
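The pretrain-then-fine-tune paradigm this quote refers to can be made concrete with a short sketch. This is a minimal illustration of the general recipe, not the cited authors' implementation: the MLP encoder, the spectral-noise augmentation, the batch and layer sizes, and the synthetic tensors are all hypothetical stand-ins.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sizes: 200 spectral bands, 64-d features, 9 land-cover classes.
bands, feat_dim, num_classes = 200, 64, 9
encoder = nn.Sequential(nn.Linear(bands, 128), nn.ReLU(), nn.Linear(128, feat_dim))

# Stage 1: contrastive pretraining on unlabeled spectra (labels never touched).
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)
unlabeled = torch.randn(32, bands)            # stand-in for unlabeled HSI pixels
n = unlabeled.size(0)
for _ in range(100):
    # Two stochastic "views" of each sample; here, simple spectral noise.
    v1 = encoder(unlabeled + 0.05 * torch.randn_like(unlabeled))
    v2 = encoder(unlabeled + 0.05 * torch.randn_like(unlabeled))
    z = F.normalize(torch.cat([v1, v2]), dim=1)          # (2n, feat_dim) unit vectors
    logits = z @ z.t() / 0.5                             # scaled cosine similarity
    logits = logits.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])  # positive = other view
    loss = F.cross_entropy(logits, targets)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: train a small classifier on the few labeled samples, encoder frozen.
classifier = nn.Linear(feat_dim, num_classes)
x_lab = torch.randn(5 * num_classes, bands)   # e.g., 5 labeled samples per class
y_lab = torch.arange(num_classes).repeat(5)
opt2 = torch.optim.Adam(classifier.parameters(), lr=1e-3)
for _ in range(100):
    with torch.no_grad():
        feats = encoder(x_lab)
    loss = F.cross_entropy(classifier(feats), y_lab)
    opt2.zero_grad(); loss.backward(); opt2.step()
```

Freezing the encoder in stage 2 matches the few-label setting the quote describes; jointly fine-tuning the encoder on the labeled data is a common variant of the same paradigm.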
“…We can see that our model also achieves state-of-the-art performance, with the highest OA (overall accuracy) and the highest AA (average accuracy). It performs best on overlapping data (such as classes 10, 11, 12, and 13, which other methods may not distinguish well). Our method obtains its highest performance from the contrastive features.…”
Section: Detailed Contrastive Learning Experiments Of Other Datasets
confidence: 99%
“…Conversely, contrastive learning pays more attention to semantic information but ignores contextual details. By combining representation learning and contrastive learning, methods such as ContrastNet [2], transformer-based contrastive learning [11], and spatial-spectral clustering [12] show remarkable potential for reducing dependency on labeled datasets and achieving state-of-the-art performance. However, the representational ability of their features could still be improved.…”
Section: Introduction
confidence: 99%
“…Its key idea is to use a discriminative learning approach to learn encoded feature representations in which similar sample pairs remain close together, whereas dissimilar sample pairs remain far apart. It has been successfully applied in many computer vision tasks, such as image classification [17] and human activity recognition [18,19].…”
Section: Introduction
confidence: 99%
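The "similar pairs close, dissimilar pairs apart" idea described in this quote is typically realized with an InfoNCE (NT-Xent) objective, as popularized by SimCLR. The sketch below assumes that formulation; the function name and the temperature default are illustrative, not drawn from the cited paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent / InfoNCE loss over two augmented views of the same n samples.

    Row i of z1 and row i of z2 form a positive pair; every other row in the
    2n-sample batch serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2n, d) unit vectors
    logits = z @ z.t() / temperature                     # pairwise cosine similarities
    # A sample must not count as its own positive.
    logits = logits.masked_fill(torch.eye(2 * n, dtype=torch.bool), float("-inf"))
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(logits, targets)
```

Minimizing this cross-entropy raises each positive pair's similarity while pushing down the similarities of the 2n − 2 negatives, which is exactly the geometry the quote describes.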