2023
DOI: 10.3390/rs15061713

Nearest Neighboring Self-Supervised Learning for Hyperspectral Image Classification

Abstract: Recently, state-of-the-art classification performance on natural images has been obtained by self-supervised learning (S2L), as it can generate latent features through learning between different views of the same images. However, the latent semantic information of similar images has hardly been exploited by these S2L-based methods. Consequently, to explore the potential of S2L between similar samples in hyperspectral image classification (HSIC), we propose the nearest neighboring self-supervised learning (N2SSL…
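The abstract describes S2L as learning latent features by pulling together representations of different views of the same image. As a rough illustration only (not the paper's actual N2SSL implementation), the BYOL-style objective this family of methods builds on can be sketched as a negative-cosine loss between an online prediction and a target projection; all names and the toy data below are illustrative assumptions:

```python
import math
import random

def byol_style_loss(p, z):
    """Negative-cosine-style loss between an online prediction p and a
    target projection z (plain lists of floats): 0 when the two views'
    embeddings agree, up to 4 when they point in opposite directions."""
    def norm(v):
        return math.sqrt(sum(x * x for x in v))
    cos = sum(a * b for a, b in zip(p, z)) / (norm(p) * norm(z))
    return 2.0 - 2.0 * cos

random.seed(0)
anchor = [random.gauss(0, 1) for _ in range(16)]
# A mildly augmented "view" of the same sample vs. an unrelated sample.
near = [x + 0.05 * random.gauss(0, 1) for x in anchor]
far = [random.gauss(0, 1) for _ in range(16)]

loss_near = byol_style_loss(anchor, near)  # small: views of the same sample
loss_far = byol_style_loss(anchor, far)    # larger: unrelated samples
```

The paper's contribution, per the abstract, is to extend this pairing beyond augmented views of the *same* sample to nearest-neighboring (similar) samples, which the plain objective above does not capture.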

Cited by 8 publications
(2 citation statements)
References 70 publications
“…The evaluation of the model could be improved by incorporating a wider range of contrastive learning algorithms. BYOL is a generic contrastive learning algorithm, and its advantages over other algorithms have been extensively proved [23,31,51]. Future studies should be performed to develop a specific contrastive learning algorithm in renal pathological image analysis.…”
Section: Discussion
confidence: 99%
“…Performance improvement: BYOL has demonstrated superior performance [31] and robustness to batch size variations [32] on ImageNet compared to several contrastive learning methods, including SimCLR and MoCo.…”
confidence: 99%