2022
DOI: 10.1109/mgrs.2022.3198244

Self-Supervised Learning in Remote Sensing: A Review

Abstract: In deep learning research, self-supervised learning (SSL) has received great attention, triggering interest within both the computer vision and remote sensing communities. While it has seen great success in computer vision, most of the potential of SSL in the domain of Earth observation remains locked. In this paper, we provide an introduction to, and a review of, the concepts and latest developments in SSL for computer vision in the context of remote sensing. Further, we provide a preliminary benchmark of mo…

Cited by 105 publications (73 citation statements)
References 273 publications
“…Advances in deep learning (DL) have resulted in a set of tools for earth monitoring [4,5]. For instance, DL models have shown their advantages for land cover classification tasks [6] and segmentation of floods [7].…”
Section: Motivation
confidence: 99%
“…Readers are invited to read a recent survey on self-supervised and semi-supervised approaches applied to the remote sensing segmentation task in [102]. For a broader overview of self-supervised approaches across different remote sensing tasks, we refer them to another very recent preprint [103].…”
Section: Xu et al. 2021 [101]
confidence: 99%
“…It is an eight-label classification task in which each image patch is defined to have up to eight adjacent patches. However, the feature representations learned by generative and discriminative methods are highly dependent on the pretext task used, and an ineffective pretext task might reduce the transferability of a pre-trained model [45]. Instead of solving a single pretext task, contrastive approaches train models by maximising the similarity between the feature representations of two positive samples, e.g., two augmented views of the same instance.…”
Section: Introduction
confidence: 99%
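
The two-view contrastive objective this excerpt describes can be made concrete with a short sketch. The snippet below is illustrative and not taken from the cited papers: the toy Encoder, the noise-based stand-in augmentations, and all dimensions are assumptions. It shows an InfoNCE-style loss that maximises the similarity between the representations of two augmented views of the same instances while contrasting them against the rest of the batch.

```python
# Minimal sketch (illustrative, not from the cited papers): a contrastive
# training step that maximises similarity between two augmented views.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy encoder standing in for a real backbone (e.g., a ResNet)."""
    def __init__(self, in_dim=128, feat_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, feat_dim)
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm features

def contrastive_loss(z1, z2, temperature=0.1):
    """InfoNCE over a batch: each view's positive is the other view of the
    same instance; all other instances in the batch act as negatives."""
    logits = z1 @ z2.t() / temperature      # (B, B) cosine-similarity matrix
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

encoder = Encoder()
x = torch.randn(16, 128)                    # a batch of "images"
# stand-in augmentations: two independently perturbed views of each instance
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)
loss = contrastive_loss(encoder(view1), encoder(view2))
loss.backward()
```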
“…Instead of solving a single pretext task, contrastive approaches train models by maximising the similarity between the feature representations of two positive samples, e.g., two augmented views of the same instance. However, naively following this approach can easily produce a trivial identity mapping for every pair of positive samples, i.e., model collapse [45]. DINO [6], a state-of-the-art contrastive self-supervised model, handles this issue with knowledge distillation together with centering and sharpening of the teacher network's outputs [23].…”
Section: Introduction
confidence: 99%
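
To make the collapse-avoidance mechanism concrete, here is a hedged sketch of the centering-and-sharpening idea applied to teacher outputs in DINO-style training. It is not DINO's actual implementation; the class name, temperatures, and momentum value are illustrative assumptions. Centering subtracts a running mean of the teacher logits (discouraging any single dimension from dominating), while a low teacher temperature sharpens the output distribution (discouraging the uniform solution); the two effects together prevent collapse.

```python
# Hedged sketch (illustrative constants and names) of DINO-style centering
# and sharpening of the teacher's outputs to avoid model collapse.
import torch
import torch.nn.functional as F

class TeacherPostprocess:
    def __init__(self, out_dim, center_momentum=0.9, teacher_temp=0.04):
        self.center = torch.zeros(out_dim)  # running mean of teacher logits
        self.m = center_momentum
        self.temp = teacher_temp

    @torch.no_grad()
    def __call__(self, teacher_logits):
        # center (subtract running mean), then sharpen (low temperature)
        probs = F.softmax((teacher_logits - self.center) / self.temp, dim=-1)
        # update the running center from the current batch
        self.center = self.m * self.center \
            + (1 - self.m) * teacher_logits.mean(dim=0)
        return probs

post = TeacherPostprocess(out_dim=256)
teacher_out = torch.randn(16, 256)          # teacher logits for one batch
student_logits = torch.randn(16, 256, requires_grad=True)
target = post(teacher_out)                  # centered, sharpened targets
# the student is trained to match the teacher's sharpened distribution
loss = -(target * F.log_softmax(student_logits / 0.1, dim=-1)).sum(dim=-1).mean()
loss.backward()
```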