2021
DOI: 10.1109/tkde.2021.3090866

Self-supervised Learning: Generative or Contrastive

Cited by 817 publications (513 citation statements)
References 62 publications
“…Moreover, fine-tuning with labeled samples can follow a self-supervised representation/metric learning procedure, in which positive and negative pairs are generated automatically from a large amount of unlabeled images and used to train a deep learning backbone (e.g., ResNet) to minimize the feature distance between positive pairs and maximize it between negative pairs [153,154]. Researchers [155] proposed a metric learning model named discriminative CNN (D-CNN) and obtained state-of-the-art performance on three public RS datasets for scene classification, showing the promise of self-supervised learning approaches for land-cover mapping tasks.…”
Section: Model Fine-tuning (mentioning)
confidence: 99%
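The contrastive metric-learning procedure described in the excerpt above can be sketched in a few lines of PyTorch. This is a minimal illustration, not the method of [153-155]: it assumes a margin-based pairwise loss (in the style of Hadsell et al.) and a torchvision ResNet-18 backbone; the pair-mining scheme, augmentations, and loss function vary across the cited works.

```python
# Minimal sketch of contrastive metric learning on unlabeled image pairs.
# Assumptions (not from the cited papers): margin-based pairwise loss,
# torchvision ResNet-18 backbone, L2-normalized embeddings.
import torch
import torch.nn as nn
import torchvision.models as models


class ContrastiveBackbone(nn.Module):
    """ResNet encoder trained so that positive pairs land close together
    in embedding space and negative pairs land far apart."""

    def __init__(self, embed_dim: int = 128):
        super().__init__()
        resnet = models.resnet18(weights=None)  # no labels needed for SSL
        resnet.fc = nn.Linear(resnet.fc.in_features, embed_dim)
        self.encoder = resnet

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.encoder(x), dim=1)


def contrastive_loss(z1, z2, is_positive, margin: float = 1.0):
    """Pull positive pairs together; push negatives beyond `margin`."""
    dist = torch.norm(z1 - z2, dim=1)
    pos_term = is_positive * dist.pow(2)                                 # minimize distance
    neg_term = (1 - is_positive) * (margin - dist).clamp(min=0).pow(2)  # enforce margin
    return (pos_term + neg_term).mean()


# Toy usage: in practice, pairs are generated automatically from unlabeled
# images (e.g., two augmented views of one image form a positive pair).
model = ContrastiveBackbone()
x1, x2 = torch.randn(8, 3, 224, 224), torch.randn(8, 3, 224, 224)
is_positive = torch.randint(0, 2, (8,)).float()  # 1 = positive, 0 = negative
loss = contrastive_loss(model(x1), model(x2), is_positive)
loss.backward()
```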
“…There are two approaches: one uses statistical learning, developed since the beginnings of AI in the 1950s, and the other exploits logical learning, where rules are defined to create a description of the element of interest. It is assumed that input data come either from outcomes or from rules, and they are always provided by humans [30,34,35].…”
Section: Epidemiological Data Analysis (mentioning)
confidence: 99%
“…In contrast, as a new paradigm between unsupervised and supervised learning, SSL can generate labels from the properties of the unlabeled data itself and train the neural network in a supervised manner that resembles natural learning experiences. With its excellent performance on representation learning and its ability to handle unlabeled data, SSL [20][21][22] has been successfully applied in a wide range of fields, including image recognition 23, audio representation 24, computer vision 25, document reconstruction 26, atmosphere 27, astronomy 28, medicine 29, person re-identification 30, remote sensing 31, robotics 32, omnidirectional imaging 33, manufacturing 34, nano-photonics 35, and civil engineering 36. However, this method has not yet been formally attempted in materials science.…”
Section: High-efficient Low-cost Characterization Of Materials Proper... (mentioning)
confidence: 99%
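The pseudo-label idea in this excerpt, generating supervision from the data itself, can be illustrated with the classic rotation-prediction pretext task (Gidaris et al., 2018). A minimal sketch; this is one of many SSL pretext tasks and is chosen here purely for illustration, not taken from the citing paper.

```python
# Sketch of a self-supervised pretext task where labels come from the data
# itself: rotate each unlabeled image by 0/90/180/270 degrees and train the
# network to predict which rotation was applied (Gidaris et al., 2018).
# Illustrative only; the citing works above use a variety of pretext tasks.
import torch
import torch.nn as nn
import torchvision.models as models


def make_rotation_batch(images: torch.Tensor):
    """Generate (rotated_images, rotation_labels) from unlabeled images."""
    rotated, labels = [], []
    for k in range(4):  # k quarter-turns: 0, 90, 180, 270 degrees
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)


# Standard supervised training loop, but on self-generated labels.
net = models.resnet18(weights=None)
net.fc = nn.Linear(net.fc.in_features, 4)  # 4-way rotation classifier
opt = torch.optim.SGD(net.parameters(), lr=0.01)

unlabeled = torch.randn(8, 3, 224, 224)  # stand-in for unlabeled images
x, y = make_rotation_batch(unlabeled)
loss = nn.functional.cross_entropy(net(x), y)
loss.backward()
opt.step()
```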