2022
DOI: 10.3390/e24040551
Survey on Self-Supervised Learning: Auxiliary Pretext Tasks and Contrastive Learning Methods in Imaging

Abstract: Although deep learning algorithms have achieved significant progress in a variety of domains, they require costly annotations on huge datasets. Self-supervised learning (SSL) using unlabeled data has emerged as an alternative, as it eliminates the need for manual annotation. To do this, SSL constructs feature representations using pretext tasks that operate without manual annotation, which allows models trained on these tasks to extract useful latent representations that later improve downstream tasks such as object classification […]

Citations: cited by 76 publications (36 citation statements)
References: 44 publications
“…Within the mainstream Machine Learning literature, self-supervised representation learning [7] is an established paradigm for addressing the limited-training-data problem. A stream of methods following this paradigm uses the notion of a pretext task [2], which does not require human labelling of the data. For instance, differentiating between known transformations of a given image and other images is a pretext task used by contrastive learning methods [23].…”
Section: Pre-text Representation Transfer (Training-I) (mentioning)
confidence: 99%
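The contrastive pretext task described in this statement can be made concrete with a short sketch. Below is a minimal, illustrative NT-Xent (InfoNCE-style) loss in PyTorch, assuming that two augmented views of each image in a batch form the positive pair and all other images act as negatives; the batch contents, embedding dimension, and temperature are hypothetical stand-ins, not code from the surveyed methods.

```python
# Minimal sketch of a contrastive pretext objective: pull two augmented views
# of the same image together, push all other images in the batch apart.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over a batch.

    z1, z2: (N, D) embeddings of two augmented views of the same N images.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity matrix
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for sample i is its other augmented view (index i+N or i-N).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Toy usage: random tensors stand in for encoder(augment(images)).
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```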
“…Alternatively, Koshkina et al. [8] used contrastive learning with a triplet loss function to generate features that were then used to cluster the players into their teams. Contrastive learning, however, is prone to mode collapse, in which all data maps to the same representation, which can make this method ineffective [29]. The research in [30] proposed TBE-Net to perform identification using different views.…”
Section: Team Assignment (mentioning)
confidence: 99%
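The triplet-loss idea referenced above can be illustrated with a brief sketch, not Koshkina et al.'s actual implementation: assuming hypothetical player crops, an anchor and a positive drawn from the same team are pulled together in embedding space while a negative from the other team is pushed at least a margin away; the resulting features could then be clustered into teams.

```python
# Hedged sketch of triplet-based feature learning for team assignment.
# The encoder, crop sizes, and batches are illustrative placeholders.
import torch
import torch.nn as nn

embed = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 32, 128))  # toy player-crop encoder
triplet = nn.TripletMarginLoss(margin=1.0)

anchor   = embed(torch.randn(16, 3, 64, 32))   # player crops from team A
positive = embed(torch.randn(16, 3, 64, 32))   # other crops, also team A
negative = embed(torch.randn(16, 3, 64, 32))   # crops from team B
loss = triplet(anchor, positive, negative)
loss.backward()   # the learned embeddings can then be clustered into teams
print(loss.item())
```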
“…Supervised learning works on labeled data and unsupervised learning works on unlabeled data. Manually labeling data is a time-consuming and labor-intensive process [1]. Self-Supervised Learning (SSL) has emerged as one of the most promising techniques for addressing these issues, as it does not necessitate any expensive manual annotations.…”
Section: Introduction (mentioning)
confidence: 99%