2021
DOI: 10.48550/arxiv.2104.14548
Preprint

With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations

Abstract: Self-supervised learning algorithms based on instance discrimination train encoders to be invariant to pre-defined transformations of the same instance. While most methods treat different views of the same image as positives for a contrastive loss, we are interested in using positives from other instances in the dataset. Our method, Nearest-Neighbor Contrastive Learning of visual Representations (NNCLR), samples the nearest neighbors from the dataset in the latent space, and treats them as positives. This prov…
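As a rough illustration of the idea the abstract describes, the following minimal PyTorch sketch replaces the usual same-instance positive with the nearest neighbor drawn from a support queue of past embeddings. The function name, queue size, and temperature are illustrative assumptions, not the paper's reference implementation.

```python
# Minimal NNCLR-style contrastive loss sketch (assumed names and hyperparameters).
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z1, z2, support_set, temperature=0.1):
    """InfoNCE-style loss where each anchor's positive is its nearest neighbor
    in a support set (queue) of embeddings, rather than another view of itself."""
    z1 = F.normalize(z1, dim=1)                 # (B, D) embeddings of view 1
    z2 = F.normalize(z2, dim=1)                 # (B, D) embeddings of view 2
    support = F.normalize(support_set, dim=1)   # (Q, D) queue of past embeddings

    # Nearest neighbor of each z1 in the support set (no gradient through the lookup).
    with torch.no_grad():
        sim = z1 @ support.t()                  # (B, Q) cosine similarities
        nn_idx = sim.argmax(dim=1)
        nn_z1 = support[nn_idx]                 # (B, D) positives from other instances

    # Logits: positives are (nearest neighbor, z2) pairs; other z2 in the batch act as negatives.
    logits = nn_z1 @ z2.t() / temperature       # (B, B)
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

if __name__ == "__main__":
    # Example usage with random embeddings standing in for encoder outputs.
    B, D, Q = 32, 128, 1024
    z1, z2 = torch.randn(B, D), torch.randn(B, D)
    queue = torch.randn(Q, D)
    print(nn_contrastive_loss(z1, z2, queue).item())
```

In practice the support set would be maintained as a first-in-first-out queue of embeddings from previous batches; the sketch above only shows how the nearest-neighbor lookup feeds the contrastive loss.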

Cited by 20 publications (30 citation statements)
References 47 publications
“…Nearest-neighbor supervision Recently, researchers have exploited nearest-neighbor supervision to learn visual features (Dwibedi et al, 2021;Van Gansbeke et al, 2021). They find that using nearest-neighbor as positive samples in the contrastive loss improves the performances on multiple downstream tasks.…”
Section: Supervision (mentioning)
confidence: 99%
“…This methodology has shown great promise in learning visual representations without annotation [2,23,29,30,36,43,47]. More recently, contrastive methods based on the Siamese structure achieve remarkable performance on downstream tasks [5,7,8,15,17,20,40,45,46], some of which even surpass supervised models.…”
Section: Contrastive Learning (mentioning)
confidence: 99%
“…However, the neighbors had to be computed off-line at fixed intervals during training. Concurrent to our work, Dwibedi et al [17] adopted nearest neighbors from a memory bank under the BYOL [20] framework. The authors focus on image classification datasets.…”
Section: Related Work (mentioning)
confidence: 99%