2021 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv48922.2021.00945
With a Little Help from My Friends: Nearest-Neighbor Contrastive Learning of Visual Representations

Cited by 260 publications (149 citation statements) · References 29 publications
“…Contrastive Learning has shown remarkable advantages in self-supervised learning [6,11,15,18,45,48]. The contrastive loss measures the similarity of representation pairs and attempts to distinguish between positive and negative pairs.…”
Section: Contrastive Representation Learning
confidence: 99%
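
The contrastive loss this excerpt refers to is typically the InfoNCE objective: matching views form positive pairs and the rest of the batch serves as negatives. A minimal PyTorch sketch, assuming L2-normalized projector outputs; the function name and temperature value are illustrative assumptions, not taken from the paper:

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE-style contrastive loss over a batch of paired embeddings.

    z1, z2: (N, D) embeddings of two augmented views; row i of z1 and z2
    form the positive pair, and all other rows act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (N, N) pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives sit on the diagonal
    return F.cross_entropy(logits, labels)
```

Cross-entropy against the diagonal labels simultaneously pulls each positive pair together and pushes every embedding away from the in-batch negatives.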
“…The key idea of instance discrimination is to treat each instance as its own category. Aside from generating positive samples from a single instance, other approaches assign samples from the same cluster or nearest neighbours as positives [5,10,18]. To eliminate the requirement for a massive number of negative samples, BYOL [14] and SimSiam [9] achieve competitive performance without any negative instances.…”
Section: Related Work
confidence: 99%
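
To make the nearest-neighbour-as-positive idea concrete (the mechanism behind NNCLR), here is a minimal PyTorch sketch in which the positive for each sample is retrieved from a support queue of past embeddings; the queue handling and temperature are illustrative assumptions, not the paper's exact configuration:

```python
import torch
import torch.nn.functional as F

def nn_contrastive_loss(z1, z2, queue, temperature=0.1):
    """NNCLR-style loss: replace each z1 with its nearest neighbour in `queue`.

    z1, z2: (N, D) embeddings of two views; queue: (Q, D) support set of
    past embeddings (assumed here to be maintained FIFO outside this function).
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    queue = F.normalize(queue, dim=1)
    nn_idx = (z1 @ queue.t()).argmax(dim=1)  # nearest neighbour of each z1 in the queue
    nn_z1 = queue[nn_idx]                    # the retrieved NN acts as the positive for z2
    logits = nn_z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)
```

Retrieving positives from neighbouring instances, rather than only from augmentations of the same image, lets the loss exploit semantic similarity across different images.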
“…Among them, BarlowTwins [57] considers an objective function measuring the cross-correlation matrix between the features, and VicReg [3] uses a mix of variance, invariance and covariance regularizations. Methods such as [19] have explored the use of nearest-neighbour retrieval, while [52] adopts a divide-and-conquer strategy. However, none of these works studied the ability of SSL methods to learn continually and adaptively.…”
Section: Related Work
confidence: 99%
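
For reference, the Barlow Twins objective mentioned in this excerpt drives the cross-correlation matrix of the two views' embeddings toward the identity. A minimal PyTorch sketch; the off-diagonal weight `lambd` is an assumed value, not necessarily the one used in [57]:

```python
import torch

def barlow_twins_loss(z1, z2, lambd=5e-3):
    """Cross-correlation objective: align the views, decorrelate the dimensions.

    z1, z2: (N, D) embeddings of two augmented views of the same batch.
    """
    n = z1.size(0)
    z1 = (z1 - z1.mean(0)) / z1.std(0)   # standardize each dimension over the batch
    z2 = (z2 - z2.mean(0)) / z2.std(0)
    c = (z1.t() @ z2) / n                # (D, D) cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # pull diagonal toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # push off-diagonal to 0
    return on_diag + lambd * off_diag
```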
“…Self-Supervised Learning. The training procedure of several state-of-the-art SSL methods [3,7,8,13,19,26,28,57] can be summarized as follows. Given an image x in a batch sampled from a distribution D, two correlated views x_A and x_B are extracted by applying stochastic image augmentations, such as random cropping, color jittering and horizontal flipping.…”
Section: Preliminaries
confidence: 99%
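
The two-view extraction described in this excerpt can be sketched with torchvision; the crop size and jitter strengths below are illustrative assumptions rather than any specific paper's recipe:

```python
from torchvision import transforms

# Stochastic augmentation pipeline: each call yields a different random view.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def two_views(x):
    """Given a PIL image x, return two independently augmented views (x_A, x_B)."""
    return augment(x), augment(x)
```

Because the augmentations are sampled independently, the two views are correlated through the underlying image content but differ in appearance, which is exactly what the SSL objective exploits.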