Representation Learning with Contrastive Predictive Coding
Preprint, 2018
DOI: 10.48550/arxiv.1807.03748


Cited by 869 publications (1,456 citation statements)
References 0 publications
“…Training. The model is trained by minimizing the canonical β-VQ-VAE loss without the auxiliary codebook loss (Oord et al, 2019) with β = 0.02 in all of our experiments. The codebook is updated using the EMA update step as proposed in the original paper.…”
Section: VQ-VAE Model
confidence: 99%
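The training recipe quoted above (reconstruction plus a β-weighted commitment term only, with EMA codebook updates) can be illustrated with a minimal PyTorch-style sketch. This is not the cited authors' code: the class name VectorQuantizerEMA, the decay/eps values, and the (batch, dim) input shape are assumptions; only β = 0.02 is taken from the quote.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VectorQuantizerEMA(nn.Module):
    """Illustrative VQ layer: the codebook is maintained by EMA statistics and
    the loss carries only the beta-weighted commitment term (no auxiliary
    codebook loss)."""
    def __init__(self, num_codes=512, dim=64, beta=0.02, decay=0.99, eps=1e-5):
        super().__init__()
        self.beta, self.decay, self.eps = beta, decay, eps
        self.register_buffer("codebook", torch.randn(num_codes, dim))
        self.register_buffer("ema_count", torch.zeros(num_codes))
        self.register_buffer("ema_sum", self.codebook.clone())

    def forward(self, z_e):                       # z_e: (batch, dim) encoder outputs
        dists = torch.cdist(z_e, self.codebook)   # distances to every code vector
        idx = dists.argmin(dim=1)                 # nearest-code assignment
        z_q = self.codebook[idx]

        if self.training:                         # EMA codebook update, no gradients
            with torch.no_grad():
                onehot = F.one_hot(idx, self.codebook.size(0)).type_as(z_e)
                self.ema_count.mul_(self.decay).add_(onehot.sum(0), alpha=1 - self.decay)
                self.ema_sum.mul_(self.decay).add_(onehot.t() @ z_e, alpha=1 - self.decay)
                self.codebook.copy_(self.ema_sum / (self.ema_count + self.eps).unsqueeze(1))

        commit = self.beta * F.mse_loss(z_e, z_q.detach())   # beta = 0.02 in the quote
        z_q = z_e + (z_q - z_e).detach()          # straight-through estimator
        return z_q, commit
```

Under this setup the total objective would be the reconstruction loss plus `commit`; the codebook receives no gradient and is moved only by the EMA statistics, which is what makes the auxiliary codebook loss term unnecessary.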
“…Contrastive learning is a representation learning method that can refine the distribution of the feature space to acquire a better semantic representation. It has been remarkably successful in practice [3][4][10][32][64][65][66] and involves learning transformation-invariant feature representations from unlabeled image data. The common strategy among those works is to pull an anchor towards a "positive" sample in the embedding space and to push the anchor away from many "negative" samples.…”
Section: Online Confusion Category Mining
confidence: 99%
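The pull/push strategy described in this excerpt can be made concrete with a small sketch: a softmax cross-entropy over similarities raises the anchor-positive similarity and lowers the anchor-negative similarities. The function name, tensor shapes, and temperature below are illustrative assumptions, not taken from the cited works.

```python
import torch
import torch.nn.functional as F

def pull_push_loss(anchor, positive, negatives, temperature=0.1):
    """anchor, positive: (dim,); negatives: (num_neg, dim). Illustrative shapes."""
    anchor = F.normalize(anchor, dim=0)
    positive = F.normalize(positive, dim=0)
    negatives = F.normalize(negatives, dim=1)

    pos_sim = anchor @ positive                   # similarity to the positive sample
    neg_sim = negatives @ anchor                  # similarities to the negative samples
    logits = torch.cat([pos_sim.unsqueeze(0), neg_sim]) / temperature
    # Cross-entropy with the positive at index 0: minimizing it pulls the anchor
    # toward the positive and pushes it away from the many negatives.
    return F.cross_entropy(logits.unsqueeze(0), torch.zeros(1, dtype=torch.long))
```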
“…Contrastive learning is a typical discriminative self-supervised learning method [3][4][9][32][34][43] that aims to learn useful representations of the input data without relying on task-specific manual annotations. Recent advances in self-supervised visual representation learning based on contrastive methods show that self-supervised representations outperform their supervised counterparts [5][34][35][36][37] on several downstream transfer learning benchmarks.…”
Section: B. Contrastive Learning
confidence: 99%
“…where f(x) ∈ R^p is the neural network representation for an input x; x and x⁺ are drawn from augmentations of the same natural data point, while x and x⁻ are two augmentations generated independently, either from the same data point or from two different data points. The above loss function is similar to many standard contrastive loss functions (Oord et al., 2018; Sohn, 2016; Wu et al., 2018), including SimCLR (Chen et al., 2020), which we will use in our experiments. Minimizing this objective leads to representations with provable accuracy guarantees under linear probe evaluation.…”
Section: Self-supervised Contrastive Learning
confidence: 99%
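Since this excerpt points to SimCLR as the instantiation used in its experiments, a batched NT-Xent sketch of such a contrastive loss may help. The temperature, the shapes, and the assumption that z1 and z2 are the representations f(x) of two augmentations of the same N data points are illustrative, not the cited authors' exact implementation.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """SimCLR-style NT-Xent sketch. z1, z2: (N, p) representations of two
    augmented views of the same N data points (shapes/temperature assumed)."""
    N = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, p) unit-norm features
    sim = z @ z.t() / temperature                         # pairwise similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude each view's self-pair

    # Row i's positive is the other augmentation of the same data point; every
    # remaining row in the batch acts as a negative.
    targets = torch.cat([torch.arange(N) + N, torch.arange(N)])
    return F.cross_entropy(sim, targets)
```

Minimizing this objective is what yields the representations evaluated by linear probing in the quoted passage.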