2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2021
DOI: 10.1109/cvpr46437.2021.00113
AdCo: Adversarial Contrast for Efficient Learning of Unsupervised Representations from Self-Trained Negative Adversaries

Cited by 86 publications (69 citation statements)
References 16 publications
“…Synthesizing harder negatives in latent space using Mixup [53] has also been considered [25] but does not take an adversarial perspective. Other work, AdCo [20], also takes an adversarial viewpoint in latent space (see Tab. 1 for comparison to IFM).…”
Section: Visualizing Implicit Feature Modification
Mentioning (confidence: 99%)
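The contrast drawn in this excerpt, between Mixup-style synthesis of hard negatives [25] and AdCo's adversarial treatment of negatives [20], can be illustrated with a rough latent-space sketch. The function names, tensor shapes, and hyperparameters below are illustrative assumptions, not code from either paper.

```python
# Illustrative sketch only (not code from the cited papers): two ways of making
# negatives "harder" in latent space. All names and hyperparameters are assumed.
import torch
import torch.nn.functional as F

def mixup_hard_negatives(query, negatives, alpha=0.5, num_synthetic=16):
    """Mixup-style synthesis: blend the query with its most similar negatives
    and re-normalize. No gradient flows into the negatives themselves.
    query: (d,) and negatives: (K, d), both assumed L2-normalized."""
    sims = negatives @ query                              # (K,) cosine similarities
    hard = negatives[sims.topk(num_synthetic).indices]    # hardest existing negatives
    lam = torch.distributions.Beta(alpha, alpha).sample((num_synthetic, 1))
    return F.normalize(lam * query + (1 - lam) * hard, dim=1)

def adversarial_negative_step(query, negatives, lr=3.0, tau=0.1):
    """Adversarial viewpoint in the spirit of AdCo: treat the negatives as free
    parameters and take one gradient-ascent step on the contrastive loss, so
    they drift toward the query and become harder to discriminate."""
    negatives = negatives.clone().requires_grad_(True)
    logits = (negatives @ query) / tau                    # (K,) negative logits
    # A larger log-sum-exp over negative logits means harder negatives.
    loss = torch.logsumexp(logits, dim=0)
    loss.backward()
    with torch.no_grad():
        updated = negatives + lr * negatives.grad         # ascent, not descent
    return F.normalize(updated, dim=1).detach()
```

The key difference the excerpt points to: the Mixup route only recombines embeddings that already exist, while the adversarial route optimizes the negatives directly against the encoder's objective.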
“…1 benchmarks IFM on ImageNet100 [44] using MoCo-v2, observing improvements of 0.9%. We also compare results on ImageNet100 to AdCo [20], another adversarial method for contrastive learning. We adopt the official code and use the exact same training and finetuning hyperparameters as for MoCo-v2 and IFM.…”
Section: Performance On Downstream Tasks
Mentioning (confidence: 99%)
“…An important paradigm called contrastive learning aims to train an encoder to be contrastive between the representations of positive samples and negative samples [18], [47], [48], [49], [50]. Recent contrastive learning frameworks for graph data can be divided into two categories [7]: context-instance contrast and context-context contrast.…”
Section: Graph Contrastive Learning
Mentioning (confidence: 99%)
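The objective described in this excerpt, training an encoder to separate the representations of positive and negative samples, is commonly written as an InfoNCE-style loss. The sketch below is a generic minimal formulation, not the code of any cited framework; all names and parameters are assumed for illustration.

```python
# Minimal sketch of an InfoNCE-style contrastive loss (generic formulation).
import torch
import torch.nn.functional as F

def info_nce_loss(query, positive, negatives, tau=0.1):
    """query: (d,), positive: (d,), negatives: (K, d); all L2-normalized.
    The encoder is pushed to score the positive above every negative."""
    pos_logit = (query @ positive).unsqueeze(0) / tau     # (1,)
    neg_logits = (negatives @ query) / tau                 # (K,)
    logits = torch.cat([pos_logit, neg_logits])            # (K + 1,)
    # Cross-entropy with the positive placed at index 0.
    target = torch.zeros(1, dtype=torch.long)
    return F.cross_entropy(logits.unsqueeze(0), target)
```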