2022
DOI: 10.48550/arxiv.2204.10314
Preprint

Adversarial Contrastive Learning by Permuting Cluster Assignments

Abstract: Contrastive learning has gained popularity as an effective self-supervised representation learning technique. Several research directions improve traditional contrastive approaches, e.g., prototypical contrastive methods better capture the semantic similarity among instances and reduce the computational burden by considering cluster prototypes or cluster assignments, while adversarial instance-wise contrastive methods improve robustness against a variety of attacks. To the best of our knowledge, no prior work …
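The title and abstract suggest combining the two directions above: a prototype-based (SwAV-style) contrastive objective, with adversarial views crafted by perturbing inputs toward permuted cluster assignments. The sketch below is a minimal, hypothetical illustration of that idea in PyTorch, not the paper's exact procedure; the encoder, prototype matrix, temperature, attack budget, and batch-wise permutation strategy are all assumptions.

import torch
import torch.nn.functional as F

def cluster_logits(encoder, prototypes, x, tau=0.1):
    # Project unit-norm embeddings onto unit-norm cluster prototypes.
    z = F.normalize(encoder(x), dim=1)        # (B, D) embeddings
    c = F.normalize(prototypes, dim=1)        # (K, D) prototypes
    return z @ c.t() / tau                    # (B, K) temperature-scaled logits

def permuted_assignment_attack(encoder, prototypes, x, q,
                               eps=8 / 255, alpha=2 / 255, steps=5):
    # q: (B, K) soft cluster assignments of the clean views (e.g. from Sinkhorn).
    # Targeted PGD: push each input toward a *permuted* assignment within the batch.
    q_perm = q[torch.randperm(q.size(0))]
    x_adv = x.detach() + torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        log_p = F.log_softmax(cluster_logits(encoder, prototypes, x_adv), dim=1)
        loss = -(q_perm * log_p).sum(dim=1).mean()     # cross-entropy to permuted target
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() - alpha * grad.sign()   # descend: move toward wrong cluster
        x_adv = (x.detach() + (x_adv - x.detach()).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()

The clean and adversarial views could then be trained jointly with the usual swapped-prediction objective; that training loop is omitted here.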

Cited by 2 publications (3 citation statements). References 17 publications.
“…Adversarial vulnerability under standard training: In agreement with [5], [7], [8], the results obtained in the ST-ST scenario show the adversarial vulnerability of the studied contrastive learning schemes. We also observed that the semi-supervised learning schemes SL-CL and SCL-CL achieve better robust performance than the CL scheme.…”
Section: Discussion (supporting)
confidence: 85%
“…Kim et al. [5] were the first to utilize the contrastive loss to generate adversarial examples without any labels, robustifying the SimCLR [6] framework. Moshavash et al. [7] and Wahed et al. [8] applied the same technique to Momentum Contrast (MoCo) [9] and Swapping Assignments between Views (SwAV) [10], respectively. Fan et al. [11] introduced an additional regularization term in the contrastive loss to enhance cross-task robustness transferability.…”
Section: Introduction (mentioning)
confidence: 99%
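As context for the statement above, label-free adversarial example generation in instance-wise contrastive learning is typically done by running PGD on the contrastive loss between two augmented views. The following is a rough sketch under that assumption; nt_xent, contrastive_pgd, and the hyperparameters are illustrative names, not code from any of the cited papers.

import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    # SimCLR-style loss: each view's positive is the other view of the same image.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                          # (2B, D)
    sim = z @ z.t() / tau                                    # (2B, 2B) similarities
    mask = torch.eye(sim.size(0), dtype=torch.bool, device=sim.device)
    sim = sim.masked_fill(mask, float('-inf'))               # drop self-similarity
    b = z1.size(0)
    targets = torch.cat([torch.arange(b, 2 * b), torch.arange(0, b)]).to(sim.device)
    return F.cross_entropy(sim, targets)

def contrastive_pgd(encoder, x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
    # Maximize the contrastive loss between the two views; no class labels involved.
    with torch.no_grad():
        z2 = encoder(x2)                                     # second view kept fixed
    x_adv = x1.detach() + torch.empty_like(x1).uniform_(-eps, eps)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = nt_xent(encoder(x_adv), z2)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()         # ascend the loss
        x_adv = (x1.detach() + (x_adv - x1.detach()).clamp(-eps, eps)).clamp(0, 1)
    return x_adv.detach()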
“…All other settings are the same as in the original papers except for some hyperparameter tuning. Our methods are also compatible with recent work such as SwARo [46] and CLAF [38], by modeling the asymmetry between clean and adversarial views as described earlier.…”
Section: Methods (mentioning)
confidence: 70%