2021
DOI: 10.48550/arxiv.2101.10027
Preprint

Understanding and Achieving Efficient Robustness with Adversarial Supervised Contrastive Learning

Abstract: Contrastive learning (CL) has recently emerged as an effective approach to learning representations for a range of downstream tasks. Central to this approach is the selection of positive (similar) and negative (dissimilar) sets, which give the model the opportunity to 'contrast' between data and class representation in the latent space. In this paper, we investigate CL for improving model robustness using adversarial samples. We first designed and performed a comprehensive study to understand how adversarial vuln…
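The abstract describes contrasting clean data against class representations in the latent space while crafting adversarial samples for that objective. The sketch below illustrates that general recipe with a Khosla-style supervised contrastive loss and a PGD inner loop that maximizes it; the encoder interface, loss form, and attack hyper-parameters are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of adversarial supervised contrastive training in PyTorch.
# The SupCon loss, PGD settings, and training step below are assumptions for
# illustration, not the paper's exact objective or hyper-parameters.
import torch
import torch.nn.functional as F

def supcon_loss(z, labels, temperature=0.1):
    """Supervised contrastive loss: positives are other samples sharing a label."""
    z = F.normalize(z, dim=1)                        # unit-norm embeddings
    sim = z @ z.t() / temperature                    # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_log_prob = log_prob.masked_fill(~pos_mask, 0.0)
    pos_count = pos_mask.sum(1).clamp(min=1)         # avoid division by zero
    return -(pos_log_prob.sum(1) / pos_count).mean()

def contrastive_pgd(encoder, x, labels, eps=8 / 255, alpha=2 / 255, steps=5):
    """Craft perturbations that *maximize* the contrastive loss (PGD ascent)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = supcon_loss(encoder(x_adv), labels)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()          # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()

def train_step(encoder, optimizer, x, labels):
    """One step: contrast clean and adversarial views under shared labels."""
    x_adv = contrastive_pgd(encoder, x, labels)
    z = encoder(torch.cat([x, x_adv]))               # embed both views
    loss = supcon_loss(z, torch.cat([labels, labels]))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training then contrasts the clean batch with its adversarially perturbed counterpart under shared labels, which is one natural way to realize "contrasting data and class representation" with adversarial positives and negatives.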

Cited by 2 publications (4 citation statements)
References: 22 publications
“…A number of works utilize adversarially constructed samples in contrastive learning to increase the robustness of the learned feature representations Bui et al [2021], Kim et al [2020], Jiang et al [2020], Chen et al [2020b], McDermott et al [2021], Chen et al [2020a]. RoCL Kim et al [2020] introduces small perturbations to generate adversarial samples, leading to a more robust contrastive model.…”
Section: Adversarial Contrastive Training
confidence: 99%
“…Chen et al [2020b] analyze robustness gains after applying adversarial training in both self-supervised pre-training and supervised fine-tuning, and find that adversarial pre-training contributes the most in robustness improvements. Bui et al [2021] propose a minimax local-global algorithm to generate adversarial samples that improves the performance of SimCLR Chen et al [2020c]. To the best of our knowledge, none of the previous works considered designing more targeted contrastive adversarial examples by leveraging useful cluster structures observed in the data.…”
Section: Adversarial Contrastive Training
confidence: 99%
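The statements above cite instance-wise variants (RoCL and SimCLR-based attacks) in which positives are augmented views of the same image rather than same-class samples. Below is a hedged sketch of that self-supervised counterpart, assuming an NT-Xent loss and PGD-style ascent; the augmentation pairing and hyper-parameters are assumptions, not RoCL's exact recipe.

```python
# Illustrative sketch of instance-wise adversarial contrastive perturbation
# (RoCL / SimCLR-style): positives are two views of the same image, and the
# perturbation maximizes an NT-Xent loss. Hyper-parameters are assumptions.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch (SimCLR-style)."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)
    n = z1.size(0)
    sim = z @ z.t() / temperature
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-similarity
    # the positive of sample i is its other view at index (i + n) mod 2n
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)

def instancewise_attack(encoder, x1, x2, eps=8 / 255, alpha=2 / 255, steps=5):
    """Perturb view x1 so it is pushed away from its paired view x2."""
    x_adv = (x1 + torch.empty_like(x1).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = nt_xent(encoder(x_adv), encoder(x2))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()          # gradient ascent on the loss
        x_adv = torch.min(torch.max(x_adv, x1 - eps), x1 + eps).clamp(0, 1)
    return x_adv.detach()
```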
“…Several popular self-supervised learning approaches are based on contrastive learning: SimCLR [13], MoCo [28], and BYOL [26]. Most robust self-supervised approaches focus on robust transfer learning [12,24,31,34,36,38,64] or multi-objective optimization [8,32,44] to improve adversarial robustness. The focus of these works differs from our focus on image retrieval.…”
Section: Related Work
confidence: 99%