2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr42600.2020.00034
A Self-supervised Approach for Adversarial Robustness

Cited by 160 publications (134 citation statements)
References 17 publications
“…Naseer et al. [342] proposed self-supervised adversarial training, whereas adversarial training is analyzed independently for self-supervision by incorporating it into pretraining in [343]. Similarly, [344] and [345] use perturbations in the image space as well as in the latent space of StyleGAN to make training more effective.…”
Section: A Model Alteration For Defense
confidence: 99%
“…In addition to the previous pixel-space objective, I_g should lie close to I_c in the feature space of a neural network feature extractor so that I_g is semantically similar to I_c. Inspired by the work of [26], we employ the feature space F of the third convolutional block of an ImageNet-trained VGG16 network [35] and the L_2 distance to minimize the feature distortion between I_g and I_c. Subsequently, our feature reconstruction objective encourages G to minimize the following loss function:…”
Section: Feature Reconstruction
confidence: 99%
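The feature-reconstruction objective quoted above (the L_2 distance between extractor features of the generated image I_g and the clean image I_c) can be sketched as follows. This is a minimal illustration, not the paper's implementation: a tiny untrained conv stack stands in for the ImageNet-trained VGG16 block-3 extractor so the example stays self-contained, and all names and sizes here are assumptions.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in feature extractor F. The paper uses the third convolutional
# block of an ImageNet-trained VGG16; this small frozen conv stack only
# mimics its role (image -> feature map) for the sketch.
extractor = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
).eval()
for p in extractor.parameters():
    p.requires_grad_(False)

def feature_reconstruction_loss(i_g, i_c):
    # || F(I_g) - F(I_c) ||_2 : the feature distortion the generator G minimizes.
    return torch.dist(extractor(i_g), extractor(i_c), p=2)

i_c = torch.rand(1, 3, 32, 32)                # clean image I_c
i_g = i_c + 0.01 * torch.randn_like(i_c)      # generated image I_g
loss = feature_reconstruction_loss(i_g, i_c)  # scalar, non-negative
```

In training, this term would be added to the pixel-space objective so that G is penalized both for pixel-level and semantic (feature-level) deviation from I_c.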
“…Because of the uncertainty and randomness, they show limitations in handling larger perturbations (e.g., when the l_2-norm perturbation size is larger than 0.6). In addition, some solutions are put forward from the perspective of feature space [24], [25], [26], [27]. By pushing adversarial examples toward benign ones via mapping and projection in the latent feature space, they achieve further defensive improvement.…”
Section: Introduction
confidence: 99%