2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00891
Improving the Transferability of Adversarial Samples with Adversarial Transformations

Cited by 59 publications (28 citation statements)
References 22 publications
“…The second line of defenses purifies adversarial examples through input transformation: the inputs are preprocessed to cleanse adversarial perturbations without reducing classification accuracy on clean images (Wu et al. 2021). Advanced defenses of this kind include random resizing and padding (R&P) (Xie et al. 2018), a high-level representation denoiser (HGD) (Liao et al. 2018), JPEG compression (JPEG) (Guo et al. 2018), feature distillation (FD) (Liu et al. 2019b), the feature-squeezing method of bit-depth reduction (BIT) (Xu, Evans, and Qi 2018), and a neural representation purifier (NRP) (Naseer et al. 2020).…”
Section: Defend Against Adversarial Attacks
confidence: 99%
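To make the input-transformation idea concrete, here is a minimal sketch of two of the defenses listed above, bit-depth reduction (BIT-style feature squeezing) and JPEG compression, applied as pure preprocessing steps before classification. The function names and the default bits/quality settings are illustrative assumptions, not taken from the cited papers.

```python
import io

import numpy as np
from PIL import Image

def bit_depth_reduce(x, bits=4):
    # BIT-style feature squeezing: quantize pixels in [0, 1] to 2**bits
    # levels, destroying the fine-grained values a perturbation lives in.
    levels = 2 ** bits - 1
    return np.round(x * levels) / levels

def jpeg_purify(x, quality=75):
    # JPEG defense: round-trip the image through lossy compression so
    # high-frequency adversarial noise is discarded by the codec.
    img = Image.fromarray((np.clip(x, 0.0, 1.0) * 255).astype(np.uint8))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return np.asarray(Image.open(buf), dtype=np.float32) / 255.0

# Usage: purify an H x W x 3 float image in [0, 1] before feeding it to
# the classifier, e.g. model(jpeg_purify(bit_depth_reduce(x_adv))).
```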
“…With the improvement of Artificial Intelligence (AI) models, companies increasingly deploy AI in real-world applications such as autonomous driving and neural machine translation [59]. However, AI software inherits a deficiency of the underlying models: it is prone to erroneous behavior on particular inputs [2, 4, 5, 15, 21, 76–78]. A line of research has therefore been conducted on testing AI software systems to address this problem.…”
Section: Related Work 6.1 Testing AI Software
confidence: 99%
“…Transfer-based adversarial attacks on image classification models often achieve limited success because the adversarial samples overfit the local surrogate model. However, mechanisms [41, 42] have been proposed to circumvent this overfitting and promote the transferability of adversarial samples. While empirical evidence shows that transfer attacks succeed, Demontis et al. go a step further and perform a comprehensive analysis [10] of the underlying reasons for attack transferability.…”
Section: Transfer Attacks
confidence: 99%
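As a concrete illustration of how such mechanisms fight surrogate overfitting, below is a minimal sketch of an iterative transfer attack crafted on a local surrogate model, with a random resize-and-pad input transformation in the spirit of input-diversity methods. The function names, hyperparameters, and the assumption of square inputs in [0, 1] are illustrative, not taken from [41, 42] or the paper under review.

```python
import torch
import torch.nn.functional as F

def diverse_input(x, p=0.5):
    # With probability p, randomly shrink the batch and pad it back to its
    # original size, so gradients are not overfitted to one fixed view.
    if torch.rand(1).item() >= p:
        return x
    n, c, h, w = x.shape  # assumes square images (h == w)
    new_s = torch.randint(int(0.9 * h), h + 1, (1,)).item()
    resized = F.interpolate(x, size=(new_s, new_s), mode="nearest")
    top = torch.randint(0, h - new_s + 1, (1,)).item()
    left = torch.randint(0, w - new_s + 1, (1,)).item()
    return F.pad(resized, (left, w - new_s - left, top, h - new_s - top))

def transfer_attack(surrogate, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Iterative FGSM on the *surrogate* model; the hope is that the
    # perturbation also fools an unseen target model (transferability).
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(surrogate(diverse_input(x_adv)), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)  # project to eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)
    return x_adv.detach()
```

Evaluating x_adv on a held-out target model, rather than the surrogate it was crafted on, is what distinguishes a transfer attack from the white-box setting.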