2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2018)
DOI: 10.1109/cvpr.2018.00894
Deflecting Adversarial Attacks with Pixel Deflection

Abstract: CNNs are poised to become integral parts of many critical systems. Despite their robustness to natural variations, image pixel values can be manipulated, via small, carefully crafted, imperceptible perturbations, to cause a model to misclassify images. We present an algorithm to process an image so that classification accuracy is significantly preserved in the presence of such adversarial manipulations. Image classifiers tend to be robust to natural noise, and adversarial attacks tend to be agnostic to object …

Cited by 243 publications (186 citation statements)
References 29 publications
“…where the only difference between $\{B_l^k\}_{l=1}^{L}$ and $\{B_l^{k-1}\}_{l=1}^{L}$ are the bits flipped in Eq. (10). Note that those bits flipped to $\hat{b}_l^k$ in Eq.…”
Section: Progressive Bit Search
confidence: 99%
“…A foveation-based method was proposed by Luo et al. [32], which shows robustness against weak attacks like L-BFGS [13] and FGSM [14]. Another closely related work to ours is that of Prakash et al. [26], which deflects attention by carefully corrupting less critical image pixels. This introduces new artifacts which reduce image quality and can result in misclassification.…”
Section: Adversarial Defenses
confidence: 96%
“…To handle such artifacts, BayesShrink denoising in the wavelet domain is used. It has been shown that denoising in the wavelet domain yields superior performance to other techniques such as bilateral, anisotropic, TVM and Wiener-Hunt deconvolution [26]. Another closely related work is that of Xie et al. [18], which performs image transformations by randomly resizing and padding an image before passing it through a CNN classifier.…”
Section: Adversarial Defenses
confidence: 99%
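The BayesShrink step mentioned in the statement above can be sketched in plain NumPy. This is a minimal single-level Haar-wavelet version under stated assumptions: the function names, the single decomposition level, and the MAD-based noise estimate are illustrative choices, not the paper's exact implementation.

```python
import numpy as np

def haar2(img):
    # One level of a 2-D Haar transform: approximation (ll) plus
    # horizontal (lh), vertical (hl) and diagonal (hh) detail subbands.
    a = (img[0::2, :] + img[1::2, :]) / 2
    d = (img[0::2, :] - img[1::2, :]) / 2
    ll = (a[:, 0::2] + a[:, 1::2]) / 2
    lh = (a[:, 0::2] - a[:, 1::2]) / 2
    hl = (d[:, 0::2] + d[:, 1::2]) / 2
    hh = (d[:, 0::2] - d[:, 1::2]) / 2
    return ll, lh, hl, hh

def ihaar2(ll, lh, hl, hh):
    # Exact inverse of haar2 (perfect reconstruction).
    a = np.zeros((ll.shape[0], ll.shape[1] * 2))
    d = np.zeros_like(a)
    a[:, 0::2] = ll + lh; a[:, 1::2] = ll - lh
    d[:, 0::2] = hl + hh; d[:, 1::2] = hl - hh
    out = np.zeros((a.shape[0] * 2, a.shape[1]))
    out[0::2, :] = a + d
    out[1::2, :] = a - d
    return out

def bayes_shrink_denoise(img):
    ll, lh, hl, hh = haar2(img)
    # Robust noise estimate: median absolute deviation of the
    # diagonal subband, scaled by the Gaussian MAD constant.
    sigma = np.median(np.abs(hh)) / 0.6745
    def soft(c):
        # BayesShrink threshold t = sigma^2 / sigma_x, applied softly.
        var_x = max(np.var(c) - sigma ** 2, 1e-9)
        t = sigma ** 2 / np.sqrt(var_x)
        return np.sign(c) * np.maximum(np.abs(c) - t, 0)
    return ihaar2(ll, soft(lh), soft(hl), soft(hh))
```

Soft-thresholding only the detail subbands is what suppresses the high-frequency deflection artifacts while leaving the low-frequency image content largely intact.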
“…Pixel Deflection [23]: A random pixel is replaced by another random pixel in a local neighborhood. It works well due to the assumption that adversarial attacks rely on specific activation functions, i.e., only some pixels are manipulated to make the attack work.…”
Section: Adversarial Attacks and Defenses
confidence: 99%
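The local-neighborhood replacement described in the statement above can be sketched as follows. The parameter names, defaults, and uniform sampling scheme here are illustrative assumptions, not the authors' exact implementation:

```python
import numpy as np

def pixel_deflection(img, n_deflections=200, window=10, seed=None):
    """Replace each of n_deflections randomly chosen pixels with a
    randomly chosen pixel from its local window (a sketch of the idea)."""
    rng = np.random.default_rng(seed)
    out = img.copy()
    h, w = img.shape[:2]
    for _ in range(n_deflections):
        # Pick a random target pixel...
        x = int(rng.integers(0, h))
        y = int(rng.integers(0, w))
        # ...and a random neighbour inside the window, clipped to bounds.
        nx = int(np.clip(x + rng.integers(-window, window + 1), 0, h - 1))
        ny = int(np.clip(y + rng.integers(-window, window + 1), 0, w - 1))
        out[x, y] = out[nx, ny]
    return out
```

Because natural images are locally smooth, replacing a pixel with a nearby one barely changes the classifier's prediction on clean images, while it can destroy the precise pixel values an adversarial perturbation depends on.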