2019 IEEE European Symposium on Security and Privacy (EuroS&P)
DOI: 10.1109/eurosp.2019.00042
Towards Understanding Limitations of Pixel Discretization Against Adversarial Attacks

Abstract: Wide adoption of artificial neural networks in various domains has led to an increasing interest in defending them against adversarial attacks. Preprocessing defense methods such as pixel discretization are particularly attractive in practice due to their simplicity, low computational overhead, and applicability to various systems. It is observed that such methods work well on simple datasets like MNIST, but break on more complicated ones like ImageNet under recently proposed strong white-box attacks. To under…
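As a minimal sketch of the preprocessing idea the abstract refers to, pixel discretization can be illustrated as snapping each pixel to the nearest value in a small codebook before the image is fed to the classifier. The `discretize_pixels` helper and the three-level codebook below are illustrative assumptions, not the authors' exact construction.

```python
import numpy as np

def discretize_pixels(image, codebook=(0.0, 0.5, 1.0)):
    """Snap every pixel (values in [0, 1]) to the nearest codeword.

    Generic sketch of pixel discretization as a preprocessing defense;
    the 3-level codebook is an arbitrary illustrative choice, not the
    codebook studied in the paper.
    """
    codes = np.asarray(codebook, dtype=np.float32)      # shape (k,)
    flat = image.reshape(-1, 1)                         # shape (n_pixels, 1)
    nearest = np.argmin(np.abs(flat - codes), axis=1)   # closest codeword per pixel
    return codes[nearest].reshape(image.shape)

# Usage: discretize before classification, e.g.
#   logits = model(discretize_pixels(x))
```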

Cited by 25 publications (36 citation statements)
References 26 publications (42 reference statements)
“…Edizel et al. (2019) attempt to learn typo-resistant word embeddings, but focus on common typos rather than worst-case typos. In computer vision, Chen et al. (2019) discretize pixels to compute exact robust accuracy on MNIST, but their approach generalizes poorly to other tasks like CIFAR-10. Garg et al. (2018) generate functions that map to robust features, while enforcing variation in outputs.…”
Section: Discussion
confidence: 99%
“…Secondly, we evaluate the improvement of robustness via SSR. For the baseline methods, we include Madry's training [26] and IG-NORM [10], a recently proposed regularization to improve the robustness of Integrated Gradients.…”
Section: Experiments II: Robustness via Regularization
confidence: 99%
“…Towards robustness of attribution maps, Singh et al. [38] propose a soft margin loss to improve the alignment of attributions. Chen et al. [10] propose two regularizations so that nearby points have similar Integrated Gradients. On the other hand, an ad-hoc regularization can be an extra burden for people with pretrained models.…”
Section: Related Work
confidence: 99%
“…A recent line of works [3,38,1,8,9,10,30,32,33,34,40,45,46] studies a class of attacks on machine learning systems commonly known as adversarial examples or evasion attacks. Such attacks target a classifier C trained on problem Π.…”
Section: Introduction
confidence: 99%