2020
DOI: 10.1007/978-3-030-62144-5_5
Principal Component Properties of Adversarial Samples

Abstract: Deep neural networks for image classification have been found to be vulnerable to adversarial samples: sub-perceptual noise added to a benign image that can easily fool trained networks, posing a significant risk to their commercial deployment. In this work, we analyze adversarial samples through the lens of their contributions to the principal components of each image, in contrast to prior works in which authors performed PCA on the entire dataset. We investigate a number of st…
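The per-image analysis the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual method: the row-wise PCA formulation, the stand-in random "images", and the focus on trailing-component energy are all assumptions made for the example.

```python
import numpy as np

def per_image_pc_energy(img):
    """PCA on a single image: treat each row of the (H, W) array as a
    sample and return the fraction of variance carried by each
    principal component, sorted in descending order."""
    X = img - img.mean(axis=0, keepdims=True)  # center each column
    # singular values of the centered matrix give the PC variances
    s = np.linalg.svd(X, compute_uv=False)
    var = s ** 2
    return var / var.sum()

rng = np.random.default_rng(0)
benign = rng.random((28, 28))                  # stand-in "image"
noise = 0.05 * rng.standard_normal((28, 28))   # stand-in perturbation
adv = np.clip(benign + noise, 0.0, 1.0)

e_benign = per_image_pc_energy(benign)
e_adv = per_image_pc_energy(adv)
# compare how much energy sits in the trailing (low-variance) components,
# where small additive noise tends to concentrate
print("benign tail energy:", e_benign[-10:].sum())
print("adv    tail energy:", e_adv[-10:].sum())
```

Comparing the two spectra component by component is one simple way to quantify how a perturbation redistributes an individual image's principal-component energy.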

Cited by 3 publications (3 citation statements) | References 10 publications
“…Adversarial defenses. In earlier work, a few studies aimed to detect adversarial examples [39], [40], [41]. However, it is well known that detection is inherently weaker than defense in terms of resisting adversarial attacks.…”
Section: Related Work
confidence: 99%
“…A few studies employ vanilla PCA to counter adversarial attacks for the image classification problem. Hendrycks & Gimpel [39] and Jere et al. [40] utilized PCA to detect adversarial examples. Li & Li [41] performed PCA in the feature domain and used a cascade classifier to detect adversarial examples.…”
Section: A9 Comparison With the Defenses That Use Dimensionality Redu…
confidence: 99%
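The PCA-based detection idea summarized in the statement above can be sketched roughly as follows. This is a hedged illustration of the general technique, not any cited paper's actual detector: the training data, the subspace dimension `k`, and the tail-energy score are all hypothetical choices for the example.

```python
import numpy as np

def fit_pca(X, k):
    """Fit PCA on flattened inputs X of shape (n_samples, n_features);
    return the mean and the top-k principal directions (k, n_features)."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def tail_energy_score(x, mu, components):
    """Detection score: squared energy of x lying outside the top-k
    PCA subspace fit on clean data. Perturbed inputs tend to carry
    unusually high tail energy."""
    r = x - mu
    proj = components.T @ (components @ r)  # projection onto the subspace
    return float(np.sum((r - proj) ** 2))

rng = np.random.default_rng(1)
train = rng.random((200, 64))          # stand-in clean training set
mu, comps = fit_pca(train, k=8)

clean = rng.random(64)
perturbed = clean + 0.5 * rng.standard_normal(64)  # stand-in perturbation
print("clean score:    ", tail_energy_score(clean, mu, comps))
print("perturbed score:", tail_energy_score(perturbed, mu, comps))
```

A detector of this kind would flag an input whose score exceeds a threshold calibrated on clean data; the cascade-classifier variant mentioned above replaces that single threshold with a staged decision in the feature domain.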
“…Meanwhile, there exists a thriving research corpus dedicated to deeply studying and understanding adversarial examples themselves. Ilyas et al. (2019) presented a feature-based analysis of adversarial examples, while Jere et al. (2019) presented preliminary work on PCA-based analysis of adversarial examples, which was followed up by Jere et al. (2020), offering a nuanced view of the same through the lens of SVD. Ortiz-Jimenez et al. (2020) derive insights from the margins of classifiers.…”
Section: Introduction
confidence: 99%