2019 IEEE Fourth International Conference on Data Science in Cyberspace (DSC)
DOI: 10.1109/dsc.2019.00078
A Black-Box Approach to Generate Adversarial Examples Against Deep Neural Networks for High Dimensional Input

Cited by 3 publications (2 citation statements)
References 9 publications
“…Szegedy et al. [12] first pointed out a major weakness of DNNs in the context of image classification: by adding small perturbations that the human eye may not be able to perceive to the input samples, the neural network classifier may be fooled into yielding inaccurate predictions. Moreover, such a perturbation may transfer between different DNN models [13], increasing the probability of being fooled. These misclassified samples are called adversarial examples.…”
Section: B. Related Work
confidence: 99%
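The statement above describes the core phenomenon: a perturbation too small for the human eye to notice can flip a classifier's prediction. A minimal toy sketch (not the paper's method — the linear "classifier", weights, and input below are all hypothetical) shows how a tiny per-feature shift along the gradient sign changes the sign of a decision score:

```python
# Toy illustration of an adversarial perturbation on a linear "classifier"
# score(x) = w . x; a small shift of each feature along sign(w) (the
# gradient of the score w.r.t. the input) flips the predicted sign.

def score(w, x):
    # Dot product of weights and input: the classifier's decision score.
    return sum(wi * xi for wi, xi in zip(w, x))

def fgsm_like_perturb(w, x, eps):
    # Move each feature by eps in the gradient-sign direction to raise the score.
    return [xi + eps * (1 if wi > 0 else -1) for wi, xi in zip(w, x)]

w = [0.5, -0.3, 0.8, -0.1]   # hypothetical model weights
x = [0.1, 0.2, -0.4, 0.3]    # hypothetical input "image"
print(score(w, x))           # negative score: original class
x_adv = fgsm_like_perturb(w, x, eps=0.25)
print(score(w, x_adv))       # positive score: prediction flipped
```

The same idea scales to high-dimensional images, where the per-pixel change stays imperceptible even as the summed effect on the score is large.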
“…Adversarial examples are then constructed using the known substitute parameters. As a boundary-based attack, the approach proposed by Song et al. (2019) depends on computing the boundary of a classification result using a combination of linear fine-grained and Fibonacci searches. A zeroth-order algorithm by Chen et al. (2017) is utilized to estimate the gradient.…”
confidence: 99%
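The zeroth-order idea mentioned above (Chen et al., 2017) estimates gradients from black-box queries alone, without access to model internals. A minimal sketch, assuming a coordinate-wise symmetric finite difference (the function `f` below is an arbitrary example, not from the paper):

```python
# Hedged sketch of zeroth-order gradient estimation: approximate each
# partial derivative df/dx_i with a symmetric finite difference, using
# only black-box evaluations of f (no backpropagation).

def zeroth_order_grad(f, x, h=1e-4):
    grad = []
    for i in range(len(x)):
        x_plus = list(x)
        x_minus = list(x)
        x_plus[i] += h    # query f slightly above x_i
        x_minus[i] -= h   # and slightly below x_i
        grad.append((f(x_plus) - f(x_minus)) / (2 * h))
    return grad

# Example: f(x) = x0^2 + 3*x1; the true gradient at (1, 2) is (2, 3).
f = lambda x: x[0] ** 2 + 3 * x[1]
g = zeroth_order_grad(f, [1.0, 2.0])
print(g)  # approximately [2.0, 3.0]
```

For a D-dimensional input this costs 2D queries per gradient estimate, which is why high-dimensional inputs (the setting of the paper above) make black-box attacks expensive and motivate cheaper search strategies such as the boundary-based one described in the citation.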