Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC/FSE 2020)
DOI: 10.1145/3368089.3409750
DeepSearch: a simple and effective blackbox attack for deep neural networks

Abstract: Although deep neural networks have been very successful in image classification tasks, they are prone to adversarial attacks. A wide variety of techniques has emerged for generating adversarial inputs, including black- and whitebox attacks on neural networks. In this paper, we present DeepSearch, a novel fuzzing-based, query-efficient, blackbox attack for image classifiers. Despite its simplicity, DeepSearch is shown to be more effective in finding adversarial inputs than state-of-the-art blackbox approaches. …
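As a rough, hedged illustration of what a query-efficient blackbox attack looks like in code (a generic random-fuzzing sketch, not the actual DeepSearch algorithm), consider the following, where `query_model`, the query budget, and the L∞ bound `eps` are all hypothetical:

```python
import numpy as np

def blackbox_fuzz_attack(x, true_label, query_model, eps=8/255, max_queries=1000, seed=0):
    """Toy L-infinity blackbox attack by random sign-flip fuzzing (illustrative only).

    x            -- original image as a float array with values in [0, 1]
    true_label   -- index of the correct class
    query_model  -- hypothetical function mapping an image to class probabilities;
                    only this soft-label query access is assumed (no gradients)
    """
    rng = np.random.default_rng(seed)
    # Start from a random corner of the L-infinity ball of radius eps around x.
    adv = np.clip(x + eps * rng.choice([-1.0, 1.0], size=x.shape), 0.0, 1.0)
    score = query_model(adv)[true_label]

    for _ in range(max_queries):
        # Flip the perturbation sign at one random coordinate.
        idx = tuple(rng.integers(0, s) for s in x.shape)
        candidate = adv.copy()
        candidate[idx] = np.clip(2.0 * x[idx] - adv[idx], 0.0, 1.0)
        probs = query_model(candidate)
        if np.argmax(probs) != true_label:
            return candidate                  # adversarial input found
        if probs[true_label] < score:         # keep mutations that reduce confidence
            adv, score = candidate, probs[true_label]
    return None                               # query budget exhausted
```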

Cited by 33 publications (20 citation statements) · References 51 publications
“…Subtle imperceptible perturbations of inputs, known as adversarial examples, can change their prediction results. Various algorithms [Carlini and Wagner 2017b; Goodfellow et al. 2015; Madry et al. 2018; Tabacof and Valle 2016; Zhang et al. 2020] have been proposed that can effectively find adversarial examples. Research on developing defense mechanisms against adversarial examples [Carlini and Wagner 2016, 2017a,b; Cornelius 2019; Engstrom et al. 2018; Goodfellow et al. 2015; Huang et al. 2015; Mirman et al. 2019] is also active.…”
Section: Robustness of Deep Neural Networks
confidence: 99%
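For concreteness, the simplest of the cited attack algorithms, the fast gradient sign method of Goodfellow et al. (2015), takes a single step of size eps along the sign of the input gradient of the loss. A minimal PyTorch-style sketch (the `model` interface, batched tensors, and the [0, 1] pixel range are assumptions):

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, label, eps=8/255):
    """Fast gradient sign method: x_adv = clip(x + eps * sign(grad_x loss))."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # loss of the true label on input x
    loss.backward()
    x_adv = x + eps * x.grad.sign()           # one L-infinity step that increases the loss
    return torch.clamp(x_adv, 0.0, 1.0).detach()
```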
“…Odena et al [Odena et al 2019] were the first to develop coverage-guided fuzzing for neural networks. Zhang et al [Zhang et al 2020] proposed a blackbox-fuzzing technique to test their robustness.…”
Section: Robustness Of Deep Neuralmentioning
confidence: 99%
“…The main property of adversarially perturbed images is that, from a human point of view, they are very close to the original benign image, yet they trick the model into predicting the wrong class. While the notion of “closeness from a human perspective” is hard to quantify, there is general consensus around using the L0, L2, and L∞ norms as proxies for measuring adversarial perturbations [31,23,22,11,32].…”
Section: Introduction
confidence: 99%
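These three norms have simple closed forms on the perturbation delta = x_adv - x; a minimal numpy sketch (function and variable names are illustrative):

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Common proxies for how perceptible an adversarial perturbation is."""
    delta = (x_adv - x).ravel()
    return {
        "L0":   int(np.count_nonzero(delta)),  # number of coordinates changed
        "L2":   float(np.linalg.norm(delta)),  # Euclidean size of the change
        "Linf": float(np.abs(delta).max()),    # largest single-coordinate change
    }
```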
“…EvoBA focuses on the L0 norm and is fast, query-efficient, and effective. Therefore, we propose using it together with similarly fast and efficient methods that focus on other norms, such as SimBA, which focuses on L2, and DeepSearch [32], which focuses on L∞, to empirically evaluate the robustness of image classifiers. These methods can act together as a fast and general toolbox to be used while developing systems that involve image classification models.…”
Section: Introduction
confidence: 99%
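A hedged sketch of how such a per-norm toolbox might be combined to estimate empirical robust accuracy (the attack interface, `model_predict`, and the dataset are placeholders, not the actual EvoBA/SimBA/DeepSearch implementations):

```python
def empirical_robust_accuracy(model_predict, attacks, dataset, budget=1000):
    """Fraction of correctly classified inputs on which every attack fails.

    model_predict -- maps an image to a predicted class index
    attacks       -- iterable of attack functions, one per norm; each returns an
                     adversarial image or None within the given query budget
    dataset       -- iterable of (image, label) pairs
    """
    robust = total = 0
    for x, label in dataset:
        if model_predict(x) != label:
            continue                      # skip inputs the model already gets wrong
        total += 1
        if all(attack(x, label, model_predict, budget) is None for attack in attacks):
            robust += 1
    return robust / max(total, 1)
```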
“…Odena et al. [24] first design a coverage-guided fuzzing framework for testing deep neural networks. Zhang et al. [28] develop a fuzzing-based blackbox adversarial attack for DNNs. Moreover, Khmelnitsky et al. [9] design a technique to extract a surrogate automaton model to analyze and verify regular properties of recurrent neural networks.…”
confidence: 99%