2021
DOI: 10.1002/int.22680
ELAA: An efficient local adversarial attack using model interpreters

Abstract: Modern deep neural networks are highly vulnerable to adversarial examples, which has drawn growing research attention to crafting powerful adversarial examples. Most generation algorithms create global perturbations that degrade the visual quality of the adversarial examples. To mitigate this drawback, some attacks attempt to generate local perturbations. However, existing local adversarial attacks are time-consuming, and the generated adversarial examples are still distinguishable from clean i…
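The abstract outlines the general recipe behind interpreter-guided local attacks: use a model interpreter to locate the image region that drives the prediction, then confine the perturbation to that region. The paper's exact interpreter and optimizer are not shown on this page, so the sketch below is only a minimal PyTorch illustration under assumed choices: a plain gradient-saliency map standing in for the interpreter, an iterative FGSM-style inner attack, and hypothetical names and hyper-parameters (local_attack, top_frac, steps are illustrative, not from the paper).

```python
import torch
import torch.nn.functional as F

def local_attack(model, x, y, epsilon=8 / 255, steps=10, top_frac=0.1):
    """Sketch of an interpreter-guided local attack (not the paper's ELAA).

    1. Use a gradient saliency map as a stand-in 'model interpreter'
       to rank pixels by their influence on the prediction.
    2. Keep only the top fraction of pixels as an attack mask.
    3. Run an iterative FGSM-style attack restricted to that mask,
       so the perturbation stays local instead of global.
    """
    # Step 1: gradient saliency as a stand-in interpreter.
    x_s = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_s), y).backward()
    saliency = x_s.grad.abs().sum(dim=1, keepdim=True)  # (N, 1, H, W)

    # Step 2: binary mask over the most salient pixels per sample.
    k = max(1, int(top_frac * saliency[0].numel()))
    thresh = saliency.flatten(1).topk(k, dim=1).values[:, -1]
    mask = (saliency >= thresh.view(-1, 1, 1, 1)).float()

    # Step 3: masked iterative gradient attack.
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        F.cross_entropy(model(x_adv), y).backward()
        with torch.no_grad():
            # Update only inside the salient region.
            x_adv = x_adv + (epsilon / steps) * x_adv.grad.sign() * mask
            x_adv = x_adv.clamp(0.0, 1.0)
        x_adv = x_adv.detach()
    return x_adv
```

Restricting the update to the saliency mask is what keeps the perturbation local; the trade-off is that fewer attackable pixels typically require more iterations or a larger step size to reach the same attack success rate.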

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
1

Citation Types

0
14
0

Year Published

2022
2022
2022
2022

Publication Types

Select...
5

Relationship

0
5

Authors

Journals

citations
Cited by 10 publications
(14 citation statements)
references
References 32 publications
0
14
0
Order By: Relevance
“…Universal perturbations have a remarkable generalization property and can fool new images with high probability. In autonomous driving systems that use DNNs to identify traffic signs, attackers can cause “stop” signs to be classified as “speed limit” signs.38,39 In recent years, more and more attack algorithms have emerged.40,41 In this paper, we focus on white-box adversarial attacks, because the standard criterion for evaluating defense techniques is their ability to resist white-box attacks.…”
Section: Related Work (mentioning)
confidence: 99%
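For reference, the white-box baseline the quoted passage alludes to can be written in a few lines. This is a standard one-step FGSM sketch (the epsilon value is an illustrative choice); it produces exactly the kind of global perturbation that local attacks such as ELAA are designed to avoid.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=8 / 255):
    """One-step white-box FGSM: move every pixel by epsilon in the
    direction of the sign of the loss gradient, then clamp to the
    valid image range. The perturbation covers the whole image."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```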
“…By exploiting the relationship between perturbation and object contours, FineFool achieves better attack performance with less perturbation. Guo et al.10 proposed a local attack method (ELAA) that uses a model-interpretation approach to find the identification region, which is then combined with an existing adversarial attack to generate local perturbations. Beyond the above works, a great deal of research has also been presented, aiming at obtaining AEs that remain effective against unknown and better-defended DNNs.31,38…”
Section: Related Work (mentioning)
confidence: 99%
“…In the past decade, a series of studies has shown that DNNs are vulnerable to adversarial examples (AEs) crafted by imposing designed perturbations on original images.5-10 These perturbations are imperceptible to human beings but can easily fool DNNs, which poses invisible threats to vision-based automatic decision-making.11-15 Consequently, the robustness of DNNs faces great challenges in real-world applications.…”
Section: Introduction (mentioning)
confidence: 99%
“…In [18,19], a disturbance observer (DO) for second-order MIAs was designed to handle consensus under external disturbances. A similar idea was applied to modern deep neural networks in [20], robust finite-time consensus in [21], and the inversion model in [22]. In [23,24], a control method was proposed to achieve robust consensus under time delays and exogenous disturbances.…”
(mentioning)
confidence: 99%