2019
DOI: 10.1109/tnnls.2019.2933524

Adversarial Examples: Opportunities and Challenges

Abstract: Deep neural networks (DNNs) have surpassed human performance in image recognition, speech processing, autonomous driving, and medical diagnosis. However, recent studies indicate that DNNs are vulnerable to adversarial examples (AEs), which are crafted by attackers to fool deep learning models. Unlike real examples, AEs mislead a model into predicting incorrect outputs while remaining nearly indistinguishable to the human eye, and they therefore threaten security-critical deep learning applications. In recent years, t…

Cited by 135 publications (108 citation statements)
References 65 publications

“…The two major lines of research around adversarial examples have been: (1) generating AEs and (2) defending against AEs. This paper will not cover either, and the reader is referred to recent surveys [1,6,40]. In parallel to those two lines, however, a significant body of work has been carried out to delve into the root causes of AEs and their implications.…”
Section: Previous Work (mentioning)
confidence: 99%
“…The probability of class "1" before and after perturbation: before the perturbation, the score of class "1" is 5%; after adding or subtracting 0.5 from each dimension of the original sample in a specific direction, the score rises to 88% (Image Credit: Zhang et al. [59]).…”
Section: Causes of Adversarial Examples (mentioning)
confidence: 99%
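
The mechanism described in this quote is the classic linear explanation of adversarial examples: each dimension's ±0.5 nudge contributes almost nothing on its own, but when every nudge is aligned with the model's weights, the contributions accumulate into a large shift of the output score. Below is a minimal numerical sketch of this effect, assuming a hypothetical 100-dimensional logistic model; the dimensionality, weight scale, and bias are illustrative choices, and only the ±0.5 step and the 5% → 88% shift come from the quoted description.

# A minimal sketch of the linear accumulation effect described above; the
# 100-dimensional logistic model, weight scale, and bias are illustrative
# assumptions, not values from the cited paper.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
d = 100                             # assumed input dimensionality
w = rng.normal(0.0, 0.125, size=d)  # hypothetical weight vector, E|w_i| ~ 0.1
x = rng.normal(0.0, 1.0, size=d)    # an arbitrary "clean" input
b = np.log(0.05 / 0.95) - w @ x     # bias chosen so the clean score is exactly 5%

eps = 0.5                           # the +/-0.5 per-dimension step from the quote
x_adv = x + eps * np.sign(w)        # push every dimension in its "worst" direction

print(f"clean score for class '1':     {sigmoid(w @ x + b):.1%}")
print(f"perturbed score for class '1': {sigmoid(w @ x_adv + b):.1%}")
# The logit shifts by eps * sum(|w_i|) ~= 0.5 * 100 * 0.1 = 5: each dimension
# contributes almost nothing, but 100 tiny contributions add up to a large shift.

With the seed above, the perturbed score lands near the quoted 88%; the exact value depends on the random draw of the weights, but the qualitative jump from a small to a large score is robust.
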
“…Adversarial samples have three basic characteristics [59]: transferability, a regularization effect, and adversarial instability.…”
Section: Characteristics of Adversarial Examples (mentioning)
confidence: 99%
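
Of these three, transferability (an AE crafted against one model often fools a second, independently trained model) lends itself to a quick empirical check. The sketch below is an illustrative setup, not an experiment from the cited paper: two scikit-learn MLPs are trained on the digits dataset with different random initializations, an FGSM-style attack is run against the first using finite-difference gradient estimates (scikit-learn exposes no gradients with respect to inputs), and the resulting AEs are evaluated on the second. The dataset, architectures, eps, and step size h are all assumptions.

# A self-contained sketch of transferability (assumed setup, not the authors'
# experiment): AEs crafted against one model also degrade a second,
# independently trained model that the attack never queried.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X = X / 16.0  # scale 8x8 pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Two models, same data, different random initializations.
src = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=1).fit(X_tr, y_tr)
tgt = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=2).fit(X_tr, y_tr)

def fgsm_finite_diff(model, X, y, eps, h=1e-3):
    # FGSM-style attack using a finite-difference estimate of d(loss)/dx,
    # since scikit-learn exposes no input gradients.
    n, d = X.shape
    base = -np.log(model.predict_proba(X)[np.arange(n), y] + 1e-12)
    grad = np.zeros_like(X)
    for j in range(d):  # one extra forward pass per input dimension
        Xp = X.copy()
        Xp[:, j] += h
        bumped = -np.log(model.predict_proba(Xp)[np.arange(n), y] + 1e-12)
        grad[:, j] = (bumped - base) / h
    return np.clip(X + eps * np.sign(grad), 0.0, 1.0)

X_adv = fgsm_finite_diff(src, X_te, y_te, eps=0.2)
print("source model accuracy, clean:      ", src.score(X_te, y_te))
print("source model accuracy, adversarial:", src.score(X_adv, y_te))
print("target model accuracy, clean:      ", tgt.score(X_te, y_te))
print("target model accuracy, adversarial:", tgt.score(X_adv, y_te))

If transferability holds in this setup, the target model's accuracy on X_adv drops well below its clean accuracy even though the attack only ever queried the source model.
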
“…With the rise of neural networks, adversarial examples and their mitigation [7,16,39] have become the subject of intense research, especially in the image domain; see Section 2.1. Analogously to images, one can generate adversarial examples for audio data; see Section 2.2.…”
Section: Related Work (mentioning)
confidence: 99%
“…The first adversarial examples against non-linear algorithms were generated by Biggio et al. [4] and Szegedy et al. [31] in 2013. Since then, a wide variety of attacks on many datasets, mostly of images, have been presented (for an overview see, e.g., [39]). Later, adversarial audio began to be analysed as well [10,12,17,25,26,33,35].…”
Section: Related Work (mentioning)
confidence: 99%