2018
DOI: 10.1109/access.2018.2807385

Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey

Abstract: Deep learning is at the heart of the current rise of artificial intelligence. In the field of Computer Vision, it has become the workhorse for applications ranging from self-driving cars to surveillance and security. Whereas deep neural networks have demonstrated phenomenal success (often beyond human capabilities) in solving complex problems, recent studies show that they are vulnerable to adversarial attacks in the form of subtle perturbations to inputs that lead a model to predict incorrect outputs. For ima…
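As a concrete illustration of the "subtle perturbations" the abstract refers to, the sketch below implements a one-step fast gradient sign method (FGSM) perturbation, one of the attack families the survey reviews. The model, inputs, and epsilon value are illustrative assumptions, not code from the paper.

```python
# Minimal FGSM-style sketch (assumed PyTorch model and data; illustrative only).
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, y, epsilon=0.03):
    """Return x plus a small signed-gradient perturbation that increases the loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the direction that most increases the loss, then keep pixels in [0, 1].
    x_adv = (x_adv + epsilon * x_adv.grad.sign()).clamp(0, 1)
    return x_adv.detach()
```

Even with a small epsilon, the perturbed image is typically indistinguishable from the original to a human yet can change the model's prediction, which is the vulnerability the survey catalogues.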

Cited by 1,719 publications (1,038 citation statements)
References 424 publications
“…In safety-critical applications like automatic drug injection in humans, guidance and navigation of autonomous vehicles, or oil well drilling, a black-box approach will be unacceptable. In fact, the vulnerability of DNNs has been exposed beyond doubt in several recent works [137], [138], [139]. These models can also be extremely biased depending upon the data they were trained on.…”
Section: B. Data-driven Modeling
confidence: 99%
“…Critically, many high‐performing algorithms (eg, deep neural networks, proprietary models) are “black boxes,” since it is currently not understood how these models combine features to output the severity of a disorder. This creates a lack of trust since they have been shown to be fooled by adversarial attacks (ie, perceptually small manipulations in the inputs that create incorrect outputs). This is why a recent European Union regulation requires a right to obtain an explanation of life‐affecting decisions from automated algorithms such as clinical assessments, and DARPA has released an Explainable Artificial Intelligence program to tackle these challenges (“Explain and interpret models to reduce bias and improve scientific understanding” guideline in Section 4).…”
Section: Introduction
confidence: 99%
“…1 A) [39]. Worse, adversarial examples are often transferable across algorithms (see [1] for a recent review), and certain "universal" perturbations fool any algorithm.…”
Section: Introduction
confidence: 99%