2017
DOI: 10.48550/arxiv.1707.08945
Preprint
Robust Physical-World Attacks on Deep Learning Models

Kevin Eykholt, Ivan Evtimov, Earlence Fernandes, et al.
Cited by 65 publications (92 citation statements)
“…However, these methods often produce models that are not easily interpretable, as the relationship between the input and output data is convoluted. Furthermore, deep learning models are often brittle, i.e., small changes in input data can lead to dramatic differences in their predictions 12,13 . Interpretability helps confirm the model is behaving reasonably and, in some situations, can even elucidate the underlying physics of the task at hand.…”
Section: Graphical Abstract
confidence: 99%
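The brittleness described in the excerpt above can be illustrated with the classic fast gradient sign method (FGSM): a perturbation too small to matter to a human can shift a classifier's output. The sketch below is a minimal, hypothetical example using an untrained stand-in linear model and an arbitrary epsilon; it is not the physical attack from the cited paper, and with this toy setup the prediction change is likely but not guaranteed.

```python
# Minimal FGSM sketch: a small, gradient-aligned input perturbation can change
# a classifier's prediction. Model, input, label, and epsilon are illustrative
# assumptions, not quantities from the cited paper.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Linear(28 * 28, 10)              # hypothetical stand-in classifier
x = torch.rand(1, 28 * 28, requires_grad=True)    # clean input
y_true = torch.tensor([3])                        # assumed ground-truth label

# Loss on the clean input, then gradients with respect to the input itself.
logits = model(x)
loss = F.cross_entropy(logits, y_true)
loss.backward()

# FGSM step: move each pixel slightly in the direction that increases the loss.
epsilon = 0.1
x_adv = (x + epsilon * x.grad.sign()).clamp(0.0, 1.0).detach()

print("prediction on clean input:    ", logits.argmax(dim=1).item())
print("prediction on perturbed input:", model(x_adv).argmax(dim=1).item())
```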
“…In a black-box attack scenario, hackers can create adversarial examples without knowledge of a target model's parameters by using another network to generate transferable attacks [25]. Furthermore, physical-world adversarial attacks have also fooled classification networks, including printed adversarial examples recaptured with a cell phone camera [3] and stop signs modified with tape perturbations mimicking graffiti [18]. These examples demonstrate the high-risk nature of adversarial examples, as well as the need to implement defenses against such attacks.…”
Section: Introduction
confidence: 99%
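The stop-sign attack referenced in the excerpt above restricts the perturbation to sticker-like regions so that it resembles ordinary graffiti. The sketch below is a rough, hypothetical illustration of that masked-perturbation idea, using an untrained stand-in classifier, an arbitrary rectangular mask, and arbitrary hyperparameters; it is not the optimization procedure from the cited paper.

```python
# Sketch of a masked, sticker-style targeted perturbation: only pixels inside a
# fixed "tape" mask are modified while the rest of the image stays untouched.
# Model, mask geometry, target class, and step counts are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

model = torch.nn.Linear(32 * 32 * 3, 10)   # hypothetical stand-in classifier
image = torch.rand(1, 3, 32, 32)           # clean sign image (placeholder data)
y_target = torch.tensor([5])               # attacker's desired target class

# Binary mask confining the perturbation to two rectangular "tape" strips.
mask = torch.zeros(1, 3, 32, 32)
mask[:, :, 8:12, 4:28] = 1.0
mask[:, :, 20:24, 4:28] = 1.0

delta = torch.zeros_like(image, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=0.05)

for _ in range(100):
    optimizer.zero_grad()
    perturbed = (image + mask * delta).clamp(0.0, 1.0)
    logits = model(perturbed.flatten(1))
    # Targeted attack: push the prediction toward y_target, editing only the mask.
    loss = F.cross_entropy(logits, y_target)
    loss.backward()
    optimizer.step()

adv = (image + mask * delta).clamp(0.0, 1.0).detach()
print("prediction on perturbed image:", model(adv.flatten(1)).argmax(dim=1).item())
```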
“…Many lack the ability to "explain their autonomous decisions to human users" [37] and some researchers have even suggested that there is an inverse relationship between system prediction quality and explainability [37]. Cyber-attackers have leveraged these weaknesses and neural networks' lack of context knowledge to develop adversarial neural network attacks [38] against systems providing services such as voice [39] and facial recognition [40], among others. Neural network algorithms are among those identified by Doyle [41] as "weapons of math destruction" and by Noble [42] as "algorithms of oppression".…”
Section: Neural Networks and Their Explainability Issues
confidence: 99%