2017
DOI: 10.48550/arxiv.1707.03501
Preprint

NO Need to Worry about Adversarial Examples in Object Detection in Autonomous Vehicles

Abstract: It has been shown that most machine learning algorithms are susceptible to adversarial perturbations. Slightly perturbing an image in a carefully chosen direction in the image space may cause a trained neural network model to misclassify it. Recently, it was shown that physical adversarial examples exist: printing perturbed images then taking pictures of them would still result in misclassification. This raises security and safety concerns. However, these experiments ignore a crucial property of physical object…

Cited by 92 publications (79 citation statements)
References 20 publications
“…2) Image Transformation: Different distances, angles, and illuminations will result in image transformations that impact the robustness of the physical AEs. AEs generated through the L-BFGS attack [5], the fast gradient sign method [7], and the C&W attack [6] often lose their adversarial nature once subjected to minor transformations [35,36]. To address this challenge, Athalye et al. [12] introduced Expectation Over Transformation (EOT).…”
Section: Cross-domain Conversion
confidence: 99%
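To make the EOT idea in the snippet above concrete: instead of minimizing the loss on a single image, the attacker minimizes the *expected* loss over a distribution of transformations, so the perturbation survives changes in distance, angle, and lighting. Below is a minimal sketch, assuming a differentiable PyTorch classifier; the transformation set, step size, sample count, and perturbation budget are illustrative placeholders, not the settings of the original work.

```python
# Minimal sketch of Expectation Over Transformation (EOT).
# Assumptions: `model` is a differentiable PyTorch classifier taking a
# (1, 3, H, W) tensor; the transformation distribution, budget, and
# hyperparameters below are illustrative only.
import torch
import torchvision.transforms as T

def eot_attack(model, image, target, steps=100, lr=0.01, samples=8):
    """Targeted attack: optimize a perturbation whose expected loss over
    random transformations is low, so it remains adversarial under
    distance/angle/lighting changes."""
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    # Random rotations and resized crops stand in for the physical
    # transformation distribution; a real attack would also model
    # printing, perspective, and camera noise.
    transform = T.Compose([
        T.RandomRotation(degrees=15),
        T.RandomResizedCrop(image.shape[-1], scale=(0.8, 1.0)),
    ])
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        # Monte Carlo estimate of E_t[ loss(model(t(x + delta)), target) ]
        loss = torch.stack([
            loss_fn(model(transform(image + delta)), target)
            for _ in range(samples)
        ]).mean()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-0.05, 0.05)  # keep the perturbation small
    return (image + delta).detach()
```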
“…The L_0 attack was the first published attack that caused targeted misclassification on the ImageNet dataset. Although most digital AEs lose their adversarial nature in the physical environment [35,36], these three classic methods are still used for generating physical AEs.…”
Section: Related Work
confidence: 99%
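Of the classic methods cited above, the fast gradient sign method is the simplest: a single gradient step in the sign direction. A minimal sketch, again assuming a PyTorch classifier; `epsilon` is an illustrative budget.

```python
# Minimal sketch of the fast gradient sign method (FGSM), one of the
# classic attacks cited above. `model` and `epsilon` are placeholders.
import torch

def fgsm(model, image, label, epsilon=0.03):
    """Single-step untargeted attack: move each pixel by epsilon in the
    direction that increases the classification loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = torch.nn.functional.cross_entropy(model(image), label)
    loss.backward()
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()  # stay in the valid pixel range
```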
“…For this reason, significant efforts have been made to improve the robustness of models without resorting to AT. Proposed solutions cover a wide range of techniques based on curvature regularization [17], robust optimization to improve local stability [23], the use of additional unlabeled data [4], local linearization [21], Parseval networks [5], defensive distillation [19], model ensembles [18], channel-wise activation suppressing [2], feature denoising [29], self-supervised learning for adversarial purification [25], and input manipulations [7,9,15]. All the listed techniques, except those based on input manipulations, require training the model or an auxiliary module from scratch.…”
Section: Related Work
confidence: 99%
“…In [15] the authors analyze the effect of image rescaling on adversarial examples. [7] explores the possibility of improving robustness through JPG (re)compression, based on the intuition that adversarial perturbations are unlikely to leave an image in the space of JPG images.…”
Section: Related Work
confidence: 99%
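The (re)compression defense described in that snippet amounts to round-tripping the input through lossy JPEG encoding before classification, so that high-frequency perturbations are quantized away. A minimal sketch, assuming a Pillow-based image pipeline; the quality setting and the helper name `jpeg_purify` are illustrative.

```python
# Sketch of the JPG (re)compression defense: encode and decode the input
# as JPEG before classification. The quality level is an illustrative
# choice, not a recommendation from the cited work.
import io
from PIL import Image

def jpeg_purify(pil_image: Image.Image, quality: int = 75) -> Image.Image:
    """Round-trip an image through lossy JPEG encoding."""
    buf = io.BytesIO()
    pil_image.convert("RGB").save(buf, format="JPEG", quality=quality)
    buf.seek(0)
    return Image.open(buf).copy()  # copy so the in-memory buffer can be freed
```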