2022 IEEE Intelligent Vehicles Symposium (IV) 2022
DOI: 10.1109/iv51971.2022.9827143
Traffic Sign Classifiers Under Physical World Realistic Sticker Occlusions: A Cross Analysis Study

Abstract: Recent adversarial attacks with real-world applications are capable of deceiving deep neural networks (DNNs), often appearing as printed stickers applied to objects in the physical world. Though achieving high success rates in lab tests and limited field tests, such attacks have not been tested on multiple DNN architectures with a standard setup to unveil the common robustness and weakness points of both the DNNs and the attacks. Furthermore, realistic-looking stickers applied by normal people as acts of vandalis…
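The study evaluates classifiers under sticker occlusions applied to traffic signs. A minimal sketch of one way such an occlusion could be simulated for a robustness test is shown below; the function name, parameters, and flat-colored rectangular patch are illustrative assumptions, not the paper's actual sticker application methods:

```python
import numpy as np

def apply_random_sticker(image, sticker_color=(255, 255, 0),
                         sticker_frac=0.15, rng=None):
    """Occlude a random rectangular patch of `image` with a flat-colored
    'sticker' -- a crude stand-in for physical sticker occlusions.
    `image` is an HxWx3 uint8 array; returns a new occluded copy."""
    rng = np.random.default_rng(rng)
    h, w, _ = image.shape
    # Sticker side lengths as a fraction of the image dimensions.
    sh = max(1, int(h * sticker_frac))
    sw = max(1, int(w * sticker_frac))
    # Random top-left corner such that the sticker fits inside the image.
    y = int(rng.integers(0, h - sh + 1))
    x = int(rng.integers(0, w - sw + 1))
    out = image.copy()
    out[y:y + sh, x:x + sw] = sticker_color
    return out
```

In a cross-architecture evaluation like the one the abstract describes, such occluded images would be fed to each candidate DNN and the drop in classification accuracy compared against the clean baseline.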

Cited by 4 publications (2 citation statements). References 20 publications (43 reference statements).
“…Intentional threats occur when bad actors maliciously take advantage of the shortcomings and weaknesses of AI techniques with the goal of interfering with the AI system and impairing safety-critical operations. Such attacks include painting the road to confuse drivers or placing stickers on a stop sign to obscure the view [182], [183], [184], [185], [186], and [187]. These modifications may cause the AI system to misclassify objects, which may therefore cause the AV to act in a potentially hazardous manner.…”
Section: AI Vulnerabilities in Autonomous Driving
confidence: 99%
“…One recent work proposed three sticker application methods, namely RSA, SSA and MCSA, that can deceive the traffic sign recognition DNNs with realistic-looking stickers [4]. Another attack included painting the road, which targeted deep neural network models for end-to-end autonomous driving control [5].…”
Section: Adversarial Attacks on AVs' Perception
confidence: 99%