2022 · Preprint
DOI: 10.48550/arxiv.2201.01850
On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving

Cited by 6 publications (7 citation statements) · References 41 publications
“…Recently, a plethora of new semantic segmentation methods [16], [17], [18], [19], [20], [21], [22], [23], [24] for visual scene understanding has emerged in the literature, eliciting impressive results. For instance, Nesti et al. [19] presented a method that evaluates the robustness of semantic segmentation approaches for autonomous vehicles.…”
Section: A. Background and Related Work
Mentioning confidence: 99%
“…After that, they test it against a simulator. Rossolini et al. [28] test the robustness of semantic segmentation models by applying adversarial colored patches to both simulated and real images. The authors present extensive experiments to validate the proposed attack and defense approaches in real-world scenarios.…”
Section: Related Work
Mentioning confidence: 99%
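The statement above describes a patch-based attack: a colored patch is optimized to degrade a segmentation model's per-pixel predictions wherever the patch appears in the image. A minimal sketch of this idea is below, assuming a PyTorch segmentation model that maps an image batch to per-pixel class logits; the function name, patch placement, and hyperparameters are illustrative assumptions, not the cited paper's actual implementation.

```python
# Minimal sketch (assumption, not the cited authors' code): optimize an
# adversarial colored patch to maximize segmentation loss on the images
# it is pasted into. `model` is assumed to return (N, C, H, W) logits.
import torch
import torch.nn.functional as F

def optimize_patch(model, images, labels, patch_size=(3, 64, 64),
                   steps=200, lr=0.05):
    """Gradient-ascent patch attack against a semantic segmentation model."""
    patch = torch.rand(patch_size, requires_grad=True)  # random colored init
    opt = torch.optim.Adam([patch], lr=lr)
    _, _, H, W = images.shape
    ph, pw = patch_size[1], patch_size[2]
    top, left = (H - ph) // 2, (W - pw) // 2  # fixed central placement

    for _ in range(steps):
        x = images.clone()
        # Paste the patch (clamped to valid pixel range) into every image.
        x[:, :, top:top + ph, left:left + pw] = patch.clamp(0, 1)
        logits = model(x)                        # (N, C, H, W) class scores
        loss = F.cross_entropy(logits, labels)   # per-pixel segmentation loss
        opt.zero_grad()
        (-loss).backward()   # ascend the loss to degrade predictions
        opt.step()
    return patch.detach().clamp(0, 1)
```

Note that this digital sketch omits the physical-realizability side of the real-world evaluation described above (printing the patch and placing it in the actual scene), which the cited work addresses through experiments on both simulated and real images.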
“…Our approach differs from Rossolini et al. [28] since our focus is on the object detection task. Our approach also differs from [29], [27], [30], since these related works focus on an in-depth analysis of a single type of OOD data (adversarial examples, novelty, and distributional shift, respectively).…”
Section: Related Work
Mentioning confidence: 99%
“…This motivates us to investigate the robustness of these approaches under malicious attacks. Previous research has demonstrated that 3D perception systems can be easily compromised by adversarial examples [4, 30, 35], posing potential safety threats. However, these works primarily focus on causing a few targeted models to crash under specific conditions.…”
Section: Introduction
Mentioning confidence: 99%