2021
DOI: 10.48550/arxiv.2101.06784
Preprint
Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving

Abstract: Modern self-driving perception systems have been shown to improve upon processing complementary inputs such as LiDAR with images. In isolation, 2D images have been found to be extremely vulnerable to adversarial attacks. Yet, there have been limited studies on the adversarial robustness of multi-modal models that fuse LiDAR features with image features. Furthermore, existing works do not consider physically realizable perturbations that are consistent across the input modalities. In this paper, we showcase pra…

Cited by 15 publications (29 citation statements). References 46 publications.
“…Thus, [40] also pivots to extracting valid occluded objects from KITTI, translating points to an attacker-desired location, and injecting them. Furthermore, [5,41,43] introduce attacks with adversarial patches and mesh objects that are optimized for color, shape, and texture, and digitally rendered into LiDAR and/or camera. Each attack performs optimization either over training data [5,41] or with white-box model access [43].…”
Section: Attacks On Perception
“…Furthermore, [5,41,43] introduce attacks with adversarial patches and mesh objects that are optimized for color, shape, and texture, and digitally rendered into LiDAR and/or camera. Each attack performs optimization either over training data [5,41] or with white-box model access [43]. However, the adversarial meshes may be too large to be physically realized outside of a fully-cyber context, and attacks are only evaluated on a single frame of data.…”
Section: Attacks On Perception
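The citation statements above describe patch attacks that are optimized with white-box (gradient) access to the victim model. As a minimal, hedged sketch of that idea only: a gradient step on a small patch region against a toy linear scorer. The model, names, and patch placement here are invented for illustration and do not come from the cited works.

```python
import numpy as np

# Toy white-box setup: the "model" is a linear scorer sum(w * x); the
# attacker lowers the detection score by signed-gradient steps on a
# small patch region (PGD-style, with an L_inf budget eps).
rng = np.random.default_rng(0)
w = rng.normal(size=(8, 8))      # known model weights (white-box access)
x = rng.normal(size=(8, 8))      # benign input (e.g. an image crop)

def score(img):
    """Detection score of the toy model; higher = stronger detection."""
    return float((w * img).sum())

patch = np.zeros((3, 3))         # adversarial patch placed at top-left
step, eps = 0.5, 1.0             # step size and L_inf perturbation budget

for _ in range(20):
    # For this linear model, the gradient of the score w.r.t. the patch
    # pixels is exactly the weights over the patch region.
    grad = w[:3, :3]
    # Step against the gradient to *lower* the score, then project
    # back into the L_inf ball of radius eps.
    patch = np.clip(patch - step * np.sign(grad), -eps, eps)

x_adv = x.copy()
x_adv[:3, :3] += patch           # apply the optimized patch

assert score(x_adv) < score(x)   # the patch suppresses the detection score
```

Real attacks in the cited works replace the linear scorer with a full perception network and back-propagate through rendering of the patch or mesh into LiDAR and/or camera inputs; the projection step and gradient-sign update shown here are the common core.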