2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr46437.2021.00978
AdvSim: Generating Safety-Critical Scenarios for Self-Driving Vehicles

Cited by 74 publications (38 citation statements)
References 21 publications
“…It assumes the existence of a malicious attacker trying to fail the ego car by tampering with either the environment or the ego car's internal states directly. Regarding the former, the attacker creates adversarial examples or sends malicious signals to fool the ego car's sensor processing models, e.g., perturbing front camera images [16], [17], [18], road signs [19], [20], rendering malicious shapes on the road [21] or billboard [22], spoofing GPS signals [23], spoofing LiDAR signals [24], [25], or influencing both LiDAR and camera inputs [26]. Regarding the latter, an attacker can directly inject faults inside the system to fail it [27], [28], [29], [30], [31], [32], [33], [34], [35].…”
Section: Other Safety Assessment Methodsmentioning
confidence: 99%
“…Parallel work [51] generates a set of possible driving paths and identifies all the possible safe driving trajectories that can be taken starting at different times. Similarly, [59] directly optimizes existing trajectories to perturb the driving paths of surrounding vehicles. They use Bayesian Optimization [111] for the optimization, and the scenario is represented as a point cloud.…”
Section: Initial Conditionmentioning
confidence: 99%
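The excerpt above describes a black-box search (Bayesian Optimization in the cited work) over perturbations of an actor's trajectory, scored by how critical the resulting scenario is for the ego vehicle. A minimal sketch of that idea is below; it substitutes plain random search for Bayesian Optimization, and `ego_clearance` is a toy stand-in for the real simulator-plus-planner objective — all names and the objective are hypothetical illustrations, not the papers' actual implementation.

```python
import math
import random

def ego_clearance(perturbation, base_traj, ego_traj):
    """Toy objective: minimum distance over time between the perturbed
    actor trajectory and the ego trajectory (lower = more critical).
    A real pipeline would run a simulator and the ego planner here."""
    return min(
        math.dist((x + dx, y + dy), ego_pt)
        for (x, y), (dx, dy), ego_pt in zip(base_traj, perturbation, ego_traj)
    )

def find_critical_perturbation(base_traj, ego_traj, bound=1.0, iters=200, seed=0):
    """Black-box search for a bounded per-waypoint perturbation that
    minimizes ego clearance. Random search stands in for the Bayesian
    Optimization used in the cited work."""
    rng = random.Random(seed)
    best_p, best_c = None, float("inf")
    for _ in range(iters):
        # Sample a candidate perturbation within the physical bound.
        p = [(rng.uniform(-bound, bound), rng.uniform(-bound, bound))
             for _ in base_traj]
        c = ego_clearance(p, base_traj, ego_traj)
        if c < best_c:
            best_p, best_c = p, c
    return best_p, best_c
```

For example, with an actor driving a parallel path 3 m from the ego vehicle, the search finds a bounded perturbation that reduces the minimum clearance below the unperturbed 3 m, approaching the 2 m limit implied by a 1 m perturbation bound.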
“…We believe that we are the first to take causal interventions in static scenes to test AV detection systems, although multiple approaches (Ghodsi et al., 2021; Abeysirigoonawardena et al., 2019; Koren et al., 2018; Corso et al., 2019; O'Kelly et al., 2018; Rempe et al., 2021) test AV systems through adversarial manipulation of actor trajectories and operate on the planning subsystem. Wang et al. (2021a) generate adversarial scenarios for AV systems by black-box optimization of actor trajectory perturbations, simulating LiDAR sensors in perturbed real scenes. Prior research has focused on optimization techniques for adversarial scenario generation through the manipulation of trajectories of vehicles and pedestrians.…”
Section: Related Workmentioning
confidence: 99%
“…The status quo approach to finding these groups in the AV stack operates in hindsight by analyzing real-world scenes requiring driver intervention or by feeding replayed or simulated scenes to a model and finding those that result in poor performance. Advanced techniques may use adversarial attacks to actively find failures (Xie et al, 2017;Athalye et al, 2017;Wang et al, 2021a;Rempe et al, 2021). In all cases, the found data is fed back into the retraining process.…”
Section: Introductionmentioning
confidence: 99%