2018
DOI: 10.48550/arxiv.1812.10812
DeepBillboard: Systematic Physical-World Testing of Autonomous Driving Systems

Abstract: Deep Neural Networks (DNNs) have been widely applied in many autonomous systems such as autonomous driving and robotics for their state-of-the-art, even human-competitive accuracy in cognitive computing tasks. Recently, DNN testing has been intensively studied to automatically generate adversarial examples, which inject small-magnitude perturbations into inputs to test DNNs under extreme situations. While existing testing techniques prove to be effective, particularly for autonomous driving, they mostly focus …

Cited by 11 publications (11 citation statements) | References 29 publications
“…Both the attacks were studied concerning different functional modules needed in vision-based autonomous driving. For example, the perturbation attack was studied regarding sign classification in [7], 2D object detection in [8], semantic segmentation in [9], [24], and monocular depth estimation in [12], [13], while the patch attack was studied regarding lane keeping in [5], optical flow estimation in [6], 2D object detection in [10], [11], and monocular depth estimation in [13]. None of these studies, however, focus directly on the attacks' impact on driving behavior and driving safety of autonomous vehicles.…”
Section: Related Work (mentioning)
confidence: 99%
“…Environment perception and other tasks of autonomous driving systems heavily rely on deep learning models. Researchers have demonstrated that adversarial examples, which are originally designed to affect general-purpose deep learning models, can also be used to cause malfunctions in autonomous driving tasks [5]- [14]. In these studies, researchers usually use the decline of accuracy, or the erroneous rate increase of the deep learning models, to measure the effectiveness of attacks.…”
Section: Introduction (mentioning)
confidence: 99%
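The statement above describes small-magnitude perturbation attacks whose effectiveness is measured by the drop in model accuracy. As a minimal sketch of that idea, the following NumPy snippet applies an FGSM-style step (gradient-sign perturbation, as in Goodfellow et al.) to a toy logistic-regression classifier; the model, weights, and epsilon are illustrative assumptions, not any system from the cited studies:

```python
# Hedged sketch: FGSM-style adversarial perturbation on a toy linear
# classifier. Illustrates the "small-magnitude perturbation" attacks
# discussed in the citation statement; not any specific driving model.
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """One gradient-sign step ascending the cross-entropy loss for
    a logistic-regression model with weights w, bias b, label y in {0,1}."""
    z = w @ x + b
    p = 1.0 / (1.0 + np.exp(-z))      # sigmoid prediction
    grad_x = (p - y) * w              # d(cross-entropy)/dx for this model
    return x + eps * np.sign(grad_x)  # stay inside an L-infinity ball of radius eps

# Toy example: a correctly classified input flips class after perturbation.
w = np.array([1.0, -2.0])
b = 0.0
x = np.array([0.3, -0.2])             # w @ x + b = 0.7 > 0  -> class 1
y = 1
x_adv = fgsm_perturb(x, w, b, y, eps=0.5)
print(w @ x_adv + b)                  # negative -> misclassified as class 0
```

Attack effectiveness is then typically reported as the accuracy decline over a batch of such perturbed inputs, which is exactly the measurement convention the statement notes.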
“…[219]), adversarial testing inputs (e.g. [53,217,228]) or increase the coverage of the test suites (e.g. [49,126]).…”
Section: Software Testing (115 Studies) (mentioning)
confidence: 99%
“…Different from these techniques, D2C uses sequential pattern mining to aid the extraction of constraints from API documents to guide input generation for testing DL libraries. Testing DL models: Many fuzzing techniques test the robustness of DL models instead of DL libraries by finding adversarial inputs (e.g., images or natural language texts) for the models [23,28,31,39,43,44,60,63,64,66,68,71,77,79]. Testing DL models alone is insufficient, as DL libraries contain bugs [32,33,51,73,74], which hurt the accuracy and speed of the entire DL system [51].…”
Section: Related Work (mentioning)
confidence: 99%