Proceedings Third International Workshop on Automotive and Autonomous Vehicle Security 2021
DOI: 10.14722/autosec.2021.23034
WIP: End-to-End Analysis of Adversarial Attacks to Automated Lane Centering Systems

Abstract: Machine learning techniques, particularly those based on deep neural networks (DNNs), are widely adopted in the development of advanced driver-assistance systems (ADAS) and autonomous vehicles. While DNNs provide significant improvements over traditional methods in average performance, their use also presents great challenges to system safety, especially given the uncertainty of the surrounding environment, the disturbance to system operations, and the current lack of methodologies for predicting DNN behavior…

Cited by 6 publications (10 citation statements). References 9 publications.
“…In comparison, these works mainly focus on vulnerabilities at sensor level, while we focus on those at the higher autonomy software level, i.e., the "brain" of AD systems. At such level, prior works have studied the security of camera/LiDAR object detection [5], [6], [9], [10], [117] and tracking [118], localization [119], lane detection [120]-[122], traffic light detection [123], and end-to-end AD [12], [13]. However, so far all of them only consider attacks on camera or LiDAR perception alone, while we are the first to study the security of MSF-based AD perception and address the corresponding design challenges (§III-B).…”
Section: Related Work
confidence: 99%
“…The planner module in OpenPilot generates an estimation of this desired path by calculating a weighted average of the detected left lane line, right lane line and predicted path, with the weights being the confidence scores outputted by the perception module. Intuitively, if the perception module is less confident on the predicted lanes, the generated desired path relies more on the predicted path; otherwise, it will be closer to the weighted average of the predicted left lane line and right lane line [2], [22].…”
Section: B. Planning and Control Modules
confidence: 99%
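
The confidence-weighted averaging described in the quote above can be sketched as follows. This is a minimal illustration, not OpenPilot's actual implementation: the function name, argument layout, and normalization scheme are assumptions made for clarity.

```python
import numpy as np

def estimate_desired_path(left_lane, right_lane, predicted_path,
                          left_conf, right_conf, path_conf):
    """Confidence-weighted average of the left lane line, right lane line,
    and predicted path (each an array of lateral offsets in meters)."""
    paths = np.stack([
        np.asarray(left_lane, dtype=float),
        np.asarray(right_lane, dtype=float),
        np.asarray(predicted_path, dtype=float),
    ])
    weights = np.array([left_conf, right_conf, path_conf], dtype=float)
    weights /= weights.sum()   # normalize the perception confidence scores
    # Low lane-line confidence shifts the result toward the predicted path.
    return weights @ paths

# Example: the lane lines are uncertain, so the result leans on the predicted path.
desired = estimate_desired_path(
    left_lane=[-1.8, -1.8, -1.7], right_lane=[1.8, 1.8, 1.9],
    predicted_path=[0.0, 0.1, 0.2],
    left_conf=0.2, right_conf=0.2, path_conf=0.9,
)
print(desired)
```

The design choice this captures is the fallback behavior in the quote: when the perception module is less confident in the detected lanes, their weights shrink and the desired path tracks the predicted path instead.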
“…Through our experiments, we will use the dirty road patch attack [12] as a case study, but we believe that our approach can be extended to other similar physical environment attacks. As discussed in our preliminary work [22], these physical attacks typically will render abnormal behavior in the perception output and then propagate through the entire pipeline.…”
Section: Attack Model
confidence: 99%
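
To make the propagation claim concrete, here is a deliberately simplified, hypothetical pipeline sketch; every function, parameter, and gain below is invented for illustration and is not the ALC stack or the dirty road patch attack itself. It only shows how a bias injected at the perception output flows through planning into the control command.

```python
# Toy pipeline: perception -> planner -> controller, used only to illustrate
# how an abnormal perception output propagates end to end.

def perception(lane_center_offset_m, patch_bias_m=0.0):
    """Pretend lane detector: perceived lateral offset from the lane center.
    `patch_bias_m` models the bias an adversarial road patch could induce."""
    return lane_center_offset_m + patch_bias_m

def planner(perceived_offset_m):
    """Desired lateral correction: steer back toward the perceived center."""
    return -perceived_offset_m

def controller(desired_correction_m, gain=0.5):
    """Proportional steering command (illustrative units and gain)."""
    return gain * desired_correction_m

true_offset = 0.1  # vehicle is 0.1 m to the right of the lane center
benign = controller(planner(perception(true_offset)))
attacked = controller(planner(perception(true_offset, patch_bias_m=-0.8)))
print(f"benign command: {benign:+.3f}, attacked command: {attacked:+.3f}")
# The biased perception output flips the sign of the steering command,
# showing how an error introduced at the perception level reaches control.
```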