2018
DOI: 10.48550/arxiv.1808.04913
Preprint

An Auto-tuning Framework for Autonomous Vehicles

Abstract: Many autonomous driving motion planners generate trajectories by optimizing a reward/cost functional. Designing and tuning a high-performance reward/cost functional for Level-4 autonomous driving vehicles exposed to a wide range of driving conditions is challenging. Traditionally, tuning the reward/cost functional takes substantial human effort and time spent on both simulations and road tests. As scenarios become more complicated, tuning to improve motion-planner performance becomes increasingly difficult…
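To make the setting concrete, here is a minimal sketch of the kind of linear cost functional such planners optimize, where "tuning" means choosing the weight vector. The sub-cost features and names below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

# Minimal sketch of a linear reward/cost functional for trajectory
# scoring. The sub-costs (smoothness, obstacle proximity, lane keeping)
# are illustrative assumptions, not the paper's actual feature set.
def trajectory_features(traj):
    jerk = np.diff(traj["accel"])  # comfort: penalize rapid accel changes
    return np.array([
        np.sum(jerk ** 2),                             # smoothness
        np.sum(1.0 / (traj["obstacle_dist"] + 1e-3)),  # obstacle proximity
        np.sum(traj["lane_offset"] ** 2),              # lane keeping
    ])

def cost(traj, weights):
    # "Tuning" the functional means choosing this weight vector.
    return weights @ trajectory_features(traj)

def plan(candidates, weights):
    # The motion planner selects the lowest-cost candidate trajectory.
    return min(candidates, key=lambda t: cost(t, weights))
```

A linear form keeps each weight individually interpretable, which is what makes manual tuning feasible at all; the paper's point is that this manual loop does not scale as driving scenarios multiply.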

Cited by 6 publications (7 citation statements)
References 15 publications
“…This is achieved by reasoning about separate contingent trajectories that are safe for each of the predicted scenarios. This stands in contrast to previous work [9,26,1,40], where a single motion plan was generated by minimizing the expected cost over all future scenarios. Such approaches, for instance, might slow down prematurely to yield to a very unlikely future trajectory from another vehicle, while our contingency planner can maintain its speed as long as it is able to stop timely before reaching the conflict region.…”
Section: Introduction (mentioning)
confidence: 91%
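A toy sketch of the contrast this citing work draws, assuming a discrete set of predicted scenarios; `cost` and `is_safe` are hypothetical stand-ins, and real contingency planners share a short-term action with per-scenario continuations rather than filtering whole trajectories as done here.

```python
def expected_cost_plan(candidates, scenarios, probs, cost):
    # One plan minimizing probability-weighted cost over all scenarios;
    # a very unlikely scenario can still drag the whole plan toward
    # premature yielding.
    return min(candidates,
               key=lambda t: sum(p * cost(t, s)
                                 for s, p in zip(scenarios, probs)))

def contingency_plan(candidates, scenarios, cost, is_safe):
    # Keep candidates that can still be made safe in *every* scenario
    # (e.g. can stop before the conflict region), then choose freely
    # among them. Assumes at least one such candidate exists.
    feasible = [t for t in candidates
                if all(is_safe(t, s) for s in scenarios)]
    return min(feasible,
               key=lambda t: min(cost(t, s) for s in scenarios))
```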
“…However, the above formulation is not exploiting the fact that only one of the predicted scenarios will happen in future and is conversely optimizing for a single trajectory that is "good" in expectation. By changing the expectation in (9) to max, the planner will optimize for the worst-case scenario regardless of the likelihood of that scenario. Consequently the planner will become over-conservative, e.g.…”
Section: Contingency Planner (mentioning)
confidence: 99%
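The effect of swapping the expectation for a max is easy to see with illustrative numbers (hypothetical, not taken from either paper):

```python
# Two predicted scenarios for one candidate trajectory.
probs = [0.95, 0.05]   # likely vs. rare scenario
costs = [1.0, 20.0]    # cost of this candidate under each scenario

expected = sum(p * c for p, c in zip(probs, costs))  # 0.95*1 + 0.05*20 = 1.95
worst_case = max(costs)                              # 20.0

# The max objective is dominated by the rare scenario regardless of its
# 5% probability -- exactly the over-conservatism the quote describes.
```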
“…More recently, cost map-based approaches have been shown to adapt better to challenging environments, which recover a trajectory by looking for local minima on the cost map. The cost map may be parameterized as a simple linear combination of hand crafted costs [14,46], or in a general non-parametric form [55]. To bridge the gap between interpretability and expressivity, [45] proposed a model that leverages supervision to learn an interpretable nonparametric occupancy that can be directly used in motion planner, with hand-crafted subcosts.…”
Section: Related Work (mentioning)
confidence: 99%
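As a rough sketch of the parametric (linearly combined, hand-crafted) end of the spectrum described above: rasterized sub-cost layers can be blended into a single cost map, and each candidate scored by sampling the map along its path. The grid layout and sub-costs here are illustrative assumptions.

```python
import numpy as np

H, W = 200, 200  # bird's-eye-view grid (cells); layout is an assumption
obstacle_cost = np.zeros((H, W))  # filled from perception in practice
lane_cost = np.zeros((H, W))      # filled from map data in practice

w = np.array([1.0, 0.3])  # hand-tuned weights for the linear combination
cost_map = w[0] * obstacle_cost + w[1] * lane_cost

def score(traj_xy):
    # traj_xy: (N, 2) integer (x, y) grid indices along one candidate;
    # the planner picks the candidate accumulating the least map cost.
    return cost_map[traj_xy[:, 1], traj_xy[:, 0]].sum()
```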
“…Conversely, discretization is avoided in [9], by introducing a continuous non-linear optimization method where obstacles in the form of polygons are converted to quadratic constraints. c) Learned Motion Planning: Learning approaches to motion planning have mainly been studied from an imitation learning (IL) [10], [11], [12], [13], [14], [15], [16], or reinforcement learning (RL) [17], [18], [19], [20] perspective. While most IL approaches provide an end-to-end training framework to control outputs from sensory data, they suffer from compounding errors due to the sequential decision making process of self-driving.…”
Section: Related Work (mentioning)
confidence: 99%