2019 IEEE Intelligent Transportation Systems Conference (ITSC)
DOI: 10.1109/itsc.2019.8917406
Evaluation Uncertainty in Data-Driven Self-Driving Testing

Abstract: Safety evaluation of self-driving technologies has been extensively studied. One recent approach uses Monte Carlo based evaluation to estimate the occurrence probabilities of safety-critical events as safety measures. These Monte Carlo samples are generated from stochastic input models constructed based on real-world data. In this paper, we propose an approach to assess the impact on the probability estimates from the evaluation procedures due to the estimation error caused by data variability. Our proposed me…
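The Monte Carlo evaluation described in the abstract can be sketched roughly as follows. This is a minimal illustration, not the paper's method: the stochastic input model, the cut-in parameters, and the criticality condition below are all hypothetical placeholders.

```python
import random

def simulate_encounter(rng):
    """Hypothetical stochastic input model: returns True if the
    sampled cut-in scenario ends in a safety-critical event."""
    gap = rng.gauss(30.0, 10.0)      # initial gap to lead vehicle (m)
    braking = rng.uniform(2.0, 8.0)  # available deceleration (m/s^2)
    return gap < 15.0 and braking < 4.0  # toy criticality condition

def estimate_event_probability(n_samples, seed=0):
    """Crude Monte Carlo estimate of the safety-critical event rate,
    with a normal-approximation standard error for the estimate."""
    rng = random.Random(seed)
    hits = sum(simulate_encounter(rng) for _ in range(n_samples))
    p_hat = hits / n_samples
    se = (p_hat * (1 - p_hat) / n_samples) ** 0.5
    return p_hat, se

p, se = estimate_event_probability(100_000)
print(f"estimated event probability: {p:.4f} +/- {1.96 * se:.4f}")
```

The paper's contribution concerns the further uncertainty that enters because the input model itself is estimated from finite real-world data; the standard error above captures only the sampling side.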

Cited by 13 publications (4 citation statements)
References 19 publications
“…According to them, the procedure is more efficient than the random selection of test scenarios, but no quantitative statement is made. There is similar work from Huang et al [98], Huang et al [99], Huang et al [100], and Huang et al [101], [102] as well as from other members of their research group [103], [104].…”
Section: B Sampling From Parameter Distributions (supporting)
confidence: 54%
“…Some recent benchmarks [199] use realistic 3D simulators to construct real-world scenarios and use accelerated evaluation methods [10,204] to emphasize the rare safety-critical cases. However, there is a trade-off between the modeling error and evaluation error [74].…”
Section: How To Design Evaluation Platforms For Trustworthy RL? (mentioning)
confidence: 99%
“…This is of particular importance since the uncertainty in the final metrics is due to both modeling uncertainty and sampling uncertainty. While the former can be driven down by designing as accurate a model as possible, the latter can only be reduced by running longer and more numerous simulation runs [91]. A good review for this input-induced uncertainty and ways to deal with it in modeling can be found in [92]–[94].…”
Section: Metrics For Generation (mentioning)
confidence: 99%
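The point made in the last citation statement, that sampling uncertainty shrinks only with more simulation runs, follows from the 1/sqrt(n) behaviour of the Monte Carlo standard error. A minimal numerical illustration (the event probability below is a toy value, not a figure from the cited works):

```python
import math

def monte_carlo_standard_error(p, n):
    """Standard error of a Bernoulli event-rate estimate from n runs."""
    return math.sqrt(p * (1 - p) / n)

p = 1e-3  # toy probability of a safety-critical event
for n in (10_000, 100_000, 1_000_000):
    se = monte_carlo_standard_error(p, n)
    print(f"n={n:>9,}: standard error = {se:.2e}")
```

Quadrupling the run count only halves the sampling uncertainty, which is why evaluating rare safety-critical events is so simulation-hungry and why accelerated evaluation methods are attractive.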