2020 IEEE 23rd International Conference on Intelligent Transportation Systems (ITSC)
DOI: 10.1109/itsc45102.2020.9294729
Adaptive Stress Testing without Domain Heuristics using Go-Explore

Cited by 11 publications (9 citation statements) · References 13 publications
“…GE has two phases, a tree search exploration phase, and a DRL robustification phase. While the original version of GE uses the state of the simulator when building the tree and training the robust policy, Koren and Kochenderfer (2020) modified the algorithm to use the history of disturbances instead, reducing the access requirements of the simulator.…”
Section: Deep Reinforcement Learning (mentioning)
Confidence: 99%
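The excerpt above describes Go-Explore's two phases and the modification of keying the archive on the history of disturbances rather than on simulator state. A minimal sketch of the exploration phase under that modification is below; the simulator interface (a scoring function over a disturbance sequence) and all parameter values are hypothetical placeholders, not the paper's actual environment or hyperparameters.

```python
import random

def rollout(score_fn, history, horizon, n_actions):
    """Replay a stored disturbance history, then extend it with random disturbances."""
    trajectory = list(history)
    while len(trajectory) < horizon:
        trajectory.append(random.randrange(n_actions))
    return trajectory, score_fn(trajectory)

def go_explore(score_fn, horizon=8, n_actions=3, iters=200, seed=0):
    random.seed(seed)
    # Archive keyed on disturbance-history prefixes (not simulator state),
    # mapping each visited "cell" to the best score seen through it.
    archive = {(): float("-inf")}
    best = (float("-inf"), [])
    for _ in range(iters):
        # "Go": pick a stored cell; "Explore": replay its history and extend it.
        cell = random.choice(list(archive))
        traj, score = rollout(score_fn, cell, horizon, n_actions)
        if score > best[0]:
            best = (score, traj)
        # Record every prefix of the trajectory as a reachable cell.
        for i in range(1, len(traj) + 1):
            key = tuple(traj[:i])
            if score > archive.get(key, float("-inf")):
                archive[key] = score
    return best

# Toy stand-in simulator: score counts matches against a target pattern.
target = [2, 0, 1, 2, 1, 0, 2, 1]
score_fn = lambda traj: sum(a == b for a, b in zip(traj, target))
score, traj = go_explore(score_fn)
```

Because cells are identified by disturbance histories alone, the search needs only the ability to replay a disturbance sequence from the start, which is the reduced access requirement the excerpt refers to; the robustification phase (training a DRL policy from the discovered trajectories) is omitted here.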
“…In an intersection scenario, the system under test approaches a stoplight, stop sign (Abeysirigoonawardena et al, 2019), crosswalk (Koren et al, 2018), or other form of intersection and must proceed through without a failure. Failures can include collisions with pedestrians (Koren & Kochenderfer, 2020) or other vehicles. Failures may also include violations of traffic laws (Kress-Gazit & Pappas, 2008) or other rule-sets, such as those designed to prevent at-fault collisions (Hekmatnejad et al, 2020).…”
Section: Autonomous Driving (mentioning)
Confidence: 99%
“…DRL has shown state-of-the-art results in playing Atari games [82], playing chess [74], and robot manipulation from camera input [83]. In recent years, different DRL techniques have been applied to falsification and most-likely failure analysis [22], [69], [72], [84]- [88].…”
Section: Deep Reinforcement Learning (mentioning)
Confidence: 99%
“…While the original version of GE uses the state of the simulator when building the tree and training the robust policy, Koren and Kochenderfer [88] modified the algorithm to use the history of disturbances instead, reducing the access requirements of the simulator. GE was used for the falsification of an autonomous vehicle and was able to find counterexamples more reliably than MCTS on problems with long horizons, and BA was able to find more probable failures than MCTS or DRL.…”
Section: Deep Reinforcement Learning (mentioning)
Confidence: 99%