2021
DOI: 10.1007/978-3-030-92182-8_10
Adapting Autonomous Agents for Automotive Driving Games

Cited by 6 publications (4 citation statements); references 8 publications.
“…We notice that the training is quite rapid, smooth, and stable, without catastrophic forgetting [53], which frequently affects DRL training. [Table row from the comparison: Low-level agent [15] · Cont (20,36) · Uniform(10–20) · 0.4 · 11] It appears that the high-level agent performs better under both the considered dimensions.…”
Section: High-level Decision Making (mentioning, confidence: 99%)
“…Concerning the proposed comparison, it must be specified that the two environments are very similar to each other, but not exactly the same. Table 2 shows that an episode in Campodonico et al. [15] has fewer total vehicles (EV + NPVs), but they are slightly more concentrated (density factor 0.5 vs. 0.4). The model presented in [15] was specifically developed with a larger number of rewards than the original highway-env (11 vs. 3), in order to better enforce traffic laws and safer behavior (e.g., penalties for overtaking on the right, unsafe distance to the front vehicle, hazardous lane changes, and steering angle).…”
Section: High-level Decision Making (mentioning, confidence: 99%)
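The excerpt above describes a richer reward signal than stock highway-env: penalties for overtaking on the right, unsafe distance to the front vehicle, hazardous lane changes, and steering angle. A minimal sketch of such a multi-term shaped reward is given below; the function name, weights, and thresholds are illustrative assumptions, not values from the cited paper.

```python
# Hedged sketch of a traffic-law-aware shaped reward, in the spirit of
# the penalties the citing authors describe. All names, weights, and
# thresholds are illustrative assumptions, not the paper's values.

def shaped_reward(speed, target_speed, front_gap, safe_gap,
                  right_overtake, hazardous_lane_change, steering_angle):
    """Combine a speed-tracking progress term with traffic-law penalties."""
    # Progress term: 1.0 when driving at the target speed, decreasing linearly.
    reward = 1.0 - abs(speed - target_speed) / max(target_speed, 1e-6)
    if front_gap < safe_gap:                 # unsafe distance to front vehicle
        reward -= 0.5 * (1.0 - front_gap / safe_gap)
    if right_overtake:                       # overtaking on the right
        reward -= 0.3
    if hazardous_lane_change:                # lane change without a safe margin
        reward -= 0.4
    reward -= 0.1 * abs(steering_angle)      # discourage abrupt steering
    return reward
```

Summing weighted penalty terms this way keeps each rule independently tunable, which may explain how the authors scaled from 3 rewards to 11 without redesigning the signal.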
“…It must be noted that digital twins and simulation could also be used to support the preparation of systems based on reinforcement learning and on other algorithms (Campodonico et al., 2021). The digital replica of our robot can be used as the training subject, to save the time and cost, and to avoid the safety concerns, that a training process on the real robot would otherwise imply (Y. …).…”
Section: State of the Art (mentioning, confidence: 99%)