2020 Design, Automation & Test in Europe Conference & Exhibition (DATE)
DOI: 10.23919/date48585.2020.9116486
DeepRacing: A Framework for Autonomous Racing

Cited by 22 publications (16 citation statements)
References 8 publications
“…The racing task provides a clear objective function (fastest lap time) for algorithm training, and the race track, with its clearly defined drivable area and single class of objects, is an ideal proving ground. Researchers in this field have demonstrated partial end-to-end approaches (Weiss and Behl, 2020; Lee et al, 2019) that combine DNNs with MPC methods to create and follow dynamic trajectories. In addition, using algorithms from the field of RL (e.g., Soft Actor-Critic, Q-Learning), researchers have shown how to train an agent to drive fast (Jaritz et al, 2018; de Bruin et al, 2018), how to train an agent to overtake other agents on the race track (Song et al, 2021), and how to bridge the sim-to-real gap with model-based RL approaches (Brunnbauer et al, 2021).…”
Section: Software
confidence: 99%
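The partial end-to-end pattern cited above (a DNN proposing a trajectory that a classical controller then tracks) can be sketched as follows. This is a minimal illustration, not the cited authors' implementation: the network, its layer sizes, and the pure-pursuit-style tracker standing in for the MPC stage are all assumptions.

```python
import math
import torch
import torch.nn as nn

N_WAYPOINTS = 8  # hypothetical planning horizon

class WaypointNet(nn.Module):
    """CNN that maps one RGB frame to N future (x, y) waypoints (sketch)."""
    def __init__(self, n_waypoints: int = N_WAYPOINTS):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 2 * n_waypoints)

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(img)).view(-1, self.n_waypoints, 2)

def pure_pursuit_steer(x: float, y: float, wheelbase: float = 3.6) -> float:
    """Geometric tracker: steer toward a waypoint given in the car frame
    (x = lateral offset, y = distance ahead); stands in for an MPC tracker."""
    return math.atan2(2.0 * wheelbase * x, x * x + y * y)

model = WaypointNet()
frame = torch.rand(1, 3, 66, 200)   # dummy camera frame
waypoints = model(frame)            # shape (1, N_WAYPOINTS, 2)
x, y = waypoints[0, 0].tolist()     # track the first predicted waypoint
steering = pure_pursuit_steer(x, y)
```

In the cited systems the tracking stage is a full MPC solving a constrained optimization over the predicted trajectory; the one-line geometric law above merely marks where that component plugs in.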
“…In Cai et al (2021) and Weiss and Behl (2020), RL agents were trained end-to-end on visual inputs by imitating expert demonstrations; in Cai et al (2021), a data-driven model of the environment was further used to train the agent by unrolling future trajectories.…”
Section: Related Work
confidence: 99%
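A one-step sketch of the imitation objective described above: regress the network's predicted actions onto an expert's recorded actions. The toy model, tensor shapes, and (steer, throttle) labels are assumptions for illustration, not the cited papers' setups.

```python
import torch
import torch.nn as nn

# Toy stand-in for a vision network; a real setup would use a CNN encoder.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 66 * 200, 2))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()

frames = torch.rand(32, 3, 66, 200)   # batch of camera images
expert_actions = torch.rand(32, 2)    # recorded (steer, throttle) labels

pred = model(frames)                  # predicted actions
loss = loss_fn(pred, expert_actions)  # regress onto the demonstrations
optimizer.zero_grad()
loss.backward()
optimizer.step()
```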
“…On one hand, Reinforcement Learning (RL) is used to train an agent in an adversarial environment. DeepRacing [48]-[50] provides solutions at three levels: pixel to control, pixel to waypoints, and pixel to curves. [51], [52] use A3C to train racing agents.…”
Section: B. Learning-Based Planning
confidence: 99%
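The three output abstractions attributed to DeepRacing above (direct control, waypoints, curve parameters) can be contrasted in a small sketch. The shared encoder, layer sizes, and the choice of a cubic Bezier parameterization for the curve head are assumptions for illustration, not the paper's exact networks.

```python
import torch
import torch.nn as nn

class RacingBackbone(nn.Module):
    """Shared image encoder; any small CNN suffices for the sketch."""
    def __init__(self, feat: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 24, 5, stride=2), nn.ReLU(),
            nn.Conv2d(24, 48, 5, stride=2), nn.ReLU(),
            nn.Conv2d(48, 64, 3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat), nn.ReLU(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

backbone = RacingBackbone()
heads = nn.ModuleDict({
    # pixel -> control: steering and throttle, applied directly
    "control": nn.Linear(128, 2),
    # pixel -> waypoints: 10 (x, y) points handed to a tracking controller
    "waypoints": nn.Linear(128, 10 * 2),
    # pixel -> curve: four 2-D control points of a cubic Bezier (an assumed
    # parameterization) handed to a downstream planner
    "curve": nn.Linear(128, 4 * 2),
})

img = torch.rand(1, 3, 66, 200)
features = backbone(img)
outputs = {name: head(features) for name, head in heads.items()}
```

Moving up this ladder (control, then waypoints, then curves) hands progressively more of the driving problem back to classical planning and control, which is the trade-off the quoted survey is pointing at.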