2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8814124

Controlling an Autonomous Vehicle with Deep Reinforcement Learning

Abstract: We present a control approach for autonomous vehicles based on deep reinforcement learning. A neural network agent is trained to map its estimated state to acceleration and steering commands given the objective of reaching a specific target state while considering detected obstacles. Learning is performed using state-of-the-art proximal policy optimization in combination with a simulated environment. Training from scratch takes five to nine hours. The resulting agent is evaluated within simulation and subsequently…
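As a rough illustration of the setup the abstract describes, the sketch below pairs a small Gaussian policy network, mapping an estimated state to acceleration and steering commands, with PPO's clipped surrogate loss. All sizes (state dimension, hidden widths) and names are assumptions for illustration, not the authors' architecture.

```python
# Illustrative sketch only -- layer sizes, state layout, and names are
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    """Gaussian policy mapping an estimated state to [acceleration, steering]."""
    def __init__(self, state_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.Tanh(),
            nn.Linear(hidden, hidden), nn.Tanh(),
        )
        self.mean = nn.Linear(hidden, 2)               # [acceleration, steering]
        self.log_std = nn.Parameter(torch.zeros(2))

    def forward(self, state: torch.Tensor) -> torch.distributions.Normal:
        h = self.body(state)
        return torch.distributions.Normal(self.mean(h), self.log_std.exp())

def ppo_clip_loss(policy, states, actions, old_log_probs, advantages, eps=0.2):
    """PPO's clipped surrogate objective, written as a loss to minimize."""
    log_probs = policy(states).log_prob(actions).sum(-1)
    ratio = (log_probs - old_log_probs).exp()
    clipped = torch.clamp(ratio, 1.0 - eps, 1.0 + eps)
    return -torch.min(ratio * advantages, clipped * advantages).mean()

# Minimal usage with placeholder data standing in for simulator rollouts.
policy = DrivingPolicy()
states = torch.randn(8, 32)
dist = policy(states)
actions = dist.sample()
old_log_probs = dist.log_prob(actions).sum(-1).detach()
advantages = torch.randn(8)                            # placeholder estimates
loss = ppo_clip_loss(policy, states, actions, old_log_probs, advantages)
loss.backward()
```

The clipping keeps each policy update close to the data-collecting policy, which is what makes PPO stable enough to train such an agent from scratch in a simulated environment.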

Cited by 52 publications (20 citation statements) · References 14 publications
“…Deep reinforcement learning is nowadays the most popular technique for (artificial) agents to learn a near-optimal strategy from experience. Major companies are training self-driving cars using reinforcement learning (see Folkers et al. (2019), or Kiran et al. (2020) for a state of the art). Such techniques are extremely powerful for modeling the behavior of animals, consumers, investors, etc.…”
Section: Discussion
Mentioning confidence: 99%
“…6. The surroundings from the perspective of the vehicle can be described by a coarse perception map in which the target is represented by a red dot (c) (source: [78]). An alternative, since there is no reflection (which is provided by TORCS and used in [20]), is to represent the lane markings with imagined beam sensors.…”
Section: E. Observation Space
Mentioning confidence: 99%
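The coarse perception map mentioned in this snippet can be pictured as a small multi-channel grid around the ego vehicle. The sketch below is a hedged guess at such a rasterization; the grid size, cell resolution, and channel layout are invented here for illustration, not taken from [78].

```python
# Hypothetical rasterization -- grid size, resolution, and channel layout
# are assumptions, not taken from the cited work.
import numpy as np

def perception_map(obstacles, target, size=32, cell_m=1.0):
    """Rasterize ego-frame points into a 2-channel occupancy/target grid.

    obstacles: iterable of (x, y) positions in metres, ego vehicle at centre.
    target:    (x, y) target position, written to its own channel
               (the 'red dot' of the figure the snippet describes).
    """
    grid = np.zeros((2, size, size), dtype=np.float32)
    half = size // 2

    def to_cell(x, y):
        i = half + int(round(y / cell_m))
        j = half + int(round(x / cell_m))
        return (i, j) if (0 <= i < size and 0 <= j < size) else None

    for x, y in obstacles:
        c = to_cell(x, y)
        if c is not None:
            grid[0, c[0], c[1]] = 1.0    # obstacle channel
    c = to_cell(*target)
    if c is not None:
        grid[1, c[0], c[1]] = 1.0        # target channel
    return grid

obs = perception_map([(3.0, 1.5), (-2.0, 4.0)], target=(8.0, -3.0))
print(obs.shape)                          # (2, 32, 32)
```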
“…Though the previously cited examples did not use RL techniques, they show that grid representations hold high potential in this field. Navigation in a static environment, using a grid map as the observation together with the position and yaw of the vehicle, is presented with an RL agent in [78] (see Fig. 6). Grid maps are also unstructured data, and their complexity is similar to that of semantically segmented images, since the cells store class information in both cases; hence they are best handled with a CNN architecture.…”
Section: E. Observation Space
Mentioning confidence: 99%
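To make the "CNN handling" of grid maps concrete, here is a minimal sketch of an encoder that convolves a 2-channel grid observation and concatenates the vehicle's position and yaw before a fully connected layer. Every layer size is an assumption for illustration; the cited works may use different architectures.

```python
# Illustrative encoder -- all layer sizes are assumptions, not taken from
# the cited works.
import torch
import torch.nn as nn

class GridPoseEncoder(nn.Module):
    """Convolve a 2-channel grid map, then fuse with the (x, y, yaw) pose."""
    def __init__(self, channels: int = 2, pose_dim: int = 3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, 16, 3, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),        # 16 -> 8
            nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 8 * 8 + pose_dim, 128), nn.ReLU(),
        )

    def forward(self, grid: torch.Tensor, pose: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([self.cnn(grid), pose], dim=-1))

enc = GridPoseEncoder()
features = enc(torch.zeros(1, 2, 32, 32), torch.zeros(1, 3))
print(features.shape)                                  # torch.Size([1, 128])
```

The convolution stack exploits the spatial structure of the grid cells (much as it would for a segmented image), while the low-dimensional pose bypasses it and joins at the fully connected stage.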
“…Reinforcement learning (RL) based methods usually take an end-to-end approach, trying to generate direct steering, throttle, and brake commands from the available environmental information. These studies use a varied set of sensor models, such as grid-based topological maps [19], lidar-like beam sensors [20], camera information [21], or high-level ground-truth position information [22]. Another group of studies focuses on strategic decisions, where the agent determines high-level actions such as lane change, follow, etc.…”
Section: A. Related Work
Mentioning confidence: 99%
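The two families contrasted in this snippet differ mainly in their action spaces, which the following hedged illustration expresses with gymnasium's space types. The bounds and the discrete maneuver set are assumptions, not taken from the cited papers.

```python
# Hypothetical action spaces -- bounds and the maneuver list are assumptions.
import numpy as np
from gymnasium import spaces

# End-to-end low-level control: continuous steering, throttle, and brake.
low_level = spaces.Box(
    low=np.array([-1.0, 0.0, 0.0], dtype=np.float32),   # steer, throttle, brake
    high=np.array([1.0, 1.0, 1.0], dtype=np.float32),
)

# Strategic decisions: a small discrete set of high-level maneuvers.
MANEUVERS = ["follow", "lane_change_left", "lane_change_right", "brake"]
high_level = spaces.Discrete(len(MANEUVERS))

print(low_level.sample())                # e.g. [-0.3, 0.7, 0.1]
print(MANEUVERS[high_level.sample()])    # e.g. "lane_change_left"
```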