2019 58th Annual Conference of the Society of Instrument and Control Engineers of Japan (SICE)
DOI: 10.23919/sice.2019.8859883

Self Training Autonomous Driving Agent

Abstract: Intrinsically, driving is a Markov Decision Process, which fits the reinforcement learning paradigm well. In this paper, we propose a novel agent which learns to drive a vehicle without any human assistance. We use the concepts of reinforcement learning and evolutionary strategies to train our agent in a 2D simulation environment. Our model's architecture goes beyond the World Model's by introducing difference images in the auto-encoder. This novel involvement of difference images in the auto-encoder gives bett…
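The abstract mentions training the agent with a combination of reinforcement learning and evolutionary strategies. As a rough illustration of the evolutionary-strategies side of such a setup, the sketch below shows one generic Gaussian ES update over a flat policy parameter vector; the function name, the hyper-parameters (`pop_size`, `sigma`, `lr`) and the `fitness_fn` callback are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def es_update(theta, fitness_fn, pop_size=50, sigma=0.1, lr=0.01, rng=None):
    """One generation of a simple Gaussian evolution strategy.

    theta      -- flat parameter vector of the driving policy
    fitness_fn -- maps a parameter vector to an episode return in the simulator
    """
    rng = rng or np.random.default_rng(0)
    # Sample perturbation directions for the whole population.
    noise = rng.standard_normal((pop_size, theta.size))
    # Evaluate each perturbed controller in the environment.
    returns = np.array([fitness_fn(theta + sigma * eps) for eps in noise])
    # Normalise returns so the step size is insensitive to the reward scale.
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Monte-Carlo estimate of the gradient of the expected return.
    grad = noise.T @ advantages / (pop_size * sigma)
    return theta + lr * grad
```

Each call evaluates a population of perturbed controllers and moves the parameters toward the better-performing perturbations; the exact variant and hyper-parameters used in the paper are not specified in this excerpt.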

Cited by 5 publications (2 citation statements)
References 12 publications
“…Another approach could be the simplification of the unstructured data. In [71], Kotyan et al. use the difference image, the background subtraction between two consecutive frames, as the input, assuming that this image contains the motion of the foreground and that the underlying neural network would focus more on the features of the foreground than on the background. Using the same training algorithm, their results showed that including the difference image instead of the original unprocessed input needs approximately 10 times fewer training steps to achieve the same performance.…”
Section: E. Observation Space
confidence: 99%
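The preprocessing described in this statement, a difference image obtained by subtracting two consecutive frames so that the static background largely cancels out, can be sketched as follows. The function and variable names (`difference_image`, `autoencoder.encode`) are illustrative assumptions, not the authors' code.

```python
import numpy as np

def difference_image(prev_frame: np.ndarray, curr_frame: np.ndarray) -> np.ndarray:
    """Pixel-wise background subtraction between two consecutive frames.

    The (mostly static) background cancels out, so the result is dominated
    by the motion of the foreground, which the auto-encoder is then
    encouraged to encode.
    """
    # Widen the dtype before subtracting to avoid uint8 wrap-around.
    diff = curr_frame.astype(np.int16) - prev_frame.astype(np.int16)
    return np.abs(diff).astype(np.uint8)

# Assumed usage in an observation pipeline (names are hypothetical):
# obs_t = difference_image(frame_prev, frame_curr)
# latent_t = autoencoder.encode(obs_t)
```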
“…Another approach could be the simplification of the unstructured data. In [58], Kotyan et al. use the difference image, the background subtraction between two consecutive frames, as the input, assuming that this image contains the motion of the foreground and that the underlying neural network would focus more on the features of the foreground than on the background. Using the same training algorithm, their results showed that including the difference image instead of the original unprocessed input needs approximately 10 times fewer training steps to achieve the same performance.…”
Section: E. Observation Space
confidence: 99%