2018 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2018.8500675
Highway Traffic Modeling and Decision Making for Autonomous Vehicle Using Reinforcement Learning

Cited by 41 publications (21 citation statements)
References 18 publications
“…Besides, reinforcement learning has proved to be an effective approach to decision-making problems. The interaction between vehicles and the environment can be formulated as a Markov Decision Process (MDP), and the Q-learning algorithm [58,59] can then be applied to optimize route recommendation results for real-time navigation.…”
Section: Machine Learning-based Route Recommendation
confidence: 99%
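The statement above describes the general recipe of casting vehicle-environment interaction as an MDP and applying Q-learning. The following is a minimal tabular Q-learning sketch of that recipe, not the cited authors' implementation; the environment interface (reset/step), the discrete state representation, and the hyperparameters are assumptions for illustration.

```python
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.99, epsilon=0.1):
    """Minimal tabular Q-learning loop for an MDP-style driving/route task.

    `env` is assumed (hypothetically) to expose:
      state = env.reset(); next_state, reward, done = env.step(action)
    with a small discrete state space and `env.n_actions` discrete actions.
    """
    q = defaultdict(lambda: [0.0] * env.n_actions)  # Q[state][action]

    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy exploration over the discrete action set
            if random.random() < epsilon:
                action = random.randrange(env.n_actions)
            else:
                action = max(range(env.n_actions), key=lambda a: q[state][a])

            next_state, reward, done = env.step(action)

            # one-step temporal-difference update toward the Bellman target
            target = reward + (0.0 if done else gamma * max(q[next_state]))
            q[state][action] += alpha * (target - q[state][action])
            state = next_state
    return q
```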
“…For decision making, the action space is typically a set of discretized decision choices. You et al. [7] adopted a Q-learning method for vehicle decision making in highway driving scenarios, where the agent learns to accelerate, brake, overtake, and make turns. Mukadam et al. [8] employed deep reinforcement learning with a Q-masking technique to learn a high-level policy for tactical decision making, which they broke down into five actions: no action, accelerate, decelerate, left lane change, and right lane change. For driving control problems, a number of studies treated the control action space as discrete to simplify the problem or improve learning efficiency.…”
Section: Related Work
confidence: 99%
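The excerpt only summarizes the Q-masking idea at a high level. The sketch below illustrates the general principle of masking infeasible actions in a discrete tactical action set before greedy selection; the action ordering, feasibility rules, and the source of the Q-values are illustrative assumptions, not the cited method's actual implementation.

```python
import numpy as np

# Illustrative ordering of the five tactical actions named in the excerpt.
ACTIONS = ["no_action", "accelerate", "decelerate",
           "left_lane_change", "right_lane_change"]

def masked_greedy_action(q_values, lane, n_lanes):
    """Pick the greedy action after masking choices that are infeasible
    in the current lane (the core idea behind Q-masking)."""
    q = np.asarray(q_values, dtype=float).copy()
    if lane == 0:                        # already in the leftmost lane
        q[ACTIONS.index("left_lane_change")] = -np.inf
    if lane == n_lanes - 1:              # already in the rightmost lane
        q[ACTIONS.index("right_lane_change")] = -np.inf
    return ACTIONS[int(np.argmax(q))]

# Example: in the rightmost of three lanes, a right lane change is never chosen,
# so the next-best Q-value ("accelerate") is selected instead.
print(masked_greedy_action([0.1, 0.4, 0.2, 0.3, 0.9], lane=2, n_lanes=3))
```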
“…Firstly, the driving space is modeled in the feature space with the road boundary, traffic lanes, and the position, speed, and acceleration of other vehicles. Secondly, actions are defined either as discrete behavior decisions [91,92] (lane changing, going straight, turning, etc.) or as continuous control signal outputs [93,94] (steering angle and acceleration).…”
Section: The Driving Space in the Reinforcement Learning Decision
confidence: 99%
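The passage above contrasts a feature-space state model with two ways of defining the action space, discrete behavior decisions versus continuous control outputs. A minimal sketch of that distinction using the Gymnasium space API follows; the number of surrounding vehicles, feature layout, and value ranges are chosen purely for illustration.

```python
import numpy as np
from gymnasium import spaces

# State: ego lane index and speed, plus (relative position, speed, acceleration)
# for a fixed number of surrounding vehicles -- the bounds are illustrative.
N_VEHICLES = 6
observation_space = spaces.Box(
    low=-np.inf, high=np.inf, shape=(2 + 3 * N_VEHICLES,), dtype=np.float32
)

# Option 1: discrete behavior decisions (keep lane, change left/right, turn, ...).
discrete_actions = spaces.Discrete(5)

# Option 2: continuous control outputs (steering angle [rad], acceleration [m/s^2]).
continuous_actions = spaces.Box(
    low=np.array([-0.5, -4.0], dtype=np.float32),
    high=np.array([0.5, 3.0], dtype=np.float32),
)
```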
“…As shown in Fig. 12 [91], this research is based on a simulated feature space with the road boundary and vehicles as input and behavior decisions as actions. Reinforcement learning relies on simulation since it needs to find the best driving policy by trial and error; therefore, it needs some adaptation for real-road tests.…”
Section: The Driving Space in the Reinforcement Learning Decision
confidence: 99%