2019 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2019.8852110

Lane Change Decision-making through Deep Reinforcement Learning with Rule-based Constraints

Abstract: Autonomous driving decision-making is a great challenge due to the complexity and uncertainty of the traffic environment. Combined with rule-based constraints, a Deep Q-Network (DQN) based method is applied to the autonomous driving lane change decision-making task in this study. Through the combination of high-level lateral decision-making and low-level rule-based trajectory modification, a safe and efficient lane change behavior can be achieved. With the setting of our state representation and reward function…
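The two-layer design described in the abstract, a DQN choosing high-level lateral actions while rule-based constraints veto unsafe choices, can be illustrated with a minimal sketch. The network sizes, action set, and stubbed safety check below are assumptions for illustration, not the authors' implementation.

import torch
import torch.nn as nn

# Assumed high-level lateral action set; the paper's exact actions may differ.
ACTIONS = ["keep_lane", "change_left", "change_right"]

class QNetwork(nn.Module):
    """Small MLP Q-network; layer sizes are illustrative assumptions."""
    def __init__(self, state_dim: int, n_actions: int = len(ACTIONS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def rule_based_mask(state: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for the low-level rule-based layer: returns a
    # boolean mask of actions permitted by hand-crafted safety rules
    # (e.g. minimum gaps in the target lane). Stubbed to allow everything.
    return torch.ones(len(ACTIONS), dtype=torch.bool)

def select_action(q_net: QNetwork, state: torch.Tensor) -> int:
    """Greedy action over Q-values, with rule-violating actions vetoed."""
    with torch.no_grad():
        q = q_net(state)
    q[~rule_based_mask(state)] = float("-inf")  # veto disallowed actions
    return int(q.argmax().item())

# Usage: q_net = QNetwork(state_dim=27); select_action(q_net, torch.randn(27))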

Cited by 107 publications (39 citation statements)
References 22 publications

“…Udacity, used in [49], is a simulator built for Udacity's Self-Driving Car Nanodegree [50]; it provides various sensors, such as high-quality rendered camera images, LIDAR, and infrared information, and also has capabilities to model other traffic participants.…”
Section: B. Simulators
confidence: 99%
“…[80] is to feed multiple consecutive traffic snapshots into the underlying CNN structure, which inherently extracts the velocity of the moving objects. Representing speed in grid cells is also possible in this setup; an example can be found in [49], where the authors converted the traffic extracted from the Udacity simulator into a lane-based grid.…”
Section: E. Observation Space
confidence: 99%
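A minimal sketch of the lane-based grid encoding mentioned in this citation follows; the lane count, cell resolution, and speed normalization are assumptions, not the encoding actually used in [49].

import numpy as np

def lane_grid(vehicles, n_lanes=3, n_cells=20, cell_len_m=5.0, max_speed=30.0):
    """Rasterize surrounding traffic into a lane-based grid.

    `vehicles` holds (lane_index, longitudinal_offset_m, speed_mps) tuples
    relative to the ego vehicle; layout and units are illustrative. Each
    occupied cell stores normalized speed, so position and velocity are
    encoded in a single channel.
    """
    grid = np.zeros((n_lanes, n_cells), dtype=np.float32)
    half_range = n_cells * cell_len_m / 2.0
    for lane, offset, speed in vehicles:
        if 0 <= lane < n_lanes and -half_range <= offset < half_range:
            cell = int((offset + half_range) // cell_len_m)
            grid[lane, cell] = speed / max_speed
    return grid

Stacking several consecutive grids, e.g. np.stack([g_t2, g_t1, g_t0]), yields the multi-snapshot CNN input from which the network can infer motion, matching the idea attributed to [80].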
“…Another group of studies focuses on strategic decisions, where the agent determines high-level actions, such as lane change, car following, etc. These studies usually use microscopic simulations for the environment, such as Vissim [23], Udacity [24], SUMO [25], or several self-made models [26]. Though hybrid solutions exist, where strategic and direct control meet [27], only a few papers deal with defining a path by some geometric approach and RL and then driving through it with a controller [28], [29].…”
Section: A. Related Work
confidence: 99%
“…The model formulated a set of LC rules which mainly considered the lane utility and LC risk. Wang et al. [3] proposed a reinforcement learning method based on the LC rules, which was used to provide support for autonomous driving decisions. This model mainly considered the safety of self-driving vehicles.…”
Section: Related Work
confidence: 99%
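A gap-acceptance check of the kind such LC rules typically encode can be sketched as follows; the thresholds and inputs are assumptions, not the rules from [3].

def lane_change_safe(gap_front_m: float, gap_rear_m: float,
                     closing_speed_rear_mps: float,
                     min_gap_m: float = 10.0, headway_s: float = 1.5) -> bool:
    """Hypothetical safety rule: require minimum clearances in the target
    lane, enlarging the rear gap when the rear vehicle is approaching."""
    required_rear = min_gap_m + max(closing_speed_rear_mps, 0.0) * headway_s
    return gap_front_m >= min_gap_m and gap_rear_m >= required_rear

A check like this could serve as the rule_based_mask stub in the earlier DQN sketch, vetoing a lane-change action whenever it returns False.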
“…Currently, many methods have been presented to predict LCD (e.g. rule-based [1]–[3], fuzzy reasoning [4]–[6], data-driven algorithms [7]–[10], etc.). However, these studies focused on modeling the influence of surrounding traffic conditions on LC vehicles, and lacked consideration of DP and DS.…”
Section: Introduction
confidence: 99%