2020
DOI: 10.3390/info11020077
Deep Reinforcement Learning Based Left-Turn Connected and Automated Vehicle Control at Signalized Intersection in Vehicle-to-Infrastructure Environment

Abstract: To reduce vehicle delay caused by stops at signalized intersections, this paper designs a micro-control method for a left-turning connected and automated vehicle (CAV) based on an improved deep deterministic policy gradient (DDPG). The micro-control covers the whole process of a left-turn vehicle approaching, entering, and leaving a signalized intersection. In addition, to address the low sampling efficiency and overestimation of the crit…
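The abstract is cut off, but "overestimation of the crit…" points at critic overestimation, a known weakness of plain DDPG. A common remedy (popularized by TD3, and plausibly related to the "improved DDPG" here, though the paper's exact variant is unknown) is to form the TD target from the minimum of two critic estimates. A minimal sketch under that assumption:

```python
def clipped_double_q_target(reward, q1_next, q2_next, done, gamma=0.99):
    """TD target using the minimum of two critic estimates.

    Taking min(Q1, Q2) damps the overestimation bias that a single
    DDPG critic accumulates; the paper's improved DDPG may differ
    in detail from this TD3-style sketch.
    """
    q_min = min(q1_next, q2_next)
    return reward + gamma * (1.0 - done) * q_min

# Non-terminal transition: target = 1.0 + 0.99 * min(10.0, 8.0)
target = clipped_double_q_target(reward=1.0, q1_next=10.0, q2_next=8.0, done=0.0)

# Terminal transition: only the immediate reward survives
terminal = clipped_double_q_target(reward=-1.0, q1_next=2.0, q2_next=1.0, done=1.0)
```

In a full agent, each critic would be a neural network evaluated at the target policy's next action; only the target computation is shown here.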

Cited by 15 publications (10 citation statements)
References 23 publications
“…Reinforcement learning-based adaptive traffic signal control changes traffic signals based on the feedback from the traffic demand, which can be hypothetical dynamic [41, 42, 43, 44] or based on real-world data [45, 46]. The existing literature on using the reinforcement learning approach can be categorised into two groups: networks consisting of CVs [44, 47, 48] and non-CVs environments [45, 46, 49]. Moreover, two general classifications (i.e., vehicle positions and queue length) are available for state representation.…”
Section: Related Work
confidence: 99%
“…Furthermore, all of the papers relevant to this topic have used simulation platforms in order to obtain their desired results. SUMO [43, 44, 49, 50], VISSIM [46, 47], AIMSUN [45], and PARAMICS [30] are the most common software packages in which the combination of traffic simulation and RL can be executed appropriately. Finally, it is worth mentioning that all the previous papers took the signal controller as an agent for their RL algorithm except [47], which used connected vehicles as its agents.…”
Section: Related Work
confidence: 99%
“…For example, Tan et al. [26] used DRL for large-scale adaptive traffic signal control (ATSC). Chen et al. [27] applied DRL to left-turn CAVs at a signalized intersection. Kim and Jeong [28] applied DRL to control multiple signalized intersections.…”
Section: Introduction
confidence: 99%
“…Navigation and traffic jam warning systems also show great potential for the use of low-frequency methods [22], and low-frequency sampling methods can be used in monitoring weather parameters [23].…”
Section: Introduction
confidence: 99%