2018
DOI: 10.1139/cjce-2017-0408
Continuous residual reinforcement learning for traffic signal control optimization

Abstract: Traffic signal control can be naturally regarded as a reinforcement learning problem. Unfortunately, it is one of the most difficult classes of reinforcement learning problems owing to its large state space. A straightforward approach to address this challenge is to control traffic signals based on continuous reinforcement learning. Although continuous reinforcement learning methods have been successful in traffic signal control, they may become unstable and fail to converge to near-optimal solutions. We develop adaptive traffic signal controller…
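The instability the abstract refers to arises when temporal-difference learning is combined with function approximation. Residual-gradient methods address it by descending the true squared Bellman error instead of following the semi-gradient. The sketch below is a hypothetical, minimal illustration of that idea with a linear value function and random synthetic features; it is not the paper's CRL-TSC algorithm, and the feature and reward definitions are placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features = 4
w = np.zeros(n_features)      # linear value-function weights
alpha, gamma = 0.05, 0.95     # step size, discount factor

def value(phi, w):
    return phi @ w

for step in range(1000):
    phi = rng.random(n_features)       # features of current traffic state (placeholder)
    phi_next = rng.random(n_features)  # features of successor state (placeholder)
    reward = -phi.sum()                # e.g. negative queue length (assumed)
    delta = reward + gamma * value(phi_next, w) - value(phi, w)  # TD error
    # Semi-gradient TD(0) would update:   w += alpha * delta * phi
    # The residual-gradient update descends the squared Bellman error,
    # which trades slower learning for convergence guarantees:
    w += alpha * delta * (phi - gamma * phi_next)
```

The only change from ordinary TD(0) is the extra `- gamma * phi_next` term in the gradient, which is the source of the improved stability discussed in the citing papers below.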

Cited by 17 publications (11 citation statements)
References 35 publications
“…Zhang L concluded that extensive simulation results for the designed Shanghai scenarios show that most of the observed counts match the simulated traffic volumes quite well, demonstrating the potential of MATSIM for large-scale dynamic transport simulation [18]. Aslani M developed adaptive traffic signal controllers based on continuous residual reinforcement learning (CRL-TSC) that were more stable; the best CRL-TSC setup reduced average travel time by 15% in comparison to an optimized fixed-time controller [19].…”
Section: Reinforcement Learning Traffic
confidence: 69%
“…(Nuli and Mathew, 2013)), stability (e.g. (Aslani et al, 2018b)), speed of optimization (e.g. in Transfer Learning models (N. Xu et al, 2019)), state and/or action space manageability or generalizability (e.g.…”
Section: Methods' Contribution and Combination
confidence: 99%
“…Aslani et al. [10] introduced the actor-critic method to address the trade-off between exploration of the traffic environment and exploitation of the knowledge already obtained. Aslani et al. [11] developed adaptive traffic signal controllers based on continuous residual reinforcement learning to improve their stability. Jeon et al. [12] suggested a novel artificial-intelligence approach that uses only video images of an intersection; the image-based RL model outperformed both the actual operation of fixed signals and a fully actuated operation.…”
Section: Review of the Literature on Signal Coordination
confidence: 99%
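The actor-critic scheme mentioned in the last statement balances exploration and exploitation by keeping a stochastic policy (the actor) whose updates are driven by a learned value function (the critic). The following is a hedged, generic sketch of a softmax actor-critic with linear function approximation, not the authors' controller; the feature and reward definitions are invented placeholders.

```python
import numpy as np

rng = np.random.default_rng(1)
n_actions, n_features = 3, 4          # e.g. 3 signal phases (assumed)
theta = np.zeros((n_actions, n_features))  # actor (policy) weights
w = np.zeros(n_features)                   # critic (value) weights
alpha_a, alpha_c, gamma = 0.01, 0.05, 0.95

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for step in range(500):
    phi = rng.random(n_features)           # current-state features (placeholder)
    probs = softmax(theta @ phi)
    a = rng.choice(n_actions, p=probs)     # sampling the policy explores
    reward = -phi.sum()                    # e.g. negative delay (assumed)
    phi_next = rng.random(n_features)
    delta = reward + gamma * (phi_next @ w) - phi @ w  # TD error
    w += alpha_c * delta * phi                         # critic update
    grad_log = -probs[:, None] * phi[None, :]          # grad of log softmax policy
    grad_log[a] += phi
    theta += alpha_a * delta * grad_log                # actor update
```

The critic's TD error serves as the learning signal for both components, which is what lets the actor exploit accumulated value knowledge while its stochastic action selection continues to explore.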