2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9206820
IEDQN: Information Exchange DQN with a Centralized Coordinator for Traffic Signal Control

Cited by 14 publications (10 citation statements). References 24 publications.
“…POMDPs are useful when the same state may be observed differently by different agents due to randomness. Works in RL-based TSC that use a POMDP framework [29,34,35] use observations either to represent local intersection states in a road network (e.g., [34]) or to represent incomplete inputs from detectors (as [35] does for connected-vehicle data).…”
Section: Partially Observable MDPs
confidence: 99%
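To make the two uses of observations concrete, here is a minimal sketch of a POMDP-style observation function for traffic signal control. All names (`observe`, the state layout) and the detection probability are illustrative assumptions, not taken from the cited papers: each agent sees only its own intersection's queues (local state), and each queued vehicle is detected only with some probability (incomplete detector or connected-vehicle input).

```python
# Sketch: a POMDP-style partial observation for one traffic-light agent.
# Names and the state layout are hypothetical, for illustration only.
import random

def observe(full_state, intersection_id, detect_prob=0.9):
    """Return a partial, noisy view of the global traffic state.

    The agent sees only the queues at its own intersection, and each
    queued vehicle is observed only with probability `detect_prob`,
    mimicking incomplete detector / connected-vehicle data.
    """
    local = full_state[intersection_id]  # local view only, not the network
    return {
        lane: sum(random.random() < detect_prob for _ in range(queue))
        for lane, queue in local.items()
    }

# Example: two intersections; agent 0 never observes intersection 1.
state = {0: {"N": 4, "S": 2, "E": 0, "W": 5},
         1: {"N": 1, "S": 3, "E": 2, "W": 0}}
print(observe(state, 0))  # e.g. {'N': 4, 'S': 1, 'E': 0, 'W': 4}
```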
“…In [76], Donghan et al. embedded SADRL in a centralized agent (the centralized controller) to coordinate the traffic phases of distributed agents (the traffic lights, A.2) based on traffic conditions, in order to manage the congestion level. The agents maximize their individual rewards in a competitive (X.2.1) manner.…”
Section: Donghan's SADRL Approach for a Hierarchical Multi-Agent Environment
confidence: 99%
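The coordinator-over-agents pattern described in this statement can be sketched as follows. This is not the authors' algorithm; it is a heavily simplified illustration under assumed names (`Coordinator`, `LightAgent`, the congestion signal), showing a central controller that aggregates traffic conditions and feeds a network-level signal back to traffic-light agents that each maximize their own score.

```python
# Sketch of a centralized coordinator over distributed traffic-light
# agents; all classes, scores, and constants are assumptions.
import random
from typing import Dict, List

class LightAgent:
    """One traffic light; greedily picks the phase with the best local score."""
    def __init__(self, agent_id: int, n_phases: int = 4):
        self.agent_id = agent_id
        self.n_phases = n_phases

    def local_obs(self) -> List[float]:
        # Stand-in for per-phase queue lengths read from detectors.
        return [random.uniform(0, 10) for _ in range(self.n_phases)]

    def act(self, obs: List[float], pressure: float) -> int:
        # Each agent maximizes its own reward (competitive setting); the
        # coordinator's congestion signal rescales the local scores.
        scores = [q * (1.0 + pressure) for q in obs]
        return max(range(self.n_phases), key=scores.__getitem__)

class Coordinator:
    """Central controller: aggregates traffic conditions and broadcasts
    a network-level congestion signal to every agent."""
    def step(self, agents: List[LightAgent]) -> Dict[int, int]:
        obs = {a.agent_id: a.local_obs() for a in agents}
        congestion = sum(sum(o) for o in obs.values()) / max(len(obs), 1)
        pressure = congestion / 40.0  # crude normalization (assumption)
        return {aid: agents[aid].act(o, pressure) for aid, o in obs.items()}

agents = [LightAgent(i) for i in range(3)]
print(Coordinator().step(agents))  # e.g. {0: 2, 1: 0, 2: 3}
```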
“…Similarly to [33,36,76,79], in [83], Tian et al. embedded MADRL in distributed agents (the traffic lights, A.2) to enhance their exploration strategy. A multi-agent environment is collaborative (X.2.2) in nature.…”
Section: Tian's MADRL Approach with Bootstrapping in a Multi-Agent Environment
confidence: 99%
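The bootstrapping mentioned here is typically realized as bootstrapped Q-heads (in the style of Bootstrapped DQN): several Q-value heads trained on overlapping bootstrap samples, with one head sampled per episode to drive temporally consistent exploration. The tiny linear heads and all shapes below are illustrative assumptions, not the cited implementation.

```python
# Sketch of bootstrapped exploration with K independent Q-heads.
# Linear heads and dimensions are assumptions for illustration.
import numpy as np

rng = np.random.default_rng(0)
N_HEADS, OBS_DIM, N_ACTIONS = 5, 8, 4

# One independent (here, linear) Q-head per bootstrap sample.
heads = [rng.normal(size=(OBS_DIM, N_ACTIONS)) for _ in range(N_HEADS)]

def run_episode_action(obs):
    # Sample one head per episode and act greedily with it; disagreement
    # between heads yields consistent ("deep") exploration over time.
    k = rng.integers(N_HEADS)
    return int(np.argmax(obs @ heads[k]))

# During training, a transition is shared with head k only if its
# bootstrap mask m[k] ~ Bernoulli(p) is 1.
mask = rng.random(N_HEADS) < 0.8
print(run_episode_action(rng.normal(size=OBS_DIM)), mask)
```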
“…Increasing the number of hidden layers in a DNN can improve its learning ability and task performance. Deep Reinforcement Learning (DRL) [5][6][7] is one technique that has improved significantly with the adoption of DNNs, and it has been applied in areas such as autonomous voltage control for power grid operation [8,9], battery management systems [10], network traffic signal control [11], and human-machine collaboration [12]. One of the most commonly used DRL methods is the Deep Q-Network (DQN), which approximates the Q-value function with a DNN [13,14]. In this paper, DQN is used to train the agent of an autonomous self-driving vehicle in two simulated environments.…”
Section: Introduction
confidence: 99%
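Since this last statement turns on DQN approximating the Q-value function with a DNN, a minimal sketch of the core update may help. The network sizes, hyperparameters, and PyTorch framing are assumptions; the one-step TD target with a frozen target network is the standard DQN recipe [13,14], not this paper's specific architecture.

```python
# Minimal DQN sketch: a DNN approximates Q(s, .) and is trained toward
# the one-step TD target; sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 8, 4, 0.99

q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                           nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())  # periodically synced copy
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

def td_update(s, a, r, s2, done):
    """One gradient step on the Bellman error for a batch."""
    q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)          # Q(s, a)
    with torch.no_grad():                                      # frozen target
        target = r + gamma * target_net(s2).max(1).values * (1 - done)
    loss = nn.functional.mse_loss(q, target)
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

batch = (torch.randn(32, obs_dim), torch.randint(n_actions, (32,)),
         torch.randn(32), torch.randn(32, obs_dim), torch.zeros(32))
print(td_update(*batch))
```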