2022
DOI: 10.3390/su141912033
A Q-Learning-Based Approximate Solving Algorithm for Vehicular Route Game

Abstract: The route game is recognized as an effective method to alleviate Braess' paradox, in which new traffic congestion arises because numerous vehicles obey the same guidance from a selfish route-guidance system (such as Google Maps). Conventional route games are symmetric, since a vehicle's payoff depends only on the distribution of selected routes and not on which vehicles chose them; this symmetry allows the precise Nash equilibrium to be solved by constructing a special potential function. However, with the arrival of smart cities, the …
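The potential-function argument in the abstract can be illustrated with a minimal sketch. This is not the paper's model: the two-route setup, the linear latency functions, and the vehicle count below are illustrative assumptions. In a symmetric congestion game, minimizing the Rosenthal potential over route distributions yields a Nash equilibrium, which a brute-force search can find for small instances.

```python
# Illustrative sketch (assumed toy model, not the paper's): a symmetric
# two-route congestion game whose Nash equilibrium is recovered by
# minimizing the Rosenthal potential over route distributions.

def latency_a(k):
    # assumed travel time on route A with k vehicles
    return 10 + 2 * k

def latency_b(k):
    # assumed travel time on route B with k vehicles
    return 15 + 1 * k

def potential(k_a, n):
    # Rosenthal potential: sum of marginal latencies as vehicles
    # join each route one by one
    k_b = n - k_a
    return (sum(latency_a(i) for i in range(1, k_a + 1))
            + sum(latency_b(i) for i in range(1, k_b + 1)))

def nash_split(n):
    # Because payoffs depend only on the route distribution (symmetry),
    # the potential minimizer over k_a = 0..n is a Nash equilibrium.
    return min(range(n + 1), key=lambda k_a: potential(k_a, n))

k_a = nash_split(10)  # -> 5: both routes then have equal latency (20)
```

At the split 5/5 both latencies equal 20, so no single vehicle gains by switching, which is exactly the equilibrium condition the potential minimizer certifies.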

Cited by 2 publications (1 citation statement) | References 41 publications
“…In this section, similar to our previous works [51], the Q‐learning‐based universal algorithm is adopted to approximate the Nash equilibrium of the route game. A Q‐learning algorithm consists of four elements—reward function, optimization rules, state set, and action set.…”
Section: Contributions on Fairness Mechanism-based CVRG
Mentioning confidence: 99%
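The citation statement names the four elements of a Q-learning algorithm. A hedged sketch of that idea, under assumptions of my own (a toy two-route game, stateless independent learners, illustrative latency functions and hyperparameters, none of which come from the cited paper):

```python
import random

# Sketch of the cited idea: independent Q-learning agents approximating
# the Nash equilibrium of a toy two-route game. The four elements from
# the citation statement are marked in comments; the latency functions
# and hyperparameters below are illustrative assumptions.

N_VEHICLES = 10
ACTIONS = [0, 1]                      # action set: route 0 or route 1
LATENCY = [lambda k: 10 + 2 * k,      # assumed travel time, route 0
           lambda k: 15 + 1 * k]      # assumed travel time, route 1

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    rng = random.Random(seed)
    # state set: a single recurring state, so each agent keeps
    # just one Q-value per action
    q = [[0.0, 0.0] for _ in range(N_VEHICLES)]
    for _ in range(episodes):
        # epsilon-greedy selection from each agent's Q-values
        acts = [rng.choice(ACTIONS) if rng.random() < eps
                else max(ACTIONS, key=lambda a, i=i: q[i][a])
                for i in range(N_VEHICLES)]
        counts = [acts.count(0), acts.count(1)]
        for i, a in enumerate(acts):
            reward = -LATENCY[a](counts[a])     # reward function: -travel time
            # optimization rule: stateless Q-learning update
            q[i][a] += alpha * (reward - q[i][a])
    # greedy route distribution after learning
    final = [max(ACTIONS, key=lambda a, i=i: q[i][a])
             for i in range(N_VEHICLES)]
    return [final.count(0), final.count(1)]

split = train()  # route distribution near this toy game's 5/5 equilibrium
```

Each agent only observes its own reward, so the learned profile approximates rather than exactly solves the equilibrium, which matches the "approximate solving" framing of the paper's title.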