2020 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn48605.2020.9207446
An Improved Minimax-Q Algorithm Based on Generalized Policy Iteration to Solve a Chaser-Invader Game

Cited by 4 publications (1 citation statement)
References 14 publications
“…Benefiting from the rapid development of RL, it was gradually applied to solving multi-agent system problems. Game theory was first introduced into MAS together with the concept of learning, in what became known as the minimax-Q algorithm [28]. Many derivative methods were later studied on this basis, such as Nash Q-learning [29], which was mainly used to solve zero-sum differential games.…”
Section: Introduction (mentioning)
confidence: 99%
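For context, the minimax-Q algorithm cited above as [28] learns an action-value function over joint (agent, opponent) actions and backs each state up by its minimax value, which is typically obtained from a small linear program. The following is a minimal sketch in Python; the function names, state/action shapes, and hyperparameters (alpha, gamma) are illustrative assumptions and are not taken from the indexed paper.

import numpy as np
from scipy.optimize import linprog

def minimax_value(Q_s):
    """Solve max_pi min_o sum_a pi(a) * Q_s[a, o] with a linear program.

    Q_s: (num_actions, num_opponent_actions) payoff matrix for one state.
    Returns (value, pi), where pi is the maximin mixed strategy.
    """
    n_a, n_o = Q_s.shape
    # Decision variables: x = [pi_1, ..., pi_{n_a}, v]; minimize -v (i.e. maximize v).
    c = np.zeros(n_a + 1)
    c[-1] = -1.0
    # For every opponent action o: v - sum_a pi(a) * Q_s[a, o] <= 0.
    A_ub = np.hstack([-Q_s.T, np.ones((n_o, 1))])
    b_ub = np.zeros(n_o)
    # The strategy pi must be a probability distribution.
    A_eq = np.concatenate([np.ones(n_a), [0.0]]).reshape(1, -1)
    b_eq = np.array([1.0])
    bounds = [(0.0, 1.0)] * n_a + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:n_a]

def minimax_q_update(Q, s, a, o, r, s_next, alpha=0.1, gamma=0.95):
    """One tabular minimax-Q step:
    Q(s, a, o) <- (1 - alpha) * Q(s, a, o) + alpha * (r + gamma * V(s')).
    Q is a (num_states, num_actions, num_opponent_actions) array.
    """
    v_next, _ = minimax_value(Q[s_next])
    Q[s, a, o] += alpha * (r + gamma * v_next - Q[s, a, o])
    return Q

The per-update linear program is what the classical minimax-Q backup requires; per its title, the indexed paper's improved variant instead builds on generalized policy iteration, and those details are not reproduced here.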