2022
DOI: 10.1016/j.jmaa.2022.126138
Rates of convergence for the policy iteration method for Mean Field Games systems

Cited by 9 publications (1 citation statement)
References 21 publications
“…To design implicit finite difference schemes, iterative methods are needed to reduce the problem to a sequence of linear systems. Iterative methods employed in solving MFGs include Newton's method [6,7,9,28], fixed-point iteration, fictitious play, policy iteration [15,29], smoothed policy iteration [34], etc. In particular, the numerical solution of MFGs with non-separable Hamiltonians has been discussed in, e.g., [6,7,23,28,29].…”
Section: Introduction
confidence: 99%
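The citation statement notes that policy iteration reduces the problem to a sequence of linear systems: each step solves a linear (policy evaluation) problem and then updates the control greedily. The sketch below illustrates that alternation on a toy finite Markov decision process, not on the coupled HJB/Fokker-Planck system of the cited paper; the transition matrices `P`, rewards `R`, and discount `gamma` are hypothetical example data.

```python
# Policy iteration on a toy 2-state, 2-action MDP (illustration only; the
# paper applies the same evaluate/improve alternation to an MFG system).

# Hypothetical data: P[a][s][t] = transition prob., R[a][s] = reward.
P = {0: [[0.9, 0.1], [0.2, 0.8]],
     1: [[0.5, 0.5], [0.6, 0.4]]}
R = {0: [1.0, 0.0],
     1: [0.0, 2.0]}
gamma = 0.9
n_states = 2


def evaluate(policy, tol=1e-10):
    # Policy evaluation: solve the linear system V = R_pi + gamma * P_pi V,
    # here by simple fixed-point iteration. This is the "linear system"
    # solved at every outer policy-iteration step.
    V = [0.0] * n_states
    while True:
        V_new = [R[policy[s]][s]
                 + gamma * sum(P[policy[s]][s][t] * V[t]
                               for t in range(n_states))
                 for s in range(n_states)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new


def policy_iteration():
    policy = [0] * n_states
    while True:
        V = evaluate(policy)
        # Policy improvement: greedy control w.r.t. the current value.
        new_policy = [max((0, 1),
                          key=lambda a: R[a][s]
                          + gamma * sum(P[a][s][t] * V[t]
                                        for t in range(n_states)))
                      for s in range(n_states)]
        if new_policy == policy:      # stable policy => optimal
            return policy, V
        policy = new_policy


policy, V = policy_iteration()
```

For a finite MDP the loop terminates in finitely many steps because there are finitely many policies and each improvement step is monotone; the cited paper's contribution is quantifying the convergence rate of the analogous iteration for MFG systems.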