2021
DOI: 10.48550/arXiv.2110.02552
Preprint

Policy iteration method for time-dependent Mean Field Games systems with non-separable Hamiltonians

Abstract: We introduce two algorithms based on a policy iteration method to numerically solve time-dependent Mean Field Game systems of partial differential equations with non-separable Hamiltonians. We prove the convergence of these algorithms on sufficiently small time intervals using the Banach fixed-point method, and we show that the convergence rates are linear. We illustrate our theoretical results with numerical examples and discuss the performance of the proposed algorithms.
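The abstract does not reproduce the scheme itself. As a rough sketch, under illustrative sign conventions, with the diffusion coefficient ν, initial density m_0, and terminal cost u_T assumed rather than taken from the paper, the time-dependent MFG system with a non-separable Hamiltonian H = H(x, Du, m) can be written as

\[
\begin{cases}
-\partial_t u - \nu\,\Delta u + H(x, Du, m) = 0, & u(x,T) = u_T(x),\\
\hphantom{-}\partial_t m - \nu\,\Delta m - \operatorname{div}\!\big(m\, D_p H(x, Du, m)\big) = 0, & m(x,0) = m_0(x),
\end{cases}
\]

and, writing $H(x,p,m) = \sup_q \{\, q \cdot p - L(x,q,m) \,\}$, one policy iteration sweep starting from a policy $q^{(n)}$ reads

\[
\begin{aligned}
&\text{(i)}\ \ \partial_t m^{(n)} - \nu\,\Delta m^{(n)} - \operatorname{div}\big(m^{(n)} q^{(n)}\big) = 0,\\
&\text{(ii)}\ -\partial_t u^{(n)} - \nu\,\Delta u^{(n)} + q^{(n)} \cdot D u^{(n)} - L\big(x, q^{(n)}, m^{(n)}\big) = 0,\\
&\text{(iii)}\ q^{(n+1)} = \operatorname{arg\,max}_{q} \big\{\, q \cdot D u^{(n)} - L\big(x, q, m^{(n)}\big) \,\big\} = D_p H\big(x, D u^{(n)}, m^{(n)}\big).
\end{aligned}
\]

Steps (i) and (ii) are linear once the policy $q^{(n)}$ is frozen, which is the decoupling into a sequence of linear problems that the citation statements below refer to.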

Cited by 3 publications (8 citation statements) · References 47 publications

“…Some numerical examples have been considered in [19] for comparing the policy iteration method and the Newton method for solving MFGs. In many of these the policy iteration method turns out to be more efficient in terms of computing time.…”
Section: A Rate Of Convergence For the Policy Iteration Method: The E...
confidence: 99%
“…Despite the previous limitations, the policy iteration method retains the advantage of replacing the solution of a strongly coupled nonlinear system with a sequence of decoupled linear problems. Moreover, in a neighborhood of the solution, the rapid convergence of the value function u^(n) is also reflected in an equally rapid convergence of the distribution m^(n) (see [9] and [19] for some numerical simulations).…”
Section: Introduction
confidence: 98%
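To make the "sequence of decoupled linear problems" concrete, here is a minimal, self-contained Python sketch of the three-step sweep on a 1D periodic grid. The non-separable Hamiltonian H(x, p, m) = p^2 / (2(1 + m)), the data m0 and uT, and every discretization parameter are assumptions chosen for illustration; none of them comes from the paper or from the citing works.

import numpy as np

# Minimal 1D sketch of policy iteration for a time-dependent MFG with the
# (illustrative) non-separable Hamiltonian
#     H(x, p, m) = p^2 / (2 (1 + m)),   so   D_p H = p / (1 + m),
# Lagrangian L(x, q, m) = (1 + m) q^2 / 2, diffusion nu, periodic in space.
# Explicit Euler stepping; dt is kept below the diffusive stability limit.

nu = 0.2                        # diffusion coefficient
Nx, Nt, T = 32, 500, 0.5        # space points, time steps, horizon (small T)
x = np.linspace(0.0, 1.0, Nx, endpoint=False)
dx, dt = 1.0 / Nx, T / Nt       # dt = 1e-3 < dx^2 / (2 nu) ~ 2.4e-3

m0 = np.exp(-40 * (x - 0.5) ** 2); m0 /= m0.sum() * dx   # initial density
uT = 0.5 * np.cos(2 * np.pi * x)                         # terminal cost

def dxc(v):   # centred first derivative / 1D divergence, periodic
    return (np.roll(v, -1) - np.roll(v, 1)) / (2 * dx)

def lap(v):   # second derivative, periodic
    return (np.roll(v, -1) - 2 * v + np.roll(v, 1)) / dx ** 2

q = np.zeros((Nt + 1, Nx))      # initial policy guess q^(0) = 0

for n in range(50):             # policy iteration sweeps
    # (i) forward linear Fokker-Planck:  m_t = nu m_xx + (m q)_x
    m = np.empty((Nt + 1, Nx)); m[0] = m0
    for k in range(Nt):
        m[k + 1] = m[k] + dt * (nu * lap(m[k]) + dxc(m[k] * q[k]))
    # (ii) backward linear HJB:  -u_t = nu u_xx - q u_x + L(x, q, m)
    u = np.empty((Nt + 1, Nx)); u[Nt] = uT
    for k in range(Nt, 0, -1):
        L = 0.5 * (1 + m[k]) * q[k] ** 2
        u[k - 1] = u[k] + dt * (nu * lap(u[k]) - q[k] * dxc(u[k]) + L)
    # (iii) policy update:  q^(n+1) = D_p H(x, u_x, m) = u_x / (1 + m)
    q_new = np.stack([dxc(u[k]) / (1 + m[k]) for k in range(Nt + 1)])
    err = float(np.abs(q_new - q).max())
    q = q_new
    if err < 1e-8:              # stop once the policy stops moving
        break

print(f"policy iteration: {n + 1} sweeps, last increment {err:.2e}")

With explicit Euler stepping the time step must stay below the diffusive stability limit dx^2/(2 nu), which is why Nt is large relative to the short horizon T; the small-T restriction also mirrors the local-in-time convergence proved in the paper.
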
“…Among the numerical methods proposed for mean-field control problems, we refer to [21] for a policy gradient-type method where feedback controls are approximated by neural networks and optimised for a given objective function; to [32] and again to [21] for a mean-field FBSDE method, generalising the deep BSDE method to mean-field dependence and in the former case to delayed effects; to [48] for a hybrid model where the mean-field distribution is approximated by a particle system and the control is obtained by numerical approximation of a PDE; and to [6] for a survey of methods for the coupled PDE systems, mainly in the spirit of the seminal works [3,2,5]; see also a related semi-Lagrangian scheme in [16]; a gradient method and penalisation approach in [44]; and a recent analysis of policy iteration in [38].…”
Section: Numerics For Mean-field Control Problems
confidence: 99%
“…A similar iterative procedure has been analysed concurrently with this work in [38] for the classical mean-field game setting. The authors there propose a policy iteration where iterative approximation of the optimal control, as in the context of standard control problems, is intertwined with the iteration over the forward density, and prove convergence locally in time.…”
Section: Numerical Solution
confidence: 99%