2017
DOI: 10.48550/arxiv.1711.03156
Preprint

Learning Deep Mean Field Games for Modeling Large Population Behavior

Abstract: We consider the problem of representing collective behavior of large populations and predicting the evolution of a population distribution over a discrete state space. A discrete time mean field game (MFG) is motivated as an interpretable model founded on game theory for understanding the aggregate effect of individual actions and predicting the temporal evolution of population distributions. We achieve a synthesis of MFG and Markov decision processes (MDP) by showing that a special MFG is reducible to an MDP.…
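The abstract's core object, the temporal evolution of a population distribution over a discrete state space, can be illustrated with a minimal sketch. This is not the paper's method; the transition matrix `P` below is a hypothetical stand-in for the population's aggregate policy, and the loop simply propagates the distribution forward in discrete time.

```python
import numpy as np

def evolve(pi, P, steps):
    """Iterate pi_{t+1} = pi_t @ P: forward evolution of a population
    distribution pi over a discrete state space under a fixed
    row-stochastic transition matrix P."""
    for _ in range(steps):
        pi = pi @ P
    return pi

# Toy 3-state example (illustrative values only).
P = np.array([[0.8, 0.2, 0.0],
              [0.1, 0.8, 0.1],
              [0.0, 0.2, 0.8]])
pi0 = np.array([1.0, 0.0, 0.0])   # the whole population starts in state 0
pi5 = evolve(pi0, P, 5)
assert np.isclose(pi5.sum(), 1.0)  # still a point on the probability simplex
```

Each step conserves probability mass because `P` is row-stochastic, so the mean-field state stays on the finite-dimensional probability simplex throughout.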

Cited by 11 publications (9 citation statements)
References 14 publications (26 reference statements)

“…[20] shows that under certain conditions, fictitious play converges to an MFE. [28] proposed a deep mean field game to model strategic interactions among the players. [18] proposed a model-free nonstationary algorithm to compute an MFE.…”
Section: Related Literature
confidence: 99%
“…Model-based approaches to the mean-field approximation require knowledge of model parameters (see, for example, Elliott et al., 2013; Mahajan, 2015, 2016; Li et al., 2017), while model-free methods (Yang et al., 2017) only come with algorithms with asymptotic analysis. In contrast, our method is model-free and comes with a provable global non-asymptotic convergence analysis.…”
Section: Related Work
confidence: 99%
“…To scale up learning in MARL problems in the presence of a large number of, even infinitely many, agents, mean-field approximation has been explored to directly model the population behavior of the agents. Yang et al. (2017) consider a mean-field game with deterministic linear state transitions and show that it can be reformulated as a mean-field MDP, with the mean-field state residing in a finite-dimensional probability simplex. Yang et al. (2018) take the mean-field approximation over actions, such that the interaction between any given agent and the population is approximated by the interaction between the agent's action and the averaged actions of its neighboring agents.…”
Section: Introduction
confidence: hi
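The mean-field approximation over actions described in the statement above can be sketched briefly. This is an illustrative toy, not the cited paper's implementation; the one-hot encoding and the `neighbors` mapping are assumptions made for the example.

```python
import numpy as np

def mean_neighbor_action(actions, neighbors, agent):
    """Collapse an agent's pairwise interactions into one interaction with
    the averaged (one-hot encoded) actions of its neighboring agents."""
    onehot = np.eye(actions.max() + 1)[actions]   # one-hot encode all actions
    return onehot[neighbors[agent]].mean(axis=0)  # average over the neighborhood

actions = np.array([0, 1, 1, 2])   # joint discrete actions of 4 agents
neighbors = {0: [1, 2, 3]}         # agent 0 interacts with agents 1, 2, 3
mu = mean_neighbor_action(actions, neighbors, 0)
# mu is an empirical distribution over actions: [0, 2/3, 1/3]
```

Agent 0's value function would then condition on `mu` instead of on every neighbor's individual action, which is what keeps the approximation tractable as the population grows.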