We explore reinforcement learning methods for finding the optimal policy in the linear quadratic regulator (LQR) problem. In particular, we consider the convergence of policy gradient methods in the settings of both known and unknown parameters. We establish a global linear convergence guarantee for this approach in the setting of a finite time horizon and stochastic state dynamics, under weak assumptions. We also establish the convergence of a projected policy gradient method, which handles problems with constraints. We illustrate the performance of the algorithm with two examples. The first example is the optimal liquidation of a holding in an asset. We show results both for the case where we assume a model for the underlying dynamics and for the case where the method is applied to the data directly. The empirical evidence suggests that the policy gradient method can learn the globally optimal solution for a larger class of stochastic systems containing the LQR framework, and that it is more robust to model misspecification than a model-based approach. The second example is an LQR system in a higher-dimensional setting with synthetic data.
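To make the approach concrete, the following is a minimal sketch of a model-free policy gradient method for finite-horizon LQR: a linear feedback gain K is improved using a zeroth-order gradient estimate built purely from simulated rollouts. The two-dimensional dynamics, cost matrices, smoothing radius, and step size are illustrative assumptions, not values or the exact algorithm from the paper.

```python
# A minimal sketch (not the paper's exact algorithm) of model-free policy
# gradient for finite-horizon stochastic LQR. All parameters below are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, m, T = 2, 1, 20                      # state dim, control dim, horizon
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
Q, R = np.eye(n), 0.1 * np.eye(m)

def rollout_cost(K, noise_scale=0.01):
    """Simulate x_{t+1} = A x_t + B u_t + w_t under u_t = -K x_t."""
    x = np.array([1.0, 0.0])
    cost = 0.0
    for _ in range(T):
        u = -K @ x
        cost += x @ Q @ x + u @ R @ u
        x = A @ x + B @ u + noise_scale * rng.standard_normal(n)
    return cost

def zeroth_order_grad(K, r=0.05, samples=50):
    """Two-point smoothed estimate of the gradient of J(K) from rollouts only."""
    g = np.zeros_like(K)
    for _ in range(samples):
        U = rng.standard_normal(K.shape)
        U /= np.linalg.norm(U)
        g += (rollout_cost(K + r * U) - rollout_cost(K - r * U)) / (2 * r) * U
    return g / samples

K = np.zeros((m, n))                    # initial feedback gain u_t = -K x_t
for it in range(200):
    K -= 1e-3 * zeroth_order_grad(K)    # plain gradient step on J(K)
```

When the parameters (A, B, Q, R) are known, the rollout-based estimator above could be replaced by the exact gradient of J(K); the zeroth-order version stands in for the unknown-parameter setting.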
In this paper we formulate and analyze an N-player stochastic game of the classical fuel follower problem and its Mean Field Game (MFG) counterpart. For the N-player game, we obtain the Nash equilibrium (NE) explicitly by deriving and analyzing a system of Hamilton-Jacobi-Bellman (HJB) equations, and by establishing the existence of a unique strong solution to the associated Skorokhod problem on an unbounded polyhedron with an oblique reflection. For the MFG, we derive a bang-bang-type NE under some mild technical conditions via the viscosity solution approach. We also show that this solution is an ε-NE to the N-player game, with ε = O(1/√N). The N-player game and the MFG differ in that the NE for the former is state dependent, while the NE for the latter is a threshold-type bang-bang policy whose threshold is state independent. Our analysis shows that the NE for a stationary MFG may not be the NE for the corresponding MFG.
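The threshold-type bang-bang policy can be pictured with a short simulation. This is a schematic sketch assuming one-dimensional dynamics dX_t = u_t dt + σ dW_t, a bounded control rate, and a hand-picked threshold c; in the paper the threshold comes out of the HJB/viscosity analysis rather than being fixed by hand, and the cost here is only a stand-in for the fuel follower objective.

```python
# A schematic sketch of a threshold-type bang-bang policy for a scalar
# controlled diffusion. The dynamics, control bound, threshold c, and cost
# are illustrative assumptions, not the paper's derived NE.
import numpy as np

sigma, c, u_max = 1.0, 0.5, 1.0         # noise level, threshold, max rate
dt, T = 0.01, 5.0
rng = np.random.default_rng(1)

def bang_bang(x):
    """Push at full rate toward 0 outside [-c, c]; idle inside (bang-bang)."""
    if x > c:
        return -u_max
    if x < -c:
        return u_max
    return 0.0

def simulate(x0=2.0):
    """Roll out the policy, accumulating a running state cost plus fuel."""
    x, cost = x0, 0.0
    for _ in range(int(T / dt)):
        u = bang_bang(x)
        cost += (x**2 + abs(u)) * dt     # quadratic tracking cost + fuel
        x += u * dt + sigma * np.sqrt(dt) * rng.standard_normal()
    return cost
```

Note that the threshold c is state independent, which is exactly the structural contrast the abstract draws with the state-dependent NE of the N-player game.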
This paper presents a general mean-field game (GMFG) framework for simultaneous learning and decision making in stochastic games with a large population. It first establishes the existence of a unique Nash equilibrium for this GMFG, and it demonstrates that naively combining reinforcement learning with the fixed-point approach of classical mean-field games yields unstable algorithms. It then proposes value-based and policy-based reinforcement learning algorithms (GMF-V and GMF-P, respectively) with smoothed policies, along with an analysis of their convergence properties and computational complexities. Experiments on an equilibrium product pricing problem demonstrate that two specific instantiations of GMF-V with Q-learning and GMF-P with trust region policy optimization (GMF-V-Q and GMF-P-TRPO, respectively) are both efficient and robust in the GMFG setting. Moreover, their performance is superior in convergence speed, accuracy, and stability when compared with existing algorithms for multi-agent reinforcement learning in the N-player setting.
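A toy version of the GMF-V-style loop makes the role of policy smoothing concrete. The sketch below alternates tabular Q-learning on the MDP induced by a frozen mean field with a mean-field update under a softmax (smoothed) policy; the tiny state space, reward model, temperature, and learning rate are illustrative assumptions, not the paper's GMF-V-Q specification.

```python
# A toy sketch of a GMF-V-style loop: Q-learning against a frozen mean
# field, then a mean-field update under a smoothed (softmax) policy.
# The model below is hypothetical and only illustrates the structure.
import numpy as np

S, nA, tau, gamma = 3, 2, 0.5, 0.9      # states, actions, temperature, discount
rng = np.random.default_rng(2)

def step(s, a, mu):
    """Hypothetical dynamics/reward depending on the mean field mu."""
    s2 = (s + a) % S
    r = -abs(s - np.argmax(mu)) - 0.1 * a   # crowd-dependent reward (toy)
    return s2, r

def softmax_policy(Q, tau):
    """Smoothed policy; the smoothing is what stabilizes the outer loop."""
    p = np.exp(Q / tau)
    return p / p.sum(axis=1, keepdims=True)

mu = np.ones(S) / S                      # initial population distribution
for outer in range(20):                  # mean-field fixed-point iteration
    Q = np.zeros((S, nA))
    for _ in range(2000):                # inner Q-learning with mu frozen
        s, a = rng.integers(S), rng.integers(nA)
        s2, r = step(s, a, mu)
        Q[s, a] += 0.1 * (r + gamma * Q[s2].max() - Q[s, a])
    pi = softmax_policy(Q, tau)
    mu_next = np.zeros(S)                # propagate the population under pi
    for s in range(S):
        for a in range(nA):
            s2, _ = step(s, a, mu)
            mu_next[s2] += mu[s] * pi[s, a]
    mu = mu_next
```

Replacing the softmax with a greedy argmax policy is the "naive" combination the abstract warns about: the outer fixed-point iteration can then oscillate instead of converging.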