In this work we consider an L∞ minimax ergodic optimal control problem with cumulative cost. We approximate the cost function as a limit of evolution problems. We present the associated Hamilton-Jacobi-Bellman equation and prove that it has a unique solution in the viscosity sense. As this HJB equation is amenable to discretization, we use that discretization to obtain an approximation procedure for the original problem. For the numerical solution of the ergodic version we need a perturbation of the instantaneous cost function. We give an appropriate selection of the discretization and penalization parameters so that the discrete solutions converge to the optimal cost. We present numerical results.
The continuous problem

Minimax optimal control problems have been intensively studied in recent years. In particular, infinite-horizon control problems with cumulative cost are studied in Alvarez and Barron (2000) [2]. Considering these costs, we arrive at ergodic problems. We consider the problem of minimizing the cost functional J(x, α), where h and g are given functions, α is a control and the trajectory y(·) is given by the ordinary differential equation y'(s) = f(y(s), α(s)), y(0) = x. The optimal cost is defined as U(x) = inf_{α∈A} J(x, α), but it has little regularity. As in [1], we study U(x) = sup_t u(t, x), where u is the solution of the associated evolution problem.

We develop a numerical procedure to obtain approximations of the function U. This procedure consists in reducing the original problem to an optimization problem on a deterministic Markov chain. The discrete optimum is computed with an algorithm which converges in a finite number of steps.

Properties of the continuous problem

Under the assumptions that A is a compact subset of R^m, that f : R^r × A → R^r and g, h : R^r × A → R are continuous, Lipschitz-continuous and Z^r-periodic in the x variable, and that the dynamical system is controllable, Alvarez and Barron proved in [2] results in terms of the long-run average cost λ = inf_α liminf_{t→∞} (1/t) ∫_0^t h(y(s), α(s)) ds. In [1] it is proved that U is Lipschitz continuous and satisfies the associated Hamilton-Jacobi-Bellman equation in the viscosity sense.

The penalized problem

For the case λ = 0 we cannot use a direct discretization of the equation, because we may arrive at the discrete solution U_k = +∞. To avoid this difficulty, we define the ε-penalized problems, obtained by perturbing the instantaneous cost function. With this penalization we obtain, for λ ≤ 0, a problem analogous to the original one but with λ_ε < 0. And the following properties hold:
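For concreteness, the objects involved can be sketched as follows. This is only one standard form, assumed here because the truncated formulas above do not display the exact functionals of [1] and [2]; in particular the precise shape of J and of the penalized cost h_ε are assumptions:

```latex
% Controlled dynamics on the torus, starting at x:
\dot y(s) = f(y(s), \alpha(s)), \qquad y(0) = x,

% A minimax cost with cumulative (integrated) running cost,
% and the associated optimal cost:
J(x,\alpha) = \sup_{t \ge 0} \Big( g(y(t)) + \int_0^t h(y(s), \alpha(s)) \, ds \Big),
\qquad
U(x) = \inf_{\alpha \in \mathcal{A}} J(x,\alpha),

% Long-run average cost considered by Alvarez and Barron:
\lambda = \inf_{\alpha} \liminf_{t \to \infty}
          \frac{1}{t} \int_0^t h(y(s), \alpha(s)) \, ds,

% \varepsilon-penalized instantaneous cost, giving
% \lambda_\varepsilon < 0 whenever \lambda \le 0:
h_\varepsilon = h - \varepsilon .
```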
The discretized problem

We discretize the set R^r/Z^r by using a family of sets Ω_k (quasi-uniform triangulations of size k of the torus); we denote by N_k the cardinality of this set. We take A to be a finite set. The Hamilton-Jacobi-Bellman equation for U leads us to the following discretization scheme. For each x ∈ Ω_k and for each control a, we replace the target point associated with the control a, i.e. the point x + √k f(x, a), by the set *
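To make the overall procedure concrete, the following is a minimal illustrative sketch of a fixed-point ("value iteration") solver for a discrete minimax Bellman equation of this type, on a hypothetical one-dimensional grid of the torus R/Z. The grid, the dynamics f, the costs h and g, the penalization ε and the exact form of the Bellman operator are all made-up stand-ins, not the scheme of [1]:

```python
import numpy as np

# Hypothetical discrete problem on the torus R/Z.
N = 64                       # number of nodes (cardinality N_k of Omega_k)
k = 0.01                     # discretization parameter
eps = 0.6                    # penalization of the instantaneous cost
xs = np.arange(N) / N        # grid nodes on the torus
A = (-1.0, 0.0, 1.0)         # finite control set

f = lambda x, a: a * np.ones_like(x)         # made-up dynamics
h = lambda x, a: np.sin(2 * np.pi * x) ** 2  # made-up running cost
g = lambda x: np.cos(2 * np.pi * x)          # made-up observed cost

def interp(U, y):
    """Linear interpolation of grid values U at points y, wrapped on the torus."""
    y = np.mod(y, 1.0) * N
    i = np.floor(y).astype(int) % N
    t = y - np.floor(y)
    return (1.0 - t) * U[i] + t * U[(i + 1) % N]

def bellman(U):
    """One application of an assumed minimax Bellman operator:
       (T U)(x) = min_a max( g(x), k*(h(x, a) - eps) + U(x + k f(x, a)) )."""
    vals = [np.maximum(g(xs), k * (h(xs, a) - eps) + interp(U, xs + k * f(xs, a)))
            for a in A]
    return np.min(vals, axis=0)

# Starting from U_0 = g, the iterates increase monotonically; because the
# penalized running cost has negative long-run average along moving
# trajectories, they stabilize after finitely many iterations.
U = g(xs)
for _ in range(20000):
    U_new = bellman(U)
    if np.max(np.abs(U_new - U)) < 1e-10:
        U = U_new
        break
    U = U_new
```

The iteration is the discrete analogue of the "algorithm which converges in a finite number of steps" mentioned above: the operator is monotone and non-expansive, and the penalization ε is what keeps the fixed point finite, mirroring the role of λ_ε < 0 in the continuous problem.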