This paper refines the necessary optimality conditions for uniformly overtaking optimal control on an infinite horizon in the free-end case. These conditions apply to general non-stationary systems, and the optimal objective value need not be finite. In papers by S.M. Aseev, A.V. Kryazhimskii, V.M. Veliov, and K.O. Besov, a boundary condition for the equations of the Pontryagin Maximum Principle was proposed; each optimal process corresponds to a unique solution satisfying that boundary condition. Following an idea of A. Seierstad, in this paper we prove a more general, geometric version of that boundary condition. We show that this condition is necessary for uniformly overtaking optimal control on an infinite horizon in the free-end case. A number of assumptions under which this condition selects a unique Lagrange multiplier are obtained. Several examples are discussed.
We investigate necessary conditions of optimality for the Bolza-type infinite-horizon problem with free right end. Optimality is understood in the sense of weakly uniformly overtaking optimal control. No prior knowledge of the asymptotic behaviour of trajectories or adjoint variables is required. Following Seierstad's idea, we obtain the necessary boundary condition at infinity in the form of a transversality condition for the Maximum Principle. These transversality conditions may be expressed in integral form through Aseev-Kryazhimskii-type formulae for co-state arcs. The connection between these formulae and the limiting gradients of the payoff function at infinity is identified; several conditions under which the co-state arc can be explicitly specified through these Aseev-Kryazhimskii-type formulae are found. For an infinite-horizon problem of Bolza type, an example is given to clarify the use of the Aseev-Kryazhimskii formula as an explicit expression of the co-state arc.

Keywords: Optimal control; Problem of Bolza type; Infinite horizon problem; Transversality condition at infinity; Uniformly overtaking optimal control; Limiting subdifferential; Unbounded cost; Shadow prices

MSC: 49K15; 49J52; 91B62

The first necessary conditions of optimality for infinite-horizon control problems were proved [28] at the turn of the 1950s-60s by L.S. Pontryagin and his associates (for problems with the right end fixed at infinity). Only later [19] was the Maximum Principle proved for a reasonably broad class of problems, and yet the transversality-type conditions at infinity were not provided. A significant number [19,21,27,34,32] of such conditions were proposed. Thus, the Maximum Principle for the infinite horizon was not complete, and the set of extremals obtained through it was too broad; see [14,19,27,33], [2, Sect. 6], [30, Example 10.2].

The principal obstacle on the way to transversality conditions at infinity is that one must find asymptotic conditions on the adjoint equation (i.e., on a linear system) that are satisfied by at least one, but not by all, of its solutions. This was first done in [6] for linear autonomous control systems through passing to a functional space that allowed one to extend
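For orientation, an Aseev-Kryazhimskii-type formula of the kind referred to above can be sketched, under standard regularity and convergence assumptions, for the discounted problem; the notation below (f, g, rho, Z_*) is ours, chosen for illustration, not the paper's:

```latex
% Problem: maximize  J = \int_0^\infty e^{-\rho t}\, g(x(t),u(t))\,dt
% subject to  \dot x = f(x,u),\ x(0)=x_0,  with free right end.
% Let (x_*,u_*) be the optimal process and Z_*(t) the fundamental
% matrix of the linearized system
%   \dot z = f_x\bigl(x_*(t),u_*(t)\bigr) z, \qquad Z_*(0)=I.
% The co-state arc is then specified in integral form by
\psi(t) \;=\; \int_t^{\infty}
   e^{-\rho s}\,\bigl[Z_*(s)\,Z_*(t)^{-1}\bigr]^{\top}
   g_x\bigl(x_*(s),u_*(s)\bigr)\,ds,
% provided the integral converges.  Differentiation shows that \psi
% solves the adjoint equation
%   \dot\psi = -\,f_x^{\top}\psi \;-\; e^{-\rho s}\, g_x ,
% i.e., exactly one solution of the adjoint system is selected,
% which is the selection effect the transversality condition encodes.
```

This single-solution selection is what distinguishes such integral representations from the bare Maximum Principle, under which every solution of the adjoint system is formally admissible.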
For two-person dynamic zero-sum games (in both discrete and continuous settings), we investigate the limit of the value functions of finite-horizon games with long-run average cost as the time horizon tends to infinity, and the limit of the value functions of λ-discounted games as the discount tends to zero. We prove that the Dynamic Programming Principle for value functions directly leads to the Tauberian Theorem: the existence of a uniform limit of the value functions for one of the families implies that the other also converges uniformly to the same limit. No assumptions on strategies are necessary. To this end, we consider a mapping that takes each payoff to the corresponding value function and preserves the sub- and super-optimality principles (the Dynamic Programming Principle). With their aid, we obtain certain inequalities on the asymptotics of sub- and super-solutions, which lead to the Tauberian Theorem. In particular, we consider the case of differential games without relying on the existence of a saddle point; a very simple stochastic game model is also considered.
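The Tauberian correspondence described above can be sketched in standard notation (the symbols V_T, V_lambda, and the running cost c are ours, introduced for illustration):

```latex
% Two families of normalized values for a zero-sum dynamic game
% with running cost c (schematically; "val" denotes the game value):
% Cesaro (long-run average) values on horizon T:
V_T \;=\; \operatorname{val}\,\frac{1}{T}\int_0^{T} c\bigl(x(s),\cdot\bigr)\,ds,
% Abel (\lambda-discounted) values:
V_\lambda \;=\; \operatorname{val}\,\lambda\int_0^{\infty} e^{-\lambda s}\, c\bigl(x(s),\cdot\bigr)\,ds.
% Tauberian Theorem (uniform version): if either family converges
% uniformly, then so does the other, and to the same limit:
%   \lim_{T\to\infty} V_T = V^{*}
%   \iff
%   \lim_{\lambda\downarrow 0} V_\lambda = V^{*}.
```

Note the normalizations: the 1/T factor and the leading λ both scale total cost to the same unit, which is what makes the two limits comparable.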