2017
DOI: 10.1007/s13235-017-0227-5

Tauberian Theorem for Value Functions

Abstract: For two-person dynamic zero-sum games (in both discrete and continuous settings), we investigate the limit of value functions of finite-horizon games with long-run average cost as the time horizon tends to infinity, and the limit of value functions of λ-discounted games as the discount tends to zero. We prove that the Dynamic Programming Principle for value functions directly leads to the Tauberian theorem: the existence of a uniform limit of the value functions for one of the families implies that the other …
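For orientation, the following minimal LaTeX sketch spells out the two families of value functions the abstract refers to, in the continuous-time setting. The notation (running cost g, controls a and b of the two players, state trajectory x, and "val" for the game value) is assumed here for illustration and is not taken from the paper itself.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Schematic notation (assumed): running cost g, controls a and b of the two players,
% state trajectory x(.), and "val" standing for the value of the zero-sum game.
\begin{align*}
  % finite-horizon game with long-run average (Cesàro) payoff
  V_T(x_0) &= \operatorname{val}\ \frac{1}{T}\int_0^T g\bigl(x(t),a(t),b(t)\bigr)\,dt , \\
  % \lambda-discounted (Abel) game
  W_\lambda(x_0) &= \operatorname{val}\ \lambda\int_0^\infty e^{-\lambda t}\, g\bigl(x(t),a(t),b(t)\bigr)\,dt .
\end{align*}
% Tauberian statement described in the abstract: uniform convergence of one family
% implies uniform convergence of the other, to the same limit function.
\[
  V_T \xrightarrow[\,T\to\infty\,]{\text{uniformly}} V
  \quad\Longleftrightarrow\quad
  W_\lambda \xrightarrow[\,\lambda\to 0^{+}\,]{\text{uniformly}} V .
\]
\end{document}

The discrete-time setting treated in the paper is analogous, with the integrals replaced by Cesàro and Abel averages of the running cost over stages.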

Cited by 15 publications (11 citation statements). References 47 publications.
“…The ergodic case, when these limits are constants (that is, when they do not depend on the initial condition y_0), was studied, for example, in [3,5,7,17] (see also references therein). Results for the non-ergodic case were obtained in [12,22,23,28,31,32,33]. In particular, it was results of [12] that were instrumental for obtaining the IDLP representation for the aforementioned limits for systems evolving in continuous time in [10].…”
Section: Introduction (mentioning)
confidence: 99%
“…If a function ϕ is even, then (3/4)ϕ(yu) + (1/4)ϕ(−yu) − ϕ(y) ≡ 0 (since u is either equal to 1 or to −1). Therefore, (29) is satisfied for all γ ∈ P(G), while (30) […] Thus, the constraints (30) ensure that the occupational measures γ generated by the state-control trajectories satisfy the property γ(Y \ Y_{y_0}) = 0. This is consistent with the system's dynamics (see (24)), according to which the only states attended by the state trajectories are y_0 and −y_0.…”
Section: Example (mentioning)
confidence: 99%
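A one-step verification of the identity quoted in this excerpt, written out in LaTeX under the assumptions stated there (ϕ even, u equal to 1 or −1):

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Since u is either 1 or -1, yu equals y or -y; evenness of \varphi gives
% \varphi(yu) = \varphi(-yu) = \varphi(y) in both cases, so the combination collapses:
\[
  \tfrac{3}{4}\,\varphi(yu) + \tfrac{1}{4}\,\varphi(-yu) - \varphi(y)
  = \tfrac{3}{4}\,\varphi(y) + \tfrac{1}{4}\,\varphi(y) - \varphi(y)
  = 0 .
\]
\end{document}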
“…In this paper, we study asymptotic properties of problems of control of stochastic discrete time systems with time averaging and time discounting optimality criteria, and we establish that the Cesàro and Abel limits of the optimal values in such problems can be evaluated with the help of a certain infinite-dimensional (ID) linear programming (LP) problem and its dual. Note that matters related to the existence and the equality of such limits have been investigated by many; see, e.g., [1], [2], [6], [8], [14], [18], [21], [22], [30], [33], [34], [37], [41]. A distinct feature of the present paper is that the Cesàro and Abel limits of the optimal values are evaluated with the help of LP tools.…”
Section: Introduction (mentioning)
confidence: 99%
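As a rough illustration of the LP approach this excerpt describes, here is a schematic infinite-dimensional linear program over occupational measures. All notation (state y, control u, transition map f, noise ξ, test functions ϕ) is assumed for illustration and does not reproduce the cited paper's exact formulation.

\documentclass{article}
\usepackage{amsmath}
\usepackage{amssymb}
\begin{document}
% Schematic IDLP over occupational measures \gamma on the state-control space Y x U
% (all notation assumed). The equality constraints express stationarity of \gamma
% with respect to the stochastic dynamics y_{t+1} = f(y_t, u_t, \xi_t):
\begin{align*}
  \min_{\gamma \ge 0}\quad & \int_{Y\times U} g(y,u)\,\gamma(dy,du) \\
  \text{s.t.}\quad & \int_{Y\times U} \Bigl( \mathbb{E}\bigl[\varphi\bigl(f(y,u,\xi)\bigr)\bigr] - \varphi(y) \Bigr)\,\gamma(dy,du) = 0
      \quad \text{for every test function } \varphi , \\
  & \gamma(Y\times U) = 1 .
\end{align*}
% The Cesàro and Abel limits of the optimal values are then characterized through
% this program and its dual, in the spirit of the LP approach described above.
\end{document}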
“…(called Cesàro and Abel limits, respectively). Matters related to the existence and the equality of Cesàro and Abel limits of the optimal values have been addressed by many authors (see, e.g., [1,5,6,11,16,17,22,24,25,26]). A special feature and the novelty of our consideration is that we are making use of occupational measure reformulations of problems (3) and (4) and utilize the LP duality theory.…”
Section: Introduction (mentioning)
confidence: 99%