Controlled Markov Processes
1979
DOI: 10.1007/978-1-4615-6746-2

Cited by 468 publications (320 citation statements)
References 0 publications
“…When either the state set or action set is infinite, even ε-optimal policies may not exist for some ε > 0; Ross [25], Dynkin and Yushkevich [11, Chapter 7], Feinberg [12, §5]. For a finite state set and compact action sets, optimal policies may not exist; Bather [2], Chitashvili [9], and Dynkin and Yushkevich [11, Chapter 7]. For MDPs with finite state and action sets, there exist stationary policies satisfying the optimality equations (see Dynkin and Yushkevich [11, Chapter 7], where these equations are called canonical), and, furthermore, any stationary policy satisfying the optimality equations is optimal. The latter is also true for MDPs with Borel state and action sets if the value and weight (also called bias) functions are bounded; Dynkin and Yushkevich [11, Chapter 7].…”
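The optimality (canonical) equations mentioned in the excerpt above can be illustrated concretely. The sketch below is my own toy example, not taken from the cited works: it runs relative value iteration on a made-up two-state, two-action MDP and checks that the resulting gain g and bias h satisfy g + h(s) = min_a [c(s, a) + Σ_{s'} p(s'|s, a) h(s')], so the greedy stationary policy is average-cost optimal.

```python
import numpy as np

# Toy data (assumptions, not from the cited literature):
# P[a, s, s'] = transition probability, C[s, a] = one-step cost.
P = np.array([
    [[0.9, 0.1], [0.2, 0.8]],   # action 0
    [[0.5, 0.5], [0.7, 0.3]],   # action 1
])
C = np.array([[1.0, 2.0],
              [0.5, 3.0]])

h = np.zeros(2)                 # bias (relative value) estimate, h[0] kept at 0
for _ in range(500):
    # Q[s, a] = c(s, a) + sum_s' P[a, s, s'] * h[s']
    Q = C + np.einsum('ast,t->sa', P, h)
    Th = Q.min(axis=1)
    g, h = Th[0], Th - Th[0]    # state 0 as reference state

# Verify the optimality equations with the converged g and h.
Q = C + np.einsum('ast,t->sa', P, h)
policy = Q.argmin(axis=1)       # greedy stationary policy
residual = np.max(np.abs(Q.min(axis=1) - (g + h)))
```

For this toy chain the iteration converges (the transition matrices are aperiodic and unichain), and the residual of the optimality equations goes to zero, matching the statement that a stationary policy satisfying the equations is optimal.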
“…For a finite-state set and compact action sets, optimal policies may not exist; Bather [2], Chitashvili [9], and Dynkin and Yushkevich [11, Chapter 7].…”
“…Average cost optimality in the homogeneous case has been extensively studied (see for example Puterman [16], Tijms [22], Federgruen and Tijms [6], Ross [17], and Derman [4]). The traditional approach to establishing existence of an average optimal policy is through an optimality equation that is satisfied by the relative value function under certain ergodicity conditions (see for example, Puterman [16], Dynkin and Yushkevich [5], Sennott in Feinberg and Shwartz [8]). Although the nonhomogeneous case is formally included within the homogeneous case by the device of augmenting the state variable with time (see for example Guo et al [9]), the resulting homogeneous MDP has a countably infinite state space which can pose severe analytical and algorithmic challenges.…”
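The time-augmentation device mentioned in the excerpt above can be sketched in a few lines. This is a toy construction of my own, not from the cited works: a nonhomogeneous MDP with time-dependent costs and dynamics becomes homogeneous once the state is the pair (t, s), since time then advances inside the state itself; the price is that the augmented state space is countably infinite.

```python
# Made-up nonhomogeneous model on states {0, 1} (assumptions for illustration).
def cost(t, s, a):
    """Time-dependent one-step cost."""
    return (1 + 0.5 * (t % 2)) * (s + a)

def step(t, s, a):
    """Time-dependent deterministic dynamics."""
    return (s + a + t) % 2

# Homogeneous reformulation: the augmented state is (t, s), and the
# transition law below no longer depends on any external clock.
def aug_cost(state, a):
    t, s = state
    return cost(t, s, a)

def aug_step(state, a):
    t, s = state
    return (t + 1, step(t, s, a))   # time is carried inside the state

# A short trajectory under the constant action a = 1.
state, total = (0, 0), 0.0
for _ in range(4):
    total += aug_cost(state, 1)
    state = aug_step(state, 1)
```

Even in this tiny example the augmented chain visits a fresh state (t, s) at every step, which is exactly the countably infinite state space the excerpt warns about.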
“…We should also note that our approach is restricted to finding optimal average cost policies among the class of all deterministic policies. This restriction can be important since it has been shown that nonrandomized strategies may be outperformed by randomized strategies in the case of the upper limit of average costs (see Dynkin and Yushkevich [5]) while in the case of the lower limit of average costs for a fixed initial state it is sufficient to consider nonrandomized policies (Feinberg [7]). We will return to this point later in the Discussion section of this paper.…”