2000
DOI: 10.1016/s0167-6911(99)00118-8
Infinite horizon risk sensitive control of discrete time Markov processes with small risk

Cited by 51 publications (31 citation statements)
References 5 publications
“…Note that the iteration in (3.10) already appears in Di Masi & Stettner (1999) p.68. However, there the authors do not consider a finite horizon problem.…”
Section: Finite Horizon Problems (mentioning)
confidence: 99%
“…Cavazos-Cadena & Hernández-Hernández (2011); Cavazos-Cadena & Fernández-Gaucherand (2000); Jaśkiewicz (2007); Di Masi & Stettner (1999)). The infinite horizon discounted classical risk-sensitive MDP and its relation to the average cost problem is considered in Di Masi & Stettner (1999). As far as applications are concerned, risk-sensitive problems can e.g.…”
Section: Introduction (mentioning)
confidence: 99%
“…As mentioned in Section 1, characterizations of the optimal λ-sensitive average cost for MDPs with denumerable or Borel state spaces via the λ-OE have recently been given in Borkar and Meyn (2002) and Di Masi and Stettner (1999), (2000), respectively. Extending Theorem 3.1 to the cases considered in those papers is an interesting problem.…”
Section: Nonstationary Value Iteration In Controlled Markov Chains (mentioning)
confidence: 99%
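The value iteration referred to in the excerpt above can be illustrated on a toy example. The sketch below is not the scheme from any of the cited papers: it assumes a hypothetical two-state uncontrolled chain with made-up transition matrix `P` and cost `c`, and runs the standard log-transformed (multiplicative) Bellman iteration, whose normalizing constant converges to the risk-sensitive average cost.

```python
import numpy as np

# Hypothetical two-state chain; P and c are made-up illustration values,
# not taken from any of the cited papers.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])   # transition matrix
c = np.array([1.0, 4.0])     # running cost per state
theta = 0.5                  # risk factor (risk-averse for theta > 0)

# Relative value iteration for the log-transformed Bellman operator
#   (Tw)(x) = theta * c(x) + log sum_y P(x, y) exp(w(y)),
# normalized at state 0. The normalizing constant lam converges to the
# log of the spectral radius of diag(exp(theta * c)) @ P.
w = np.zeros(2)
for _ in range(200):
    Tw = theta * c + np.log(P @ np.exp(w))
    lam = Tw[0]
    w = Tw - lam

risk_sensitive_average_cost = lam / theta
print(risk_sensitive_average_cost)
```

In a controlled chain the operator would additionally take a minimum over actions at each state; the uncontrolled case suffices to show the geometric convergence of the normalized iterates.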
“…In the past two decades, there has been a renewed interest in this type of cost criteria as, when the 'risk factor' is strictly positive, i.e., in the risk-averse case, the use of the exponential reduces the possibility of rare but devastating large excursions of the state process. Though this criterion has been studied extensively in the literature of Markov decision processes (see, e.g., Borkar and Meyn [13], Cavazos-Cadena and Fernandez-Gaucherand [14], Di Masi and Stettner [15,16,17], Fleming and Hernández-Hernández [21], Fleming and McEneaney [22], Hernández-Hernández and Marcus [24], Whittle [35,36]), the corresponding results on stochastic games seem to be limited (see e.g., Basar [3], El-Karoui and Hamadene [18], Jacobson [27], James et.…”
Section: Introduction (mentioning)
confidence: 99%
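The risk-aversion effect described in the excerpt (the exponential criterion penalizing large cost excursions) can be checked numerically. This is a sketch with invented numbers, not the cited authors' setup: for a finite uncontrolled chain, the risk-sensitive average cost equals (1/theta) times the log of the spectral radius of diag(exp(theta*c)) P, a standard Perron-Frobenius fact, and for theta > 0 it dominates the risk-neutral average cost.

```python
import numpy as np

# Toy two-state chain; P and c are invented for illustration.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
c = np.array([1.0, 4.0])
theta = 0.5  # strictly positive risk factor (risk-averse case)

# Risk-sensitive average cost via the spectral radius of D @ P,
# where D = diag(exp(theta * c)).
D = np.diag(np.exp(theta * c))
rho = max(abs(np.linalg.eigvals(D @ P)))
risk_sensitive_cost = np.log(rho) / theta

# Risk-neutral average cost: stationary distribution times the cost vector.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi = pi / pi.sum()
average_cost = pi @ c

# For theta > 0 the exponential weighs high-cost excursions more
# heavily, so risk_sensitive_cost >= average_cost.
print(risk_sensitive_cost, average_cost)
```

As theta decreases to 0 the two quantities coincide, which is the "small risk" regime named in the article's title.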
“…This change makes our analysis totally novel and substantially different from those in the existing literature. Also, in most of the existing literature in this domain see, e.g., Balaji and Meyn [2], Borkar and Meyn [13], Cavazos-Cadena and Fernandez-Gaucherand [14], Di Masi and Stettner [16], and, Hernández-Hernández and Marcus [24], the 'risk factor' is assumed to be sufficiently small. We make an assumption, for the ergodic game only, on the smallness of the cost function as in Basu and Ghosh [4] which essentially implies that the 'risk factor' cannot be too large.…”
Section: Introduction (mentioning)
confidence: 99%