2017
DOI: 10.14736/kyb-2017-1-0082

Markov decision processes with time-varying discount factors and random horizon

Abstract: This paper deals with Markov decision processes. The optimal control problem is to minimize the expected total discounted cost with a non-constant discount factor: the discount factor is time-varying and may depend on the state and the action. Furthermore, the horizon of the optimization problem is given by a discrete random variable, that is, a random horizon is assumed. Under general conditions on the Markov control model, and using the dynamic programming approach, an optimality equation…
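To make the setting in the abstract concrete, the following Python sketch runs value iteration for a finite MDP with a state-action-dependent discount factor. It is a minimal illustration under assumptions made here, not the paper's construction: the state/action spaces, cost, transition kernel, and discount values are hypothetical, and the random horizon is taken to be geometric so that its continuation probability can simply be folded into the discount.

```python
import numpy as np

# Hypothetical finite MDP: 3 states, 2 actions (illustrative only).
n_states, n_actions = 3, 2
rng = np.random.default_rng(0)

# cost[x, a]: one-stage cost; P[a, x, y]: transition probability x -> y under a.
cost = rng.uniform(0.0, 1.0, size=(n_states, n_actions))
P = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))

# State-action-dependent discount factor, bounded above by some b < 1.
alpha = rng.uniform(0.5, 0.9, size=(n_states, n_actions))

# Assumed geometric random horizon: survival probability p_continue per stage,
# folded into an effective per-stage discount alpha(x, a) * p_continue.
p_continue = 0.95
eff = alpha * p_continue

def value_iteration(tol=1e-10, max_iter=10_000):
    """Iterate the optimality (Bellman) equation
       V(x) = min_a [ c(x, a) + alpha(x, a) * p * sum_y P(y | x, a) V(y) ]."""
    V = np.zeros(n_states)
    policy = np.zeros(n_states, dtype=int)
    for _ in range(max_iter):
        # Q[x, a] = c(x, a) + eff(x, a) * E[V(next state) | x, a]
        Q = cost + eff * np.einsum("axy,y->xa", P, V)
        V_new = Q.min(axis=1)
        policy = Q.argmin(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            break
        V = V_new
    return V_new, policy

V_star, policy = value_iteration()
print("optimal value function:", V_star)
print("greedy policy:", policy)
```

Because every effective discount factor is strictly below 1, the Bellman operator above is a contraction and the iteration converges geometrically; this is the same boundedness condition discussed in the citation statement further down.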

Cited by 1 publication (1 citation statement)
References 24 publications (38 reference statements)
“…Cao (2018) proves the existence of sequential and recursive competitive equilibria in incomplete markets with aggregate shocks in which agents also have state-dependent discount factors. In the mathematics literature, Wei and Guo (2011), Carmon and Shwartz (2009), Minjárez-Sosa (2015), Ilhuicatzi-Roldán et al (2017) and González-Sánchez et al (2019) all address various issues in dynamic programming with state-dependent discounting. However, these papers assume that the discount factor process in the dynamic program is bounded above by some constant b such that b < 1. This is too strict for a range of valuable applications, as discussed above.…”
mentioning
confidence: 99%
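The boundedness condition singled out in this excerpt can be stated explicitly. The LaTeX sketch below uses generic notation chosen here (alpha, b, c, V^pi), which is an assumption and not necessarily the notation of the cited papers.

```latex
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
% Bounded state-action-dependent discounting (generic notation).
\[
  \alpha(x,a) \le b < 1 \quad \text{for all states } x \text{ and actions } a,
\]
% so the accumulated discount after t stages is at most b^t, and the
% discounted objective
\[
  V^{\pi}(x) = \mathbb{E}_{x}^{\pi}\!\left[ \sum_{t=0}^{T}
    \Bigl(\textstyle\prod_{s=0}^{t-1} \alpha(x_s,a_s)\Bigr)\, c(x_t,a_t) \right]
\]
% converges geometrically whenever the one-stage cost c is bounded
% (here T may be infinite or a random horizon).
\end{document}
```

This uniform bound is what the citing authors describe as too strict for some applications, since it rules out discount processes that occasionally reach or exceed 1.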