53rd IEEE Conference on Decision and Control 2014
DOI: 10.1109/cdc.2014.7039526
Weighted difference approximation of value functions for slow-discounting Markov Decision Processes

Abstract: Modern applications of the theory of Markov Decision Processes (MDPs) often require frequent decision making, that is, taking an action every microsecond, second, or minute. The infinite-horizon discounted-reward formulation remains relevant for a large portion of these applications, because the actual time span of these problems can be months or years, during which discounting factors due to, e.g., interest rates are of practical concern. In this paper, we show that, for such MDPs with discount rate α close to …
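
The following is a minimal sketch, not taken from the paper, of why the slow-discounting regime described in the abstract arises and why it is numerically delicate. The discount rate rho, the decision interval dt, and the toy two-state, two-action MDP are all illustrative assumptions; the backup is standard value iteration, not the paper's weighted difference approximation.

```python
import numpy as np

# Assumption (not from the paper): with a continuous annual discount
# rate rho and a decision interval dt (in years), the per-step discount
# factor is alpha = exp(-rho * dt). Frequent decisions (small dt) push
# alpha toward 1 -- the "slow-discounting" regime in the abstract.
rho = 0.05                      # e.g. a 5% annual interest rate
dt = 1.0 / (365 * 24 * 3600)    # one decision per second
alpha = np.exp(-rho * dt)       # ~ 1 - 1.6e-9, extremely close to 1

# Standard value iteration on a hypothetical 2-state, 2-action MDP.
# The number of iterations needed for a fixed accuracy grows roughly
# like 1/(1 - alpha), which is why alpha close to 1 is challenging.
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[a, s, s'] transition probs
              [[0.5, 0.5], [0.3, 0.7]]])
R = np.array([[1.0, 0.0],                 # R[a, s] one-step rewards
              [0.5, 0.8]])

alpha_demo = 0.999   # a milder discount factor so the loop finishes quickly
V = np.zeros(2)
for _ in range(100_000):
    Q = R + alpha_demo * (P @ V)          # Q[a, s] = R[a, s] + alpha * E[V(s')]
    V_new = Q.max(axis=0)                 # greedy backup over actions
    if np.max(np.abs(V_new - V)) < 1e-10:
        break
    V = V_new
print("V* ~", V)
```

Even in this toy setting, replacing alpha_demo with the per-second alpha above would require on the order of a billion sweeps for the same tolerance, which motivates approximations tailored to discount factors near 1.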
