Wiley Encyclopedia of Operations Research and Management Science 2011
DOI: 10.1002/9780470400531.eorms0042
Approximate Dynamic Programming I: Modeling

Abstract: The first step in solving a stochastic optimization problem is providing a mathematical model. How the problem is modeled can impact the solution strategy. In this article, we provide a flexible modeling framework that uses a classic control‐theoretic framework, avoiding devices such as one‐step transition matrices. We describe the five fundamental elements of any stochastic, dynamic program. Different notational conventions are introduced, and the types of policies that can be used to guide decisions are desc…
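The five elements the abstract refers to are, in Powell's framework, the state variable, the decision, the exogenous information, the transition function, and the objective function. As a minimal sketch (an illustrative inventory toy problem, not from the article; all names here are assumptions), these elements can be laid out directly in code:

```python
import random

random.seed(0)

def transition(state, decision, exog):
    """Transition function S_{t+1} = S^M(S_t, x_t, W_{t+1}):
    inventory after ordering `decision` units and seeing demand `exog`."""
    return max(0, state + decision - exog)

def contribution(state, decision, exog):
    """One-period contribution: revenue from satisfied demand minus order cost."""
    price, cost = 2.0, 1.0
    sales = min(state + decision, exog)
    return price * sales - cost * decision

def order_up_to_policy(state, target=5):
    """A simple policy X^pi(S_t): order up to a fixed target level."""
    return max(0, target - state)

def simulate(horizon=20):
    """Objective: accumulate contributions along one sample path."""
    state, total = 0, 0.0
    for _ in range(horizon):
        decision = order_up_to_policy(state)   # decision x_t
        exog = random.randint(0, 6)            # exogenous information W_{t+1}
        total += contribution(state, decision, exog)
        state = transition(state, decision, exog)
    return total

print(simulate())
```

The point of the separation is that the policy can be swapped out without touching the transition or contribution functions, which is exactly what makes comparing policy classes tractable.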

Cited by 17 publications (10 citation statements)
References 32 publications (23 reference statements)
“…Note that this approach is different from the one commonly taken in reservoir operation optimization studies, where releases from the reservoirs in discrete time periods are considered as decision variables. In fact, when using stage differences as control variables, they are treated as parameters to be optimized, an approach that is sometimes called optimization of myopic policies [25].…”
Section: Initial Model-Based Optimization: 1D-SA with NSGA-II
Mentioning confidence: 99%
“…a Poisson process. Since we no longer have deterministic information about future arrivals, and only the past is fully observable to us, we need to solve a stochastic shortest path problem: a state variable is the minimal history that is necessary and sufficient to compute the decision function [24]. For our case, this includes all the decisions made in the past that can affect the current load of the system and the ZIC power profile in (21).…”
Section: B. Scenario II
Mentioning confidence: 99%
“…ADP is a modeling and algorithmic framework for stochastic optimization problems [14]. In this framework, the multi-period optimization is viewed as a Markov decision process (MDP).…”
Section: Introduction
Mentioning confidence: 99%
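The MDP view referenced in the last statement can be illustrated with a minimal sketch (a toy two-state, two-action chain of my own construction, not from the article): value iteration applies Bellman updates until the value function converges.

```python
# Toy MDP (assumed data, for illustration only):
# P[s][a] = list of (next_state, probability); r[s][a] = expected one-step reward.
states = [0, 1]
actions = [0, 1]
P = {0: {0: [(0, 0.9), (1, 0.1)], 1: [(0, 0.2), (1, 0.8)]},
     1: {0: [(1, 1.0)],           1: [(0, 0.5), (1, 0.5)]}}
r = {0: {0: 0.0, 1: -0.1}, 1: {0: 1.0, 1: 0.5}}
gamma = 0.9  # discount factor

# Value iteration: repeatedly apply the Bellman optimality operator.
V = {s: 0.0 for s in states}
for _ in range(200):
    V = {s: max(r[s][a] + gamma * sum(p * V[s2] for s2, p in P[s][a])
                for a in actions)
         for s in states}
print(V)
```

ADP methods replace the exact table `V` with an approximation (hence the name) when the state space is too large to enumerate, but the Bellman recursion above is the object being approximated.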