Lecture Notes in Control and Information Sciences
DOI: 10.1007/bfb0009377

Martingale methods in stochastic control

Abstract: The martingale treatment of stochastic control problems is based on the idea that the correct formulation of Bellman's "principle of optimality" for stochastic minimization problems is in terms of a submartingale inequality: the "value function" of dynamic programming is always a submartingale and is a martingale under a particular control strategy if and only if that strategy is optimal. Local conditions for optimality in the form of a minimum principle can be obtained by applying Meyer's submartingale decom…
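The submartingale formulation described in the abstract can be sketched as follows (a generic outline in placeholder notation, not the paper's exact statement; here c is a running cost, W the dynamic-programming value function, and X the controlled state):

```latex
% Value process along an admissible control u:
% accrued cost up to time t plus the cost-to-go
V_t^{u} \;=\; \int_0^t c(s, X_s, u_s)\,\mathrm{d}s \;+\; W(t, X_t).

% Martingale optimality principle (sketch):
% (i)  for every admissible control u, the process (V_t^{u}) is a submartingale;
% (ii) a control u^* is optimal if and only if (V_t^{u^*}) is a martingale.
```

Applying a submartingale decomposition (as the abstract indicates, via Meyer's decomposition) to (i) is what yields local, pointwise optimality conditions.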

Cited by 42 publications (25 citation statements)
References 66 publications (90 reference statements)
“…Consider the Black-Scholes-Barenblatt equation (17), where ψ > 0 is a (small) parameter and λ ≤ 0 ≤ Λ are suitable functions. This equation corresponds to the problem of finding the smallest initial capital that allows one to superreplicate the option G(S_T) for any volatility process evolving in the random interval…”
Section: Results
Confidence: 99%
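For orientation, a Black-Scholes-Barenblatt equation of the kind this excerpt refers to generically takes the form below (a schematic sketch, not the cited paper's equation (17); v denotes the superreplication price and the supremum runs over the admissible volatility band):

```latex
\partial_t v(t,s)
  + \sup_{\sigma \in [\underline{\sigma},\,\overline{\sigma}]}
    \left\{ \tfrac{1}{2}\,\sigma^2 s^2\,\partial_{ss} v(t,s) \right\} = 0,
\qquad v(T,s) = G(s).
```

The supremum over the volatility interval is what makes the equation fully nonlinear and ties it to the worst-case (superreplication) interpretation quoted above.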
“…The central idea is to find an asymptotic solution to the Hamilton-Jacobi-Bellman-Isaacs (henceforth HJBI) equation associated to the hedging problem and corresponding almost optimal controls. For the convenience of the reader, we provide a derivation of the HJBI equation which starts from a general sufficient criterion for optimality (Proposition 4.1), known as the principle of optimality [17] or the martingale optimality principle [60, V.15]; see also [61, Proposition 4.1] for a version of the martingale optimality principle in the context of a zero-sum game. After that, we explain how the HJBI equation together with an appropriate ansatz can be used to derive candidates for the value and the almost optimal controls of our hedging problems.…”
Section: General Procedures and the Case of Theorem 34
Confidence: 99%
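An HJBI equation for a zero-sum stochastic differential game generically has the form below (a schematic sketch with placeholder control sets U, V, controlled generator L^{u,v}, and running payoff f; not the cited paper's equation):

```latex
\partial_t w(t,x)
  + \sup_{u \in U}\,\inf_{v \in V}
    \left\{ \mathcal{L}^{u,v} w(t,x) + f(x,u,v) \right\} = 0,
\qquad w(T,x) = g(x).
```

The sup-inf structure reflects the two opposing players; when the inner infimum is absent the equation reduces to the standard HJB equation of one-player stochastic control.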
“…Following Bellman, several works focus on finding conditions under which HJB equations have solutions (see the surveys in [14-18]). Establishing such conditions often limits the class of problems that can be handled by the dynamic programming approach [19].…”
Section: Optimal Solution Characterization
Confidence: 99%
“…Owing to the great difficulty of treating the non-Markovian case, the first characterizations begin with control only in the drift (the function α of the SDE (11), (16), (19)) and not in the diffusion coefficient (the function σ of the SDE)…”
Section: unclassified