2009
DOI: 10.1007/978-3-540-89500-8

Continuous-time Stochastic Control and Optimization with Financial Applications

Abstract: Preface: Dynamic stochastic optimization is the study of dynamical systems that are subject to random perturbations and can be controlled in order to optimize some performance criterion. It arises in decision-making problems under uncertainty, and finds numerous and varied applications in economics, management and finance. Historically handled with Bellman's and Pontryagin's optimality principles, the research on control theory has developed considerably over recent years, inspired in particular by problems emergi…
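As a point of reference for the abstract (a minimal sketch in standard textbook notation, not notation taken from the book), the prototypical problem is a controlled diffusion whose value function satisfies a Hamilton-Jacobi-Bellman equation obtained from Bellman's optimality principle:

\[
dX_s = b(X_s,\alpha_s)\,ds + \sigma(X_s,\alpha_s)\,dW_s,
\qquad
v(t,x) = \sup_{\alpha}\, \mathbb{E}\Big[\int_t^T f(X_s,\alpha_s)\,ds + g(X_T) \,\Big|\, X_t = x\Big],
\]
\[
\partial_t v + \sup_{a}\Big\{ b(x,a)\,\partial_x v + \tfrac12\,\sigma^2(x,a)\,\partial_{xx} v + f(x,a)\Big\} = 0,
\qquad v(T,\cdot) = g.
\]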

Cited by 849 publications (807 citation statements). References: 0 publications.
“…The same proof as for Proposition 6.6.5 in [15] shows that […]. Moreover, the optimal solution to (3.3) is now given by the optimal control to $B(m^*)$, which by the above observation is $\pi_t(\lambda_{m^*})$.…”
Section: A.2 General Results on the Mean-Variance Hedging Problem (mentioning)
confidence: 64%
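For context on the excerpt above, the mean-variance hedging problem that section A.2 refers to has the standard form below (generic notation, assuming a square-integrable claim $H$, initial capital $x$, price process $S$, and admissible self-financing strategies $\vartheta \in \Theta$; this is a sketch of the usual formulation, not a quotation from the cited paper):

\[
\min_{\vartheta \in \Theta}\; \mathbb{E}\Big[\Big(H - x - \int_0^T \vartheta_u\, dS_u\Big)^{2}\Big].
\]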
“…We claim that $\mathbb{E}\big[F(u, T_j)\, e^{Y_u} \mid \mathcal{F}_t\big] = F(t, T_j)\, e^{m_j(u-t) + n_1(u-t) Y_t + n_2(u-t) X_t}$, $t \le u$, (A.15) for suitable deterministic functions $m_j(\tau)$, $n_1(\tau)$, $n_2(\tau)$ with $m_j(0) = n_2(0) = 0$ and $n_1(0) = 1$. Indeed, applying Itô's formula to $M^j$ […] Plugging $M^j_t(u) = F(t, T_j)\, q_j(t, u)$ into (A.17) and then using (A.16), we obtain $dM^j_t(u) = q_j(t, u)\, dF(t, T_j) + F(t, T_j)\, dq_j(t, u)$ […] Applying Itô's formula here and using that $V_t(\lambda)$ and $M^j_t(u)$ are $P$-martingales, it follows that $dV_t(\lambda) = q_j(t, u)\, du\, dF(t, T_j) + C^0_t\, d\widehat{W}^0_t + C^1_t\, d\widehat{W}^1_t + dL_t$ with a $P$-local martingale $L_t$ orthogonal to $F(t)$.…”
Section: A.2 General Results on the Mean-Variance Hedging Problem (mentioning)
confidence: 99%
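The display reconstructed above is an exponential-affine transform formula of the type (A.15). The sketch below illustrates the same kind of identity on a hypothetical one-factor Ornstein-Uhlenbeck model chosen purely for illustration (it is not the dynamics of the cited paper): for dY = -kappa*Y dt + sigma dW one has E[exp(Y_u) | Y_t = y] = exp(m(u-t) + n1(u-t)*y) with n1(0) = 1 and m(0) = 0, which a Monte Carlo estimate can confirm.

import numpy as np

# Hypothetical one-factor illustration of an exponential-affine transform
# formula like (A.15): for an Ornstein-Uhlenbeck factor
#   dY = -kappa * Y dt + sigma dW,
# one has  E[exp(Y_u) | Y_t = y] = exp(m(u - t) + n1(u - t) * y)
# with n1(tau) = exp(-kappa*tau) and m(tau) = sigma^2 * (1 - exp(-2*kappa*tau)) / (4*kappa),
# so that m(0) = 0 and n1(0) = 1, matching the boundary conditions in the excerpt.
# The dynamics and parameter values below are assumptions for illustration only.

rng = np.random.default_rng(0)
kappa, sigma = 1.5, 0.4
y_t, tau = 0.3, 2.0            # current factor value and time to maturity u - t
n_paths, n_steps = 100_000, 250
dt = tau / n_steps

# Euler-Maruyama simulation of Y from time t to time u
y = np.full(n_paths, y_t)
for _ in range(n_steps):
    y += -kappa * y * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)

mc_estimate = np.exp(y).mean()

# Closed-form exponential-affine value
n1 = np.exp(-kappa * tau)
m = sigma**2 * (1.0 - np.exp(-2.0 * kappa * tau)) / (4.0 * kappa)
closed_form = np.exp(m + n1 * y_t)

print(f"Monte Carlo : {mc_estimate:.5f}")
print(f"Affine form : {closed_form:.5f}")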
“…The conditions and proof are provided in Pham (2009). The main difference between the solutions provided by the two approaches is that the HJB equation gives us a forward-backward partial differential equation, while the Pontryagin maximum principle gives us a forward-backward stochastic differential equation.…”
Section: Connection Between HJB and Pontryagin Maximum Principle (mentioning)
confidence: 99%
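In generic notation (a sketch of the standard statements, not the cited paper's setup), the contrast drawn in this excerpt is between the dynamic-programming PDE for the value function and the forward-backward system produced by the stochastic maximum principle:

\[
\partial_t v + \sup_{a}\Big\{ b(x,a)\cdot D_x v + \tfrac12\operatorname{tr}\!\big(\sigma\sigma^{\top}(x,a)\,D_x^2 v\big) + f(x,a)\Big\} = 0,
\qquad v(T,\cdot) = g,
\]
\[
dX_t = b(X_t,\hat\alpha_t)\,dt + \sigma(X_t,\hat\alpha_t)\,dW_t,
\qquad
dp_t = -\,\partial_x \mathcal{H}(X_t,\hat\alpha_t,p_t,q_t)\,dt + q_t\,dW_t,
\quad p_T = \partial_x g(X_T),
\]
where $\mathcal{H}(x,a,p,q) = b(x,a)\cdot p + \operatorname{tr}\big(\sigma^{\top}(x,a)\,q\big) + f(x,a)$ and $\hat\alpha_t$ maximises $a \mapsto \mathcal{H}(X_t,a,p_t,q_t)$; the forward state equation and the backward adjoint equation together form a forward-backward SDE.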
“…Its proof is based on the Hamilton-Jacobi-Bellman verification theorem (see [8, Theorem 3.1]) and is rather standard, so we give only a sketch of it. More information on stochastic control theory can be found, for example, in [3] and [10].…”
Section: Solution by the Hamilton-Jacobi-Bellman Verification Theorem (mentioning)
confidence: 99%
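To make the verification idea concrete, here is a self-contained sketch on the classical Merton problem with power utility (all parameter values are illustrative assumptions): the HJB first-order condition yields a constant candidate fraction invested in the risky asset, and the verification argument says no other admissible strategy improves on it, which the Monte Carlo comparison below reflects for a few constant strategies.

import numpy as np

# Minimal sketch of the verification idea behind an HJB argument, on the
# classical Merton problem with power utility U(x) = x**gamma / gamma.
# Wealth under a constant risky fraction pi follows a geometric Brownian motion:
#   dX = X * ((r + pi*(mu - r)) dt + pi*sigma dW).
# The HJB first-order condition gives the candidate optimum
#   pi_star = (mu - r) / (sigma**2 * (1 - gamma)),
# and verification asserts that no other admissible (here: constant) strategy
# does better. All parameter values below are illustrative assumptions.

rng = np.random.default_rng(1)
mu, r, sigma, gamma = 0.08, 0.02, 0.25, 0.5
x0, T = 1.0, 1.0
n_paths = 400_000

pi_star = (mu - r) / (sigma**2 * (1.0 - gamma))

def expected_utility(pi: float) -> float:
    """Monte Carlo estimate of E[U(X_T)] for a constant risky fraction pi."""
    w_T = np.sqrt(T) * rng.standard_normal(n_paths)
    x_T = x0 * np.exp((r + pi * (mu - r) - 0.5 * pi**2 * sigma**2) * T
                      + pi * sigma * w_T)
    return float(np.mean(x_T**gamma / gamma))

# The candidate from the HJB equation should (approximately) dominate
# nearby constant strategies, which is what the verification theorem asserts.
for pi in (0.5 * pi_star, pi_star, 1.5 * pi_star):
    print(f"pi = {pi:5.3f}  ->  E[U(X_T)] ~ {expected_utility(pi):.6f}")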