2014
DOI: 10.1155/2014/201491
The Relationship between the Stochastic Maximum Principle and the Dynamic Programming in Singular Control of Jump Diffusions

Abstract: The main objective of this paper is to explore the relationship between the stochastic maximum principle (SMP for short) and the dynamic programming principle (DPP for short) for singular control problems of jump diffusions. First, we establish necessary as well as sufficient conditions for optimality by using the stochastic calculus of jump diffusions and some properties of singular controls. Then, under smoothness conditions, we give a useful verification theorem and show that the solution of the adjoint equa…

Cited by 5 publications (3 citation statements). References 29 publications.
“…The relationship between these two approaches is the relationship between the derivatives of the value function and the adjoint processes along the optimal state, that is, between HJB equations and stochastic Hamiltonian systems, and more generally between PDEs and SDEs. For recent developments on the relationship between dynamic programming and the maximum principle for stochastic optimal control problems (without delay, but including jump diffusions, Markov switching, singular control, or FBSDE systems), see Framstad et al. [15], Shi and Wu [34], Donnelly [9], Zhang et al. [42], Bahlali et al. [4], Shi and Yu [35], and Chighoub and Mezerdi [8]. It is therefore natural to ask: are there any relations between these two extensively used and important approaches for stochastic optimal control problems with time delay?…”
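As a minimal sketch of the relation described in the statement above (the notation is assumed here, not taken from the paper): for a one-dimensional controlled jump diffusion with volatility $\sigma$, jump amplitude $\gamma$, and a sufficiently smooth value function $V$, the adjoint triple $(p, q, r)$ of the SMP can typically be expressed through derivatives of $V$ along the optimal state $X^*$ (signs depend on the chosen Hamiltonian convention):

```latex
% Sketch under smoothness assumptions; notation (b, \sigma, \gamma, \nu)
% is illustrative, not taken from the cited works.
\begin{aligned}
p(t)   &= V_x\bigl(t, X^*(t)\bigr), \\
q(t)   &= V_{xx}\bigl(t, X^*(t)\bigr)\,\sigma\bigl(t, X^*(t)\bigr), \\
r(t,z) &= V_x\bigl(t, X^*(t^-) + \gamma(t, X^*(t^-), z)\bigr)
          - V_x\bigl(t, X^*(t^-)\bigr).
\end{aligned}
```

This is the sense in which the adjoint processes encode the first and second derivatives of the value function, linking the stochastic Hamiltonian system to the HJB equation.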
“…As mentioned above, two problems must be considered. The first is optimizing this system by invoking the Bellman dynamic programming theorem, Pontryagin's stochastic optimization theorem, and Fleming's theorem, and then transferring the optimal control problem to a corresponding Hamilton-Jacobi-Isaacs equation; this has been resolved by Chighoub et al. [10], Guo et al. [11], and Gomoyunov [12], respectively. The second is constructing a payoff distribution procedure for all agents [13,14].…”
Section: Behavior and the Equilibrium of the Agent in Multi-local-wor…
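For orientation, a minimal sketch of the HJB equation for a controlled jump diffusion (one-dimensional setting and notation assumed here, not taken from the cited works); the Isaacs equation of the statement above replaces the single supremum by a sup-inf over the two players:

```latex
% HJB equation for a one-dimensional controlled jump diffusion.
% Assumed notation: drift b, volatility \sigma, jump amplitude \gamma,
% Levy measure \nu, running cost f, control set U.
V_t(t,x) + \sup_{u \in U} \Bigl\{\, b(x,u)\,V_x(t,x)
  + \tfrac{1}{2}\,\sigma^2(x,u)\,V_{xx}(t,x)
  + \int_{\mathbb{R}} \bigl[\, V\bigl(t, x + \gamma(x,u,z)\bigr) - V(t,x)
      - \gamma(x,u,z)\,V_x(t,x) \,\bigr]\,\nu(dz)
  + f(x,u) \Bigr\} = 0
```

The nonlocal integral term is what distinguishes the jump-diffusion case from the purely diffusive HJB equation.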
“…See also the recent paper by Øksendal and Sulem [29], where Malliavin calculus techniques have been used to define the adjoint process. The relationship between the stochastic maximum principle and dynamic programming has been investigated in [5,15]. See also [28] for some worked examples.…”
Section: Introduction