In this paper, we study a linear-quadratic optimal control problem for mean-field stochastic differential equations driven by a Poisson random martingale measure and a one-dimensional Brownian motion. Firstly, the existence and uniqueness of the optimal control are obtained by the classical convex variation principle. Secondly, by the duality method, the optimality system, also called the stochastic Hamilton system, which turns out to be a linear fully coupled mean-field forward-backward stochastic differential equation with jumps, is derived to characterize the optimal control. Thirdly, applying a decoupling technique, we establish the connection between two Riccati equations and the stochastic Hamilton system and then prove that the optimal control admits a state feedback representation.
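To fix ideas, in a generic mean-field LQ setting the decoupling via two Riccati equations yields a feedback of the familiar form below; the coefficients R, B and the Riccati solutions P, Π are illustrative assumptions, not the paper's exact notation:

```latex
% Illustrative mean-field LQ feedback: P solves a Riccati equation governing
% the centered state X - E[X], while \Pi solves a second Riccati equation
% governing the mean E[X]; both equations are assumed solvable.
u^{*}_t = -\,R^{-1} B^{\top}\Big( P_t\,\big(X_t - \mathbb{E}[X_t]\big)
          + \Pi_t\,\mathbb{E}[X_t] \Big).
```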
The paper is concerned with optimal control of a backward stochastic differential equation (BSDE) driven by Teugel's martingales and an independent multidimensional Brownian motion, where Teugel's martingales are a family of pairwise strongly orthonormal martingales associated with Lévy processes (see Nualart and Schoutens [14]). We derive necessary and sufficient conditions for the existence of the optimal control by means of convex variation methods and duality techniques. As an application, the optimal control problem of a linear backward stochastic differential equation with a quadratic cost criterion (called the backward linear-quadratic problem, or BLQ problem for short) is discussed and characterized by a stochastic Hamilton system.
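For orientation, a controlled BSDE driven by Teugel's martingales {H^(i)} and a Brownian motion W can be written in the generic form below; the symbols f, ξ, and the integrand processes are our own notational assumptions, not the paper's exact system:

```latex
% Controlled BSDE with Teugel's martingales (illustrative form):
% Y is the state, (Z, Z^{(i)}) the martingale integrands, v the control,
% \xi the terminal datum.
-\,dY_t = f\big(t,\, Y_t,\, Z_t,\, \{Z^{(i)}_t\}_{i \ge 1},\, v_t\big)\,dt
  \;-\; Z_t\,dW_t \;-\; \sum_{i=1}^{\infty} Z^{(i)}_t\,dH^{(i)}_t,
\qquad Y_T = \xi.
```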
The paper is concerned with a stochastic optimal control problem where the controlled systems are driven by Teugel's martingales and an independent multi-dimensional Brownian motion. Necessary and sufficient conditions for an optimal control are proved by the classical method of convex variation under the assumption that the control domain is convex, and the coefficients appearing in the systems are allowed to depend on the control variables. As an application, the linear-quadratic stochastic optimal control problem is studied.
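When the control domain U is convex, the necessary condition from convex variation typically takes the standard variational-inequality form below (stated here under a cost-minimization convention, with a generic Hamiltonian H and adjoint processes (p, q); the notation is illustrative only):

```latex
% Convex-domain first-order condition in variational-inequality form:
% H_u denotes the gradient of the Hamiltonian in the control variable.
\big\langle H_u\big(t,\, x_t,\, u_t,\, p_t,\, q_t\big),\; v - u_t \big\rangle \;\ge\; 0,
\qquad \forall\, v \in U,\quad \text{a.e. } t,\ \mathbb{P}\text{-a.s.}
```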
We study a class of stochastic evolution equations of jump type with random coefficients and the associated optimal control problem. There are three major ingredients. The first is to prove the existence and uniqueness of solutions by a continuous dependence theorem for solutions combined with the parameter extension method. The second is to establish the stochastic maximum principle and verification theorem for our optimal control problem by the classical convex variation method and duality techniques. The third is to present an example of a Cauchy problem for a controlled stochastic partial differential equation with jumps to which our theoretical results apply.
This paper makes a first attempt to investigate the near-optimal control of systems governed by fully nonlinear coupled forward-backward stochastic differential equations (FBSDEs) under the assumption of a convex control domain. By Ekeland's variational principle and some basic estimates for state processes and adjoint processes, we establish the necessary conditions for any ε-near optimal control in a local form with an error order of exactly ε^{1/2}. Moreover, under additional convexity conditions on the Hamiltonian function, we prove that an ε-maximum condition in terms of the Hamiltonian in integral form is sufficient for near-optimality of order ε^{1/2}.
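As an illustrative sketch, the integral-form ε-maximum condition described above can be written as follows; the Hamiltonian H, the near-optimal pair (x^ε, u^ε), the adjoint processes (p, q), and the constant C are generic symbols, not notation fixed by the paper:

```latex
% Near-maximum condition in integral form: the running gap between the
% supremum of H over the control set U and its value along the near-optimal
% control is bounded by C \varepsilon^{1/2}, with C independent of \varepsilon.
\mathbb{E}\int_0^T \Big[ \sup_{u \in U} H\big(t,\, x^{\varepsilon}_t,\, u,\, p_t,\, q_t\big)
  - H\big(t,\, x^{\varepsilon}_t,\, u^{\varepsilon}_t,\, p_t,\, q_t\big) \Big]\,dt
  \;\le\; C\,\varepsilon^{1/2}.
```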