Overview

Many scientifically relevant problems share two salient features: they are dynamic, and they involve uncertainty. Hence, it is natural that researchers would be concerned with the study of events taking place in stochastic dynamic settings. In these settings, the decision-maker will typically want to select optimal paths for an array of control variables in order to maximize or minimize the current value of a sequence of future expected outcomes. In this article, we defend the argument that exploring techniques and applications in the field of stochastic optimal control theory is vital for the advancement of applied science. Although solid steps have been taken in the last few years to consolidate the theory of stochastic controls and to make it an adequate tool for addressing important problems in multiple fields of knowledge, further work is still necessary to gain new insights and to unveil new results in an area of extreme complexity, where the search for efficient paths is often hampered by the high degree of underlying uncertainty.
The benchmark optimization problem

Stochastic optimal control problems are concerned with the intertemporal optimization (maximization or minimization) of an objective function subject to one or more constraints that, in continuous time, take the form of stochastic differential equations. The objective function typically corresponds to the expected value of a sequence of utility levels that range from the initial date t=0 to some future horizon. In the case of economic problems, the horizon is commonly assumed to be infinite and the future is discounted at a constant rate ρ>0. Taking the autonomous case, for which time is not an explicit argument of the problem's functional, the objective function acquires the following shape,

\[ \max_{u(t)} \; E_0\!\left[ \int_0^{\infty} e^{-\rho t} f\big(x(t), u(t)\big)\, dt \right], \]

where f is assumed to be a real-valued continuous and differentiable function. Two categories of variables constitute the arguments of f, namely the state variables, \(x(t) \in \mathbb{R}^n\), and the control variables, \(u(t) \in \mathbb{R}^m\). State variables are those whose laws of motion are determined by the differential equations corresponding to the problem's constraints; control variables are the ones that the decision-maker is able to control in order to pursue the specified dynamic goal.

The constraints underlying the optimization problem are, as mentioned above, stochastic differential equations. A generic specification is the following,

\[ dx(t) = g\big(x(t), u(t)\big)\, dt + \sigma\big(x(t), u(t)\big)\, dB(t), \qquad x(0) = x_0, \]

with B(t) a standard Brownian motion. If, for all 0≤s
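To make the benchmark problem concrete, the sketch below estimates the discounted objective \(E_0[\int_0^T e^{-\rho t} f(x,u)\,dt]\) by Monte Carlo, simulating the state equation with an Euler–Maruyama discretization. The specific drift g(x,u) = u − 0.1x, constant volatility σ = 0.2, running utility f(x,u) = log x − ½u², and the constant control rule are illustrative assumptions chosen for this sketch, not forms taken from the article; the horizon is also truncated to a finite T for simulation purposes.

```python
import math
import random

def estimate_objective(u, x0=1.0, rho=0.05, T=20.0, dt=0.02,
                       n_paths=500, seed=0):
    """Monte Carlo estimate of E[ int_0^T e^{-rho t} f(x,u) dt ] subject to
    dx = g(x,u) dt + sigma dB, via Euler-Maruyama with a constant control u.

    Illustrative assumptions (not from the article):
      g(x,u)     = u - 0.1*x          (mean-reverting drift)
      sigma(x,u) = 0.2                (constant volatility)
      f(x,u)     = log(x) - 0.5*u**2  (running utility, clipped near 0)
    """
    rng = random.Random(seed)
    n_steps = int(T / dt)
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(n_paths):
        x = x0
        J = 0.0
        for k in range(n_steps):
            t = k * dt
            # accumulate discounted running utility along this sample path
            J += math.exp(-rho * t) * (math.log(max(x, 1e-9)) - 0.5 * u * u) * dt
            # Euler-Maruyama step: drift plus Gaussian diffusion increment
            x += (u - 0.1 * x) * dt + 0.2 * sqrt_dt * rng.gauss(0.0, 1.0)
        total += J
    return total / n_paths
```

Comparing two constant controls, e.g. `estimate_objective(0.2)` versus `estimate_objective(0.0)`, illustrates the decision-maker's task: under these assumed functional forms the positive control props the state up and yields a higher expected discounted utility, and a full solution would optimize over feedback rules u(x) rather than constants.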