In this paper we study one of the possible variants of smooth approximation of probability criteria in stochastic programming problems. The research applies to optimization problems for the probability function and the quantile function of a loss functional that depends on a control vector and a one-dimensional absolutely continuous random variable. The main idea of the approximation is to replace the discontinuous Heaviside function in the integral representation of the probability function with a smooth function that is continuous, differentiable, and has easily computable derivatives. An example of such a function is the distribution function of a random variable following the logistic law with zero mean and finite variance, which is a sigmoid. A parameter inversely proportional to the square root of the variance controls the proximity of the original function and its approximation. This replacement yields a smooth approximation of the probability function whose derivatives with respect to the control vector and the other parameters of the problem are easy to compute. The article proves the convergence of the probability function approximation obtained by replacing the Heaviside function with the sigmoid to the original probability function, and an error estimate for this approximation is derived. Next, approximate expressions for the derivatives of the probability function with respect to the control vector and the parameter of the function are obtained, and their convergence to the true derivatives is proved under a number of conditions on the loss functional. Using known relations between the derivatives of probability functions and quantile functions, approximate expressions for the derivatives of the quantile function with respect to the control vector and the probability level are obtained. Examples demonstrate how the proposed estimates can be applied to stochastic programming problems with criteria in the form of a probability function or a quantile function, including the case of a multidimensional random variable.
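A minimal sketch of this construction, written in generic notation that is assumed here rather than taken from the paper (loss functional \(\Phi\), control vector \(u\), level \(\varphi\), density \(f\) of the random variable \(X\), and smoothing parameter \(\theta\)), reads

\[
P_{\varphi}(u) \;=\; \int_{\mathbb{R}} H\bigl(\varphi - \Phi(u,x)\bigr)\, f(x)\,dx
\;\approx\;
\int_{\mathbb{R}} S_{\theta}\bigl(\varphi - \Phi(u,x)\bigr)\, f(x)\,dx,
\qquad
S_{\theta}(t) = \frac{1}{1 + e^{-\theta t}},
\]

where \(H\) is the Heaviside function and \(S_{\theta}\) is the distribution function of a logistic law with zero mean and variance \(\pi^2/(3\theta^2)\), so that larger \(\theta\) corresponds to smaller variance and a tighter approximation. Under conditions that allow differentiation under the integral sign, the smoothed criterion has the derivative

\[
\frac{\partial}{\partial u}\int_{\mathbb{R}} S_{\theta}\bigl(\varphi - \Phi(u,x)\bigr)\, f(x)\,dx
= -\,\theta \int_{\mathbb{R}} S_{\theta}\bigl(\varphi - \Phi(u,x)\bigr)\Bigl(1 - S_{\theta}\bigl(\varphi - \Phi(u,x)\bigr)\Bigr)\,\frac{\partial \Phi(u,x)}{\partial u}\, f(x)\,dx,
\]

which only uses \(S'_{\theta}(t) = \theta\, S_{\theta}(t)\bigl(1 - S_{\theta}(t)\bigr)\) and the gradient of the loss functional.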
In this paper, we provide an approximation method for the probability function and its derivatives that allows the use of first-order numerical algorithms in stochastic optimization problems with objectives of this type. The approximation is based on replacing the indicator function with a smooth, differentiable surrogate, the sigmoid function. We prove the convergence of the approximation to the original function and the convergence of its derivatives to the derivatives of the original. The approximation method is quite general and can be applied to problems beyond stochastic optimization, such as the approximation of the kernel of a probability measure, considered in the present article as an example, and the approximation of confidence absorbing sets.
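To illustrate how such an approximation could feed a first-order method, the following Python sketch estimates the sigmoid-smoothed probability function and its gradient by Monte Carlo and performs plain gradient ascent. This is an illustration only, not the algorithm from the paper: the Monte Carlo estimator, the toy loss, the step size, and the value of the smoothing parameter theta are assumptions made for the example.

```python
import numpy as np
from scipy.special import expit  # numerically stable logistic sigmoid

def smoothed_probability_and_grad(u, phi, loss, loss_grad_u, x_samples, theta=10.0):
    """Monte Carlo estimate of the sigmoid-smoothed probability function
    P_theta(u) ~ E[ S_theta(phi - loss(u, X)) ] and of its gradient in u.
    All names are illustrative (not the paper's notation); theta trades
    approximation tightness (large theta) against gradient smoothness.
    """
    t = phi - np.array([loss(u, x) for x in x_samples])
    s = expit(theta * t)                                    # S_theta(phi - loss)
    grads = np.array([loss_grad_u(u, x) for x in x_samples])
    # d/du S_theta(phi - loss) = -theta * S * (1 - S) * d(loss)/du
    grad = -(theta * s * (1.0 - s))[:, None] * grads
    return s.mean(), grad.mean(axis=0)

# Toy usage: loss(u, x) = (u - x)^2 with X ~ N(0, 1); gradient ascent on the
# smoothed P{loss <= phi}, which is maximized near u = 0.
rng = np.random.default_rng(0)
xs = rng.normal(size=2000)
u = np.array([1.5])
for _ in range(300):
    p, g = smoothed_probability_and_grad(
        u, phi=0.5,
        loss=lambda u, x: (u[0] - x) ** 2,
        loss_grad_u=lambda u, x: np.array([2.0 * (u[0] - x)]),
        x_samples=xs,
    )
    u = u + 0.05 * g  # plain gradient ascent step on the smoothed criterion
```

In this sketch the smoothed objective is differentiable everywhere, whereas the original probability function, built on the indicator, is not, which is precisely what makes first-order methods applicable.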