In this work, using Moreau envelopes, we define a complete metric on the set of proper lower semicontinuous convex functions. Under this metric, convergence of a sequence of convex functions coincides with epi-convergence. We show that the set of strongly convex functions is dense, yet only of the first (Baire) category. On the other hand, the set of convex functions with strong minima is shown to be of the second category.
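To make the central object concrete: the Moreau envelope of f with parameter λ > 0 is e_λf(x) = inf_y { f(y) + ‖x − y‖²/(2λ) }. The following is a minimal numerical sketch (function names are our own, not from the paper) that approximates the envelope of f = |·| by minimizing over a fine grid and compares it against the known closed form, the Huber function:

```python
import numpy as np

def moreau_envelope(f, x, lam, ys):
    """Approximate e_lam f(x) = min_y f(y) + (x - y)^2 / (2 lam),
    minimizing over the grid of candidate points ys."""
    return float(np.min(f(ys) + (x - ys) ** 2 / (2 * lam)))

def huber(x, lam):
    """Known closed form of the Moreau envelope of the absolute value."""
    return x * x / (2 * lam) if abs(x) <= lam else abs(x) - lam / 2

lam = 0.5
ys = np.linspace(-3.0, 3.0, 200001)  # fine grid over a bounded window
vals = {x: moreau_envelope(np.abs, x, lam, ys) for x in (-2.0, 0.3, 1.5)}
```

The envelope is a smooth (here, Huber-type) approximation of the nonsmooth |·|, which is what makes it a natural tool for metrizing convergence of convex functions.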
Locating proximal points is a component of numerous minimization algorithms. This work focuses on developing a method to find the proximal point of a convex function at a given point, using an inexact oracle. Our method assumes that exact function values are available, but exact subgradients are either unavailable or not useful. We use approximate subgradients to build a model of the objective function, and prove that the method converges to the true proximal point within an acceptable tolerance. The approximate subgradient g_k used at each step k is such that the distance from g_k to the true subdifferential of the objective function at the current iterate is bounded by some fixed ε > 0. The algorithm includes a novel tilt-correct step applied to the approximate subgradient.
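The target object is prox_{λf}(x) = argmin_y { f(y) + ‖y − x‖²/(2λ) }. The sketch below is not the paper's bundle method with tilt correction; it is only a simple subgradient iteration on the prox subproblem, using an oracle whose subgradients are perturbed by noise of size at most ε, to illustrate that inexact subgradients can still locate the prox-point within tolerance. For f = ‖·‖₁ the true prox is the soft-thresholding operator, so the result can be checked:

```python
import numpy as np

rng = np.random.default_rng(0)

def approx_prox(subgrad, x, lam=1.0, eps=1e-3, iters=20000):
    """Approximate prox_{lam f}(x) by subgradient descent on
    y -> f(y) + |y - x|^2 / (2 lam), where the oracle returns a
    subgradient corrupted by noise bounded by eps (the inexact oracle)."""
    y = x.copy()
    for k in range(1, iters + 1):
        g = subgrad(y) + rng.uniform(-eps, eps, size=y.shape)  # inexact subgradient
        y = y - (1.0 / k) * (g + (y - x) / lam)  # diminishing steps; subproblem is strongly convex
    return y

x = np.array([2.0, -0.4, 0.7])
y = approx_prox(np.sign, x, lam=1.0, eps=1e-3)          # np.sign is a subgradient of the l1 norm
soft = np.sign(x) * np.maximum(np.abs(x) - 1.0, 0.0)    # exact prox: soft thresholding
```

The diminishing step size exploits the strong convexity that the quadratic term adds to the subproblem; the residual error is governed by ε, matching the tolerance-level convergence the abstract describes.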
Using the Moore-Penrose pseudoinverse, this work generalizes the gradient approximation technique called the centred simplex gradient to allow sample sets containing any number of points. This approximation technique is called the generalized centred simplex gradient. We develop error bounds and, under a full-rank condition, show that the error bounds have order O(∆²), where ∆ is the radius of the sample set of points used. We establish calculus rules for generalized centred simplex gradients, introduce a calculus-based generalized centred simplex gradient and confirm that error bounds for this new approach are also of order O(∆²). We provide several examples to illustrate the results and some benefits of these new methods.
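A minimal sketch of the idea (the function name and test function are ours): collect centred differences (f(x₀ + sᵢ) − f(x₀ − sᵢ))/2 over the sample directions sᵢ and solve the resulting linear system in the least-squares sense via the pseudoinverse, which accommodates sample sets with any number of points:

```python
import numpy as np

def centred_simplex_gradient(f, x0, S):
    """Generalized centred simplex gradient: least-squares solution, via the
    Moore-Penrose pseudoinverse, of S^T g = delta, where delta holds the
    centred differences over the columns of S (the sample directions)."""
    deltas = np.array([(f(x0 + s) - f(x0 - s)) / 2.0 for s in S.T])
    return np.linalg.pinv(S.T) @ deltas

f = lambda x: np.sin(x[0]) + x[1] ** 2 + x[0] * x[1]
x0 = np.array([0.5, -1.0])
true_grad = np.array([np.cos(0.5) - 1.0, -2.0 + 0.5])

# An overdetermined sample set: 3 directions in R^2, radius Delta ~ 1e-3.
S = 1e-3 * np.array([[1.0, 0.0, 1.0],
                     [0.0, 1.0, -1.0]])
g = centred_simplex_gradient(f, x0, S)
```

Because the differences are centred, even-order terms cancel, which is the source of the O(∆²) accuracy; with ∆ ≈ 10⁻³ the error here is far below ∆.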
The VU-algorithm is a superlinearly convergent method for minimizing nonsmooth, convex functions. At each iteration, the algorithm works with a certain V-space and its orthogonal U-space, such that the nonsmoothness of the objective function is concentrated on its projection onto the V-space, while on the U-space the projection is smooth. This structure allows for an alternation between a Newton-like step where the function is smooth, and a proximal-point step that is used to find iterates with promising VU-decompositions. We establish a derivative-free variant of the VU-algorithm for convex finite-max objective functions. We show global convergence and provide numerical results from a proof-of-concept implementation, which demonstrate the feasibility and practical value of the approach. We also carry out some tests using nonconvex functions and discuss the results.
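For a finite-max objective f(x) = maxᵢ fᵢ(x), one standard characterization takes V at a point to be the span of differences of the active pieces' gradients, and U to be its orthogonal complement. A minimal NumPy sketch under that characterization (the function name and tolerance are our own):

```python
import numpy as np

def vu_spaces(active_grads, tol=1e-10):
    """Orthonormal bases for the V- and U-spaces of a finite-max function
    at a point, from the gradients of its active pieces:
    V = span{g_i - g_1 : i active}, U = orthogonal complement of V."""
    G = np.asarray(active_grads, dtype=float)
    D = G[1:] - G[0]                        # gradient differences span V
    _, s, Vt = np.linalg.svd(D, full_matrices=True)
    r = int(np.sum(s > tol))                # numerical rank of the difference set
    return Vt[:r], Vt[r:]                   # rows: basis of V, basis of U

# f(x) = max(x_0, -x_0) = |x_0| on R^2: both pieces are active at the origin,
# so the kink lies along e_0 (the V-space) and f is smooth along e_1 (the U-space).
V, U = vu_spaces([[1.0, 0.0], [-1.0, 0.0]])
```

On the U-space the function behaves smoothly, enabling the Newton-like step; the derivative-free variant in this work must recover such information without exact gradients, e.g. from function-value-based approximations.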