Abstract. We propose a new concept of generalized differentiation of set-valued maps that captures first order information. This concept encompasses the standard notions of Fréchet differentiability, strict differentiability, calmness and Lipschitz continuity in single-valued maps, and the Aubin property and Lipschitz continuity in set-valued maps. We present calculus rules, sharpen the relationship between the Aubin property and coderivatives, and study how metric regularity and open covering can be refined to have a directional property similar to our concept of generalized differentiation. Finally, we discuss the relationship between the robust form of generalized differentiation and its one-sided counterpart.
Consider the setting where each vertex of a graph has a function, and communications can only occur between vertices connected by an edge. We wish to minimize the sum of these functions. For the case when each function is the sum of a strongly convex quadratic and a convex function, we propose a distributed version of Dykstra's algorithm. The computations to optimize the dual objective function can run asynchronously without a global clock, and in a distributed manner without a central controller. Convergence to the primal minimizer is deterministic rather than probabilistic, and is guaranteed as long as, in each cycle, the edges where two-way communications occur connect all vertices. We also look at an accelerated algorithm, and at an algorithm for the case when the functions on the nodes are not strongly convex.
Abstract. Computing mountain passes is a standard way of finding critical points. We describe a numerical method for finding critical points that is convergent in the nonsmooth case and locally superlinearly convergent in the smooth finite dimensional case. We apply these techniques to describe a strategy for the Wilkinson problem of calculating the distance of a matrix to a closest matrix with repeated eigenvalues. Finally, we relate critical points of mountain pass type to nonsmooth and metric critical point theory.
The Set Intersection Problem (SIP) is the problem of finding a point in the intersection of convex sets. This problem is typically solved by the method of alternating projections. To accelerate the convergence, the idea of using Quadratic Programming (QP) to project a point onto the intersection of halfspaces generated by the projection process was discussed in earlier papers. This paper looks at how one can integrate projection algorithms together with an active set QP algorithm. As a byproduct of our analysis, we show how to accelerate an SIP algorithm involving box constraints, and how to extend a version of the Algebraic Reconstruction Technique (ART) while preserving finite convergence. Lastly, we note that the warmstart property of active set QP algorithms is valuable for the problem of projecting onto the intersection of convex sets.
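As background, the basic method of alternating projections that these QP-based accelerations build on can be sketched in a few lines. The two sets below (a unit disk and a halfspace) and their projection helpers are illustrative choices of ours, not from the paper:

```python
import math

def project_disk(p, r=1.0):
    # Euclidean projection onto the disk of radius r centered at the origin
    d = math.hypot(p[0], p[1])
    if d <= r:
        return p
    return (r * p[0] / d, r * p[1] / d)

def project_halfspace(p, a=(1.0, 0.0), b=0.5):
    # Euclidean projection onto the halfspace {x : <a, x> <= b}
    val = a[0] * p[0] + a[1] * p[1]
    if val <= b:
        return p
    t = (val - b) / (a[0] ** 2 + a[1] ** 2)
    return (p[0] - t * a[0], p[1] - t * a[1])

# alternating projections: project onto each set in turn
x = (3.0, 2.0)
for _ in range(200):
    x = project_halfspace(project_disk(x))
# x now lies in the intersection: inside the unit disk with x[0] <= 0.5
```

Note that alternating projections only finds *some* point of the intersection, not the nearest one; that gap is one motivation for the QP-based refinements discussed in the abstract.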
To minimize or upper-bound the value of a function "robustly", we might instead minimize or upper-bound the "ε-robust regularization", defined as the map from a point to the maximum value of the function within an ε-radius. This regularization may be easy to compute: convex quadratics lead to semidefinite-representable regularizations, for example, and the spectral radius of a matrix leads to pseudospectral computations. For favorable classes of functions, we show that the robust regularization is Lipschitz around any given point, for all small ε > 0, even if the original function is non-Lipschitz (like the spectral radius). One such favorable class consists of the semi-algebraic functions. Such functions have graphs that are finite unions of sets defined by finitely many polynomial inequalities, and are commonly encountered in applications.
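To make the definition concrete, here is a small one-dimensional sketch (our own illustration, not from the paper) that approximates the ε-robust regularization f_ε(x) = max{ f(y) : |y − x| ≤ ε } by maximizing over a grid. For f(x) = |x| the regularization is exactly |x| + ε:

```python
def robust_reg(f, x, eps, samples=2001):
    # crude grid approximation of the eps-robust regularization
    #   f_eps(x) = max { f(y) : |y - x| <= eps }
    return max(f(x - eps + 2.0 * eps * i / (samples - 1))
               for i in range(samples))

# for f(x) = |x|, the eps-robust regularization is |x| + eps,
# which is Lipschitz in x even though growing eps shifts its value
val = robust_reg(abs, 0.3, 0.1)   # approximately 0.3 + 0.1 = 0.4
```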
We show that Dykstra's splitting for projecting onto the intersection of convex sets can be extended to minimize the sum of convex functions and a regularizing quadratic. We give conditions for which convergence to the primal minimizer holds so that more than one convex function can be minimized at a time, the convex functions are not necessarily sampled in a cyclic manner, and the SHQP strategy for problems involving the intersection of more than one convex set can be applied. When the sum does not involve the regularizing quadratic, we discuss an approximate proximal point method combined with Dykstra's splitting to minimize this sum.
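For readers unfamiliar with the classical two-set Dykstra algorithm that this splitting extends, here is a minimal sketch using a disk and a halfspace as hypothetical example sets. The correction terms p and q are what distinguish it from plain alternating projections and make it converge to the *nearest* point of the intersection:

```python
import math

def proj_disk(p):
    # projection onto the unit disk
    d = math.hypot(p[0], p[1])
    return p if d <= 1.0 else (p[0] / d, p[1] / d)

def proj_halfspace(p):
    # projection onto {x : x[0] <= 0.5}
    return (min(p[0], 0.5), p[1])

def dykstra(x0, iters=1000):
    # Dykstra's algorithm: alternate projections with correction
    # terms p, q carried between sweeps
    x, p, q = x0, (0.0, 0.0), (0.0, 0.0)
    for _ in range(iters):
        y = proj_disk((x[0] + p[0], x[1] + p[1]))
        p = (x[0] + p[0] - y[0], x[1] + p[1] - y[1])
        x = proj_halfspace((y[0] + q[0], y[1] + q[1]))
        q = (y[0] + q[0] - x[0], y[1] + q[1] - x[1])
    return x

x = dykstra((3.0, 2.0))
# the nearest point of the intersection to (3, 2) is (0.5, sqrt(3)/2),
# whereas plain alternating projections would stop at a different point
```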
The ε-pseudospectrum of a square matrix A is the set of eigenvalues attainable when A is perturbed by matrices of spectral norm not greater than ε. The pseudospectral abscissa is the largest real part of such an eigenvalue, and the pseudospectral radius is the largest absolute value of such an eigenvalue. We find conditions for the pseudospectrum to be Lipschitz continuous in the set-valued sense and hence find conditions for the pseudospectral abscissa and the pseudospectral radius to be Lipschitz continuous in the single-valued sense. Our approach illustrates diverse techniques of variational analysis. The points at which the pseudospectrum is not Lipschitz (or more properly, does not have the Aubin property) are exactly the critical points of the resolvent norm, which in turn are related to the coalescence points of pseudospectral components.
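Membership in the ε-pseudospectrum can be checked via the standard characterization σ_min(zI − A) ≤ ε. Below is a small self-contained sketch (our illustration, with a hand-rolled 2×2 singular value formula) for a normal matrix, where the ε-pseudospectrum is exactly the union of ε-disks around the eigenvalues:

```python
import math

def smin_2x2(m):
    # smallest singular value of a 2x2 matrix m = ((a, b), (c, d)):
    # singular values squared are the eigenvalues of M* M, so
    # sigma_min^2 = (t - sqrt(t^2 - 4 d2)) / 2 with
    # t = Frobenius norm squared, d2 = |det M|^2
    (a, b), (c, d) = m
    t = abs(a) ** 2 + abs(b) ** 2 + abs(c) ** 2 + abs(d) ** 2
    d2 = abs(a * d - b * c) ** 2
    disc = max(t * t - 4.0 * d2, 0.0)   # clamp rounding noise
    return math.sqrt((t - math.sqrt(disc)) / 2.0)

def in_pseudospectrum(z, A, eps):
    # z lies in the eps-pseudospectrum iff sigma_min(z I - A) <= eps
    (a, b), (c, d) = A
    return smin_2x2(((z - a, -b), (-c, z - d))) <= eps

A = ((1.0, 0.0), (0.0, -2.0))   # normal matrix with eigenvalues 1 and -2
near = in_pseudospectrum(1.05, A, 0.1)    # within 0.1 of eigenvalue 1
far = in_pseudospectrum(1.5, A, 0.1)      # farther than 0.1 from both
```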
We propose first order algorithms for convex optimization problems where the feasible set is described by a large number of convex inequalities that are to be handled by subgradient projections. The first algorithm is an adaptation of a subgradient algorithm, and has convergence rate 1/√k. The second algorithm has convergence rate 1/k when (1) one has linear metric inequality in the feasible set, (2) the objective function is strongly convex, differentiable and has Lipschitz gradient, and (3) it is easy to optimize the objective function on the intersection of two halfspaces. This second algorithm generalizes Haugazeau's algorithm. The third algorithm adapts the second algorithm when condition (3) is dropped. We give examples to show that the second algorithm performs poorly when the objective function is not strongly convex, or when the linear metric inequality is absent.
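As a rough illustration of a generic projected subgradient scheme in the spirit of the first algorithm, here is a sketch with the 1/√k step size associated with the stated rate. The specific objective, subgradient, and projection below are illustrative choices of ours, not from the paper:

```python
import math

def projected_subgradient(f, subgrad, project, x0, iters=2000):
    # step size 1/sqrt(k) is the classical choice behind the
    # O(1/sqrt(k)) convergence rate; track the best iterate seen
    x, best = x0, x0
    for k in range(1, iters + 1):
        x = project(x - subgrad(x) / math.sqrt(k))
        if f(x) < f(best):
            best = x
    return best

f = lambda x: abs(x - 2.0)                    # convex; over [-1, 1] the minimizer is x = 1
subgrad = lambda x: -1.0 if x < 2.0 else 1.0  # a subgradient of f
project = lambda x: max(-1.0, min(1.0, x))    # projection onto [-1, 1]
x_best = projected_subgradient(f, subgrad, project, 0.0)
```

In the paper the projection onto the feasible set is itself replaced by subgradient projections onto halfspaces generated from the convex inequalities; the sketch above uses an exact projection only to keep the example short.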