The low-rank semidefinite programming problem (LRSDP_r) is a restriction of the semidefinite programming problem (SDP) in which a bound r is imposed on the rank of X, and it is well known that LRSDP_r is equivalent to SDP if r is not too small. In this paper, we classify the local minima of LRSDP_r and prove the optimal convergence of a slight variant of the successful, yet experimental, algorithm of Burer and Monteiro [6], which handles LRSDP_r via the nonconvex change of variables X = RR^T. In addition, for particular problem classes, we describe a practical technique for obtaining lower bounds on the optimal solution value during the execution of the algorithm. Computational results are presented on a set of combinatorial optimization relaxations, including some of the largest quadratic assignment SDPs solved to date.
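The change of variables X = RR^T can be illustrated with a toy sketch (not the authors' algorithm): for the max-cut SDP relaxation, max <L, X> subject to diag(X) = 1 and X positive semidefinite, keeping the rows of the factor R at unit length enforces diag(RR^T) = 1 automatically, so simple projected gradient ascent over R stays feasible. The problem data, step size, and iteration count below are all illustrative choices.

```python
import numpy as np

# Hypothetical instance: a random weighted graph and its Laplacian,
# whose max-cut SDP relaxation is  max tr(L X)  s.t. diag(X) = 1, X >= 0.
rng = np.random.default_rng(0)
n, r = 20, 4                                   # problem size and rank bound
W = rng.random((n, n)); W = W + W.T            # random symmetric weights
L = np.diag(W.sum(axis=1)) - W                 # graph Laplacian (PSD)

# Low-rank factor R with unit-norm rows, so that diag(R R^T) = 1.
R = rng.standard_normal((n, r))
R /= np.linalg.norm(R, axis=1, keepdims=True)
obj = lambda R: np.trace(L @ R @ R.T)
obj0 = obj(R)

for _ in range(500):                           # projected gradient ascent on R
    R = R + 1e-3 * (2 * L @ R)                 # gradient of tr(L R R^T) w.r.t. R
    R /= np.linalg.norm(R, axis=1, keepdims=True)  # re-project rows to the sphere

X = R @ R.T                                    # feasible rank-<=r SDP point
```

The renormalization after each step plays the role of the feasibility-preserving structure that makes the nonconvex factorization attractive: the SDP constraints never have to be imposed on X explicitly.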
In this paper, we analyze the iteration complexity of the hybrid proximal extragradient (HPE) method, recently proposed by Solodov and Svaiter, for finding a zero of a maximal monotone operator. One of the key points of our analysis is the use of new termination criteria based on the ε-enlargement of a maximal monotone operator. The advantage of these termination criteria is that their definitions do not depend on the boundedness of the domain of the operator. We then show that Korpelevich's extragradient method for solving monotone variational inequalities falls within the framework of the HPE method. As a consequence, using the complexity analysis of the HPE method, we obtain new complexity bounds for Korpelevich's extragradient method which do not require the feasible set to be bounded, as assumed in a recent paper by Nemirovski. Another feature of our analysis is that the derived iteration-complexity bounds are proportional to the distance from the initial point to the solution set. The HPE framework is also used to obtain the first iteration-complexity result for Tseng's modified forward-backward splitting method for finding a zero of the sum of a monotone Lipschitz continuous map and an arbitrary maximal monotone operator whose resolvent is assumed to be easily computable. Also using the framework of the HPE method, we study the complexity of a variant of a Newton-type extragradient algorithm proposed by Solodov and Svaiter for finding a zero of a smooth monotone function with Lipschitz continuous Jacobian.
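Korpelevich's extragradient step can be sketched on a standard textbook example (an illustrative choice, not taken from the paper): the monotone operator F(x, y) = (y, -x) arising from the bilinear saddle point min_x max_y xy. Plain gradient iterates spiral away from the solution on this problem, while the extra evaluation of F at an extrapolated midpoint restores convergence.

```python
import numpy as np

def F(z):
    """Monotone, 1-Lipschitz operator of the saddle point min_x max_y x*y."""
    x, y = z
    return np.array([y, -x])

z = np.array([1.0, 1.0])          # arbitrary starting point
step = 0.5                        # stepsize below 1/L, L = 1 here
for _ in range(200):
    z_mid = z - step * F(z)       # extrapolation (prediction) step
    z = z - step * F(z_mid)       # correction step uses F at the midpoint
```

A short calculation shows the combined update is a linear map with spectral radius sqrt((1 - step^2)^2 + step^2) < 1 for this operator, so the iterates contract to the unique zero at the origin; a plain gradient step, by contrast, has spectral radius sqrt(1 + step^2) > 1.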
Summary. We introduce a general formulation for dimension reduction and coefficient estimation in the multivariate linear model. We argue that many of the existing methods commonly used in practice can be formulated within this framework and are subject to various restrictions. We then propose a new method that is more flexible and more generally applicable. The proposed method can be formulated as a novel penalized least squares estimate. The penalty that we employ is the Ky Fan norm of the coefficient matrix. Such a penalty encourages sparsity among the singular values and at the same time gives shrinkage coefficient estimates, thus conducting dimension reduction and coefficient estimation simultaneously in the multivariate linear model. We also propose a generalized cross-validation type of criterion for selecting the tuning parameter in the penalized least squares. Simulations and an application in financial econometrics demonstrate the competitive performance of the new method. An extension to the non-parametric factor model is also discussed.
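The mechanism by which such a penalty induces sparsity among singular values can be sketched with the nuclear norm, the Ky Fan norm that sums all singular values, whose proximal operator soft-thresholds the singular values. The following proximal-gradient sketch is illustrative only; the paper's estimator, penalty, and tuning-parameter selection differ in detail, and all problem dimensions and the value of lam below are assumptions.

```python
import numpy as np

def svt(M, tau):
    """Prox of tau * ||.||_*: soft-threshold the singular values of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

# Hypothetical multivariate linear model Y = X B + E with low-rank B.
rng = np.random.default_rng(0)
n, p, q, rank = 100, 8, 6, 2
B_true = rng.standard_normal((p, rank)) @ rng.standard_normal((rank, q))
X = rng.standard_normal((n, p))
Y = X @ B_true + 0.1 * rng.standard_normal((n, q))

# Proximal gradient on  0.5*||Y - X B||_F^2 + lam * ||B||_*.
B = np.zeros((p, q))
lam = 20.0                                    # illustrative tuning parameter
step = 1.0 / np.linalg.norm(X.T @ X, 2)       # 1 / Lipschitz constant
for _ in range(300):
    grad = X.T @ (X @ B - Y)                  # least squares gradient
    B = svt(B - step * grad, step * lam)      # shrink singular values
```

Because the final operation hard-thresholds small singular values to exactly zero, the estimate has reduced rank, performing dimension reduction and shrinkage estimation in one step.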
This paper presents an accelerated variant of the hybrid proximal extragradient (HPE) method for convex optimization, referred to as the accelerated HPE (A-HPE) framework. Iteration-complexity results are established for the A-HPE framework, as well as for a special version of it in which a large stepsize condition is imposed. Two specific implementations of the A-HPE framework are described in the context of a structured convex optimization problem whose objective function consists of the sum of a smooth convex function and an extended real-valued non-smooth convex function. In the first implementation, a generalization of a variant of Nesterov's method is obtained for the case where the smooth component of the objective function has Lipschitz continuous gradient. In the second implementation, an accelerated Newton proximal extragradient (A-NPE) method is obtained for the case where the smooth component of the objective function has Lipschitz continuous Hessian. It is shown that the A-NPE method has a O(1/k^{7/2}) convergence rate, which improves upon the O(1/k^3) convergence rate bound for another accelerated Newton-type method presented by Nesterov. Finally, while Nesterov's method is based on exact solutions of subproblems with cubic regularization terms, the A-NPE method is based on inexact solutions of subproblems with quadratic regularization terms, and hence is potentially more tractable from a computational point of view.
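For context, the O(1/k^2) scheme that the first implementation generalizes, Nesterov's accelerated gradient method, can be sketched on a smooth convex quadratic. This is an illustrative baseline only, not the A-HPE framework itself; the least-squares instance and iteration count are assumptions.

```python
import numpy as np

# Hypothetical smooth convex problem: least squares min_x 0.5*||A x - b||^2.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 20))
b = rng.standard_normal(50)
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

Lip = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
x = y = np.zeros(20)
t = 1.0
for _ in range(500):              # Nesterov's accelerated gradient method
    x_new = y - grad(y) / Lip     # gradient step at the extrapolated point
    t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
    y = x_new + ((t - 1) / t_new) * (x_new - x)  # momentum extrapolation
    x, t = x_new, t_new

x_star = np.linalg.lstsq(A, b, rcond=None)[0]    # reference minimizer
```

The momentum extrapolation is what lifts the plain gradient method's O(1/k) rate to O(1/k^2); the A-NPE method of the paper plays the analogous accelerating role for second-order (Newton-type) proximal steps.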
In this paper, we consider the monotone inclusion problem consisting of the sum of a continuous monotone map and a point-to-set maximal monotone operator with a separable two-block structure, and we introduce a framework of block-decomposition prox-type algorithms for solving it which allows each of the single-block proximal subproblems to be solved in an approximate sense. Moreover, by showing that any method in this framework is also a special instance of the hybrid proximal extragradient (HPE) method introduced by Solodov and Svaiter, we derive corresponding convergence rate results. We also describe some instances of the framework based on specific and inexpensive schemes for solving the single-block proximal subproblems. Finally, we consider some applications of our methodology to establish, for the first time: (i) the iteration-complexity of an algorithm for finding a zero of the sum of two arbitrary maximal monotone operators; and (ii) the ergodic iteration-complexity of the classical alternating direction method of multipliers for a class of linearly constrained convex programming problems with proper closed convex objective functions.
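The classical alternating direction method of multipliers mentioned in (ii) alternates two single-block subproblems with a multiplier update. The sketch below runs it on a standard illustrative instance (not from the paper), the lasso written as min 0.5*||Ax - b||^2 + lam*||z||_1 subject to x - z = 0; the data and the penalty parameters lam and rho are assumptions.

```python
import numpy as np

# Hypothetical lasso instance for the two-block ADMM splitting x - z = 0.
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 10))
b = rng.standard_normal(40)
lam, rho = 0.5, 1.0

x = z = u = np.zeros(10)                      # primal blocks and scaled multiplier
AtA, Atb = A.T @ A, A.T @ b
M = np.linalg.inv(AtA + rho * np.eye(10))     # cached factor for the x-update
for _ in range(500):
    x = M @ (Atb + rho * (z - u))             # first block: quadratic subproblem
    v = x + u
    z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # second block: soft-threshold
    u = u + x - z                             # multiplier (dual) update
```

Each block subproblem is cheap here, the pattern that the framework exploits; the paper's contribution is a complexity guarantee for the ergodic averages of such iterates.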