Abstract. This paper introduces a parallel and distributed extension of the alternating direction method of multipliers (ADMM) for solving the convex problem $\min\; f_1(x_1)+\cdots+f_N(x_N)$ subject to $A_1x_1+\cdots+A_Nx_N=c$, $x_i\in\mathcal{X}_i$, $i=1,\ldots,N$. The algorithm decomposes the original problem into $N$ smaller subproblems and solves them in parallel at each iteration. This Jacobian-type algorithm is well suited for distributed computing and is particularly attractive for solving certain large-scale problems. This paper introduces a few novel results. First, it shows that extending ADMM straightforwardly from the classic Gauss-Seidel setting to the Jacobian setting, from 2 blocks to $N$ blocks, preserves convergence if the matrices $A_i$ are mutually near-orthogonal and have full column rank. Second, for general matrices $A_i$, this paper proposes adding proximal terms of different kinds to the $N$ subproblems so that the subproblems can be solved in flexible and efficient ways and the algorithm converges globally at a rate of $o(1/k)$. Third, a simple technique is introduced to improve some existing convergence rates from $O(1/k)$ to $o(1/k)$. In practice, some conditions in our convergence theorems are conservative. Therefore, we introduce a strategy for dynamically tuning the parameters of the algorithm, leading to substantial acceleration of convergence in practice. Numerical results are presented to demonstrate the efficiency of the proposed method in comparison with several existing parallel algorithms. We implemented our algorithm on Amazon EC2, an on-demand public computing cloud, and report its performance on very large-scale basis pursuit problems with distributed data.
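To make the proximal Jacobi update concrete, here is a minimal NumPy sketch for the distributed basis pursuit setting mentioned above, where each block objective is the $\ell_1$ norm of a column block of variables. The prox-linear choice of proximal term, the dual step length `gamma`, and the conservative choice of `tau` are illustrative assumptions, not the paper's exact parameter rules.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_jacobi_admm_bp(A_blocks, b, rho=1.0, gamma=1.0, iters=500):
    """Sketch of a proximal Jacobi ADMM iteration for
        minimize  sum_i ||x_i||_1   s.t.   sum_i A_i x_i = b.

    Each block uses a prox-linear proximal term (tau_i*I - rho*A_i^T A_i),
    so its subproblem reduces to a single soft-thresholding step, and all
    blocks can be updated in parallel.
    """
    N = len(A_blocks)
    x = [np.zeros(Ai.shape[1]) for Ai in A_blocks]
    y = np.zeros(b.shape[0])                                  # dual variable
    # Conservative illustrative choice of tau_i (the paper derives sharper conditions).
    tau = [1.01 * rho * N * np.linalg.norm(Ai, 2) ** 2 for Ai in A_blocks]
    for _ in range(iters):
        r = sum(Ai @ xi for Ai, xi in zip(A_blocks, x)) - b   # shared residual
        x = [shrink(xi - (rho / ti) * Ai.T @ (r + y / rho), 1.0 / ti)
             for Ai, xi, ti in zip(A_blocks, x, tau)]         # parallel block updates
        r = sum(Ai @ xi for Ai, xi in zip(A_blocks, x)) - b
        y = y + gamma * rho * r                               # dual update
    return np.concatenate(x), y
```

In a genuinely distributed run, each block update would live on its own worker, and only the products $A_i x_i$ need to be communicated to form the shared residual.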
In this paper, we first study $\ell_q$ minimization and its associated iterative reweighted algorithm for recovering sparse vectors. Unlike most existing work, we focus on unconstrained $\ell_q$ minimization, for which we demonstrate a few advantages with noisy measurements and/or approximately sparse vectors. Inspired by the results in [Daubechies et al., Comm. Pure Appl. Math., 63 (2010), pp. 1-38] for constrained $\ell_q$ minimization, we start with a preliminary yet novel analysis of unconstrained $\ell_q$ minimization, which includes convergence, an error bound, and local convergence behavior. Then, the algorithm and analysis are extended to the recovery of low-rank matrices. The algorithms for both vector and matrix recovery have been compared to some state-of-the-art algorithms and show superior performance in recovering sparse vectors and low-rank matrices.
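As an illustration of the iteratively reweighted scheme for an unconstrained smoothed $\ell_q$ model, the following sketch assumes the objective $\|Ax-b\|_2^2 + \lambda\sum_i (x_i^2+\varepsilon^2)^{q/2}$ and a simple geometric decrease of the smoothing parameter $\varepsilon$; the paper's actual $\varepsilon$-update rule and parameter choices may differ.

```python
import numpy as np

def irls_lq_unconstrained(A, b, q=0.5, lam=1e-3, eps=1.0, iters=100):
    """Illustrative IRLS sketch for the unconstrained smoothed lq model
        minimize_x  ||Ax - b||_2^2 + lam * sum_i (x_i^2 + eps^2)^(q/2).

    Each iteration replaces the lq term by a weighted quadratic surrogate
    at the current iterate and solves the resulting regularized least-squares
    problem; eps is shrunk by a simple illustrative schedule.
    """
    x = np.linalg.lstsq(A, b, rcond=None)[0]        # least-squares initialization
    AtA, Atb = A.T @ A, A.T @ b
    for _ in range(iters):
        w = (x ** 2 + eps ** 2) ** (q / 2 - 1)      # IRLS weights
        x = np.linalg.solve(AtA + (lam * q / 2) * np.diag(w), Atb)
        eps = max(eps * 0.9, 1e-8)                  # gradually reduce smoothing
    return x
```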
This paper studies the long-existing idea of adding a nice smooth function to "smooth" a nondifferentiable objective function in the context of sparse optimization, in particular, the minimization of $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$, where $x$ is a vector, as well as the minimization of $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$, where $X$ is a matrix and $\|X\|_*$ and $\|X\|_F$ are the nuclear and Frobenius norms of $X$, respectively. We show that they let sparse vectors and low-rank matrices be efficiently recovered. In particular, they enjoy exact and stable recovery guarantees similar to those known for the minimization of $\|x\|_1$ and $\|X\|_*$ under conditions on the sensing operator such as its null-space property, restricted isometry property (RIP), spherical section property, or "RIPless" property. To recover a (nearly) sparse vector $x_0$, minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ returns (nearly) the same solution as minimizing $\|x\|_1$ whenever $\alpha \ge 10\|x_0\|_\infty$. The same relation also holds between minimizing $\|X\|_* + \frac{1}{2\alpha}\|X\|_F^2$ and minimizing $\|X\|_*$ for recovering a (nearly) low-rank matrix $X_0$ if $\alpha \ge 10\|X_0\|_2$. Furthermore, we show that the linearized Bregman algorithm, as well as its two fast variants, for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$ enjoys global linear convergence as long as a nonzero solution exists, and we give an explicit rate of convergence. The convergence property does not require a sparse solution or any properties of $A$. To the best of our knowledge, this is the best known global convergence result for first-order sparse optimization algorithms.
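For concreteness, here is a minimal sketch of the basic linearized Bregman iteration for minimizing $\|x\|_1 + \frac{1}{2\alpha}\|x\|_2^2$ subject to $Ax = b$, written as gradient ascent on the dual; the stepsize below is an illustrative conservative choice, and the two fast variants mentioned above are not shown.

```python
import numpy as np

def shrink(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def linearized_bregman(A, b, alpha, iters=2000):
    """Sketch of the basic linearized Bregman iteration for
        minimize  ||x||_1 + (1/(2*alpha)) ||x||_2^2   s.t.   Ax = b,
    viewed as gradient ascent on the Lagrange dual.
    """
    y = np.zeros(A.shape[0])                        # dual variable
    h = 1.0 / (alpha * np.linalg.norm(A, 2) ** 2)   # conservative stepsize
    for _ in range(iters):
        x = alpha * shrink(A.T @ y, 1.0)            # primal iterate from dual
        y = y + h * (b - A @ x)                     # dual gradient step
    return alpha * shrink(A.T @ y, 1.0)
```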
Introduction. Sparse vector recovery and low-rank matrix recovery problems have drawn much attention from researchers in different fields over the past several years. They have wide applications in compressive sensing, signal/image processing, machine learning, etc. The fundamental problem of sparse vector recovery is to find the vector with (nearly) the fewest nonzero entries satisfying an underdetermined linear system $Ax = b$, and that of low-rank matrix recovery is to find a matrix of (nearly) the lowest rank satisfying an underdetermined system $\mathcal{A}(X) = b$, where $\mathcal{A}$ is a linear operator. To recover a sparse vector $x_0$, a well-known model is the basis pursuit (BP) problem [12]: $\min_x \|x\|_1$ subject to $Ax = b$.
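As a small worked example of the BP model, the sketch below solves it as an equivalent linear program using a generic LP solver (scipy.optimize.linprog); the random data and problem sizes are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 40, 100, 5                          # measurements, dimension, sparsity
A = rng.standard_normal((m, n))
x0 = np.zeros(n)
x0[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
b = A @ x0

# BP as an LP in (x, t): minimize sum(t)  s.t.  Ax = b,  -t <= x <= t.
c = np.concatenate([np.zeros(n), np.ones(n)])
A_eq = np.hstack([A, np.zeros((m, n))])
A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),    #  x - t <= 0
                  np.hstack([-np.eye(n), -np.eye(n)])])  # -x - t <= 0
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n), A_eq=A_eq, b_eq=b,
              bounds=[(None, None)] * n + [(0, None)] * n)
x_hat = res.x[:n]
print("recovery error:", np.linalg.norm(x_hat - x0))
```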