Given an undirected graph G = (N, E) of agents N = {1, ..., N} connected by edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Φ_i}_{i∈N} while respecting privacy requirements, where Φ_i = ξ_i + f_i belongs to agent i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient method, DPGA, for consensus optimization over both unweighted and weighted static (undirected) communication networks. In each iteration, agent i computes the prox map of ξ_i and the gradient of f_i, followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, which can only access noisy estimates of ∇f_i at each agent i. This computational model abstracts a number of applications in distributed sensing, machine learning and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA with rates O(1/t) and O(1/√t), respectively.

This computational setting, i.e., decentralized consensus optimization, appears as a generic model for various applications in signal processing, e.g., [2]-[6], machine learning, e.g., [7]-[9], and statistical inference, e.g., [10], [11]. Clearly, (3) can also be solved in a "centralized" fashion by communicating all the private functions Φ_i to a central node and solving the overall problem at this node. However, such an approach can be very expensive both from communication and computation standpoints.

January 3, 2017 DRAFT

… a solution x = [x_i]_{i∈N} such that its consensus violation satisfies max{‖x_i − x_j‖_2 : (i, j) ∈ E} ≤ ε within O(1/ε) iterations, and its suboptimality is bounded from above as Σ_{i∈N} Φ_i(x_i) − F* ≤ ε within O(1/ε²) iterations; however, since the step size is constant, neither the suboptimality nor the consensus error is guaranteed to decrease further.
Although these algorithms handle more general problems and assume mere convexity of each Φ_i, this generality comes at the cost of O(1/ε²) complexity bounds, and they also tend to be very slow in practice. At the other extreme, under much stronger conditions, namely that each Φ_i is smooth and has bounded gradients, Jakovetic et al. [19] developed a fast distributed gradient method, D-NC, with an O(log(1/ε)/√ε) convergence rate in communication rounds. For the quadratic loss, which is one of the most commonly used loss functions, the bounded gradient assumption does not hold. In terms of distributed applicability, D-NC requires all the nodes in N to agree on a doubly stochastic weight matrix W ∈ R^{|N|×|N|}; it also assumes that the second largest eigenvalue of W is known globally among all the nodes, which is not attainable for very large-scale, fully distributed networks. D-NC is a two-loop algorithm: in each outer loop k, each node computes its gradient once, followed by O(log(k)) communication rounds. In the rest, we briefly discuss those algorithms that balance the trade-off between the iterati…
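The per-iteration structure described above, a local prox step on ξ_i, a gradient step on f_i, and averaging with neighbors, can be illustrated with a generic distributed proximal gradient sketch. This is not the exact DPGA update from the paper; the quadratic losses f_i(x) = 0.5(x − a_i)², the ℓ1 term ξ_i(x) = λ|x|, the 3-node path graph, and all parameter values are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, tau):
    # shrinkage: prox map of tau * |.|
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Illustrative instance (not from the paper): agent i privately holds the
# smooth part f_i(x) = 0.5 * (x - a_i)^2 and the nonsmooth part xi_i(x) = lam * |x|.
a = np.array([1.0, 2.0, 3.0])
lam, gamma, T = 0.1, 0.02, 5000

# Metropolis mixing matrix for the path graph 1-2-3 (doubly stochastic).
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

x = a.copy()                             # each agent starts from its local data
for _ in range(T):
    y = W @ x - gamma * (x - a)          # average with neighbors, local gradient step on f_i
    x = soft_threshold(y, gamma * lam)   # local prox step on gamma * xi_i

consensus_violation = x.max() - x.min()
```

With a constant step size the iterates settle in a neighborhood of the consensus minimizer (here near mean(a) − λ = 1.9) rather than converging exactly, matching the constant-step behavior discussed above.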
In this paper, "chance optimization" problems are introduced, where one aims at maximizing the probability of a set defined by polynomial inequalities. These problems are, in general, nonconvex and computationally hard. With the objective of developing systematic numerical procedures to solve such problems, a sequence of convex relaxations based on the theory of measures and moments is provided, whose optimal values are shown to converge to the optimal value of the original problem. Indeed, we provide a sequence of semidefinite programs of increasing dimension that approximate the solution of the original problem to arbitrary precision. To be able to efficiently solve the resulting large-scale semidefinite relaxations, a first-order augmented Lagrangian algorithm is implemented. Numerical examples are presented to illustrate the computational performance of the proposed approach.
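To make the problem class concrete, the following brute-force sketch estimates, by Monte Carlo sampling, the probability of a set defined by a polynomial inequality as a function of the decision variable, and maximizes it over a grid. This baseline only illustrates the chance-optimization objective, not the moment/SDP relaxation hierarchy the abstract describes; the specific polynomial, distribution, and grid are illustrative assumptions.

```python
import random

random.seed(0)

# Uncertainty xi ~ Uniform[0, 2]; decision variable x.
# Chance objective: p(x) = P( 1 - (xi - x)^2 >= 0 ), i.e. P(|xi - x| <= 1).
samples = [random.uniform(0.0, 2.0) for _ in range(20000)]

def prob(x):
    # Monte Carlo estimate of the probability of the polynomial set
    hits = sum(1 for xi in samples if 1.0 - (xi - x) ** 2 >= 0.0)
    return hits / len(samples)

# Crude grid search over candidate decisions (a stand-in for an actual solver).
grid = [i / 10 for i in range(-20, 31)]
best_x = max(grid, key=prob)
best_p = prob(best_x)
```

Here the optimizer is x = 1, which centers the unit interval on the support of xi; the SDP hierarchy in the paper replaces this sampling-and-enumeration step with a sequence of tractable convex problems.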
Abstract. We propose a First-order Augmented Lagrangian algorithm (FAL) for solving the basis pursuit problem. FAL computes a solution to this problem by inexactly solving a sequence of ℓ1-regularized least squares sub-problems. These sub-problems are solved using an infinite memory proximal gradient algorithm wherein each update reduces to "shrinkage" or constrained "shrinkage". We show that FAL converges to an optimal solution of the basis pursuit problem whenever the solution is unique, which is the case with very high probability for compressed sensing problems. We construct a parameter sequence such that the corresponding FAL iterates are ε-feasible and ε-optimal for all ε > 0 within O(log(ε⁻¹)) FAL iterations. Moreover, FAL requires at most O(ε⁻¹) matrix-vector multiplications of the form Ax or A^T y to compute an ε-feasible, ε-optimal solution. We show that FAL can be easily extended to solve the basis pursuit denoising problem when there is a non-trivial level of noise on the measurements. We report the results of numerical experiments comparing FAL with state-of-the-art solvers on both noisy and noiseless compressed sensing problems. A striking property of FAL that we observed in the numerical experiments with randomly generated instances when there is no measurement noise was that FAL always correctly identifies the support of the target signal without any thresholding or post-processing, for moderately small error tolerance values.
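The outer/inner structure described in the abstract can be sketched as follows: an augmented Lagrangian outer loop for min ‖x‖₁ s.t. Ax = b, whose inner ℓ1-regularized least-squares sub-problems are solved by plain proximal gradient (ISTA) steps, each reducing to shrinkage. This is a simplified stand-in, not the actual FAL implementation: the fixed penalty parameter, iteration counts, and the small random instance are all illustrative assumptions, and FAL's inner solver and parameter sequence are replaced by fixed choices.

```python
import numpy as np

def shrink(v, tau):
    # soft-thresholding: prox map of tau * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

# Small illustrative compressed sensing instance (sizes and seed are assumptions).
rng = np.random.default_rng(0)
m, n = 6, 10
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[1], x_true[7] = 1.5, -2.0
b = A @ x_true                          # noiseless measurements

mu = 5.0                                # fixed penalty; FAL uses a parameter sequence
lip = mu * np.linalg.norm(A, 2) ** 2    # Lipschitz constant of the smooth AL term
t = 1.0 / lip                           # ISTA step size

x, y = np.zeros(n), np.zeros(m)
for _ in range(40):                     # outer augmented Lagrangian loop
    for _ in range(200):                # inner ISTA pass on the l1-regularized sub-problem
        grad = mu * A.T @ (A @ x - b + y / mu)
        x = shrink(x - t * grad, t)     # each update reduces to shrinkage
    y = y + mu * (A @ x - b)            # multiplier update
```

On noiseless instances the iterates become feasible (Ax ≈ b) while keeping ‖x‖₁ near the minimum, so the support of the target signal can be read off the nonzero pattern, consistent with the behavior the abstract reports for FAL itself.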