Abstract: In this paper, we consider distributed optimization problems where the goal is to minimize a sum of objective functions over a multi-agent network. We focus on the case when the inter-agent communication is described by a strongly-connected, directed graph. The proposed algorithm, ADD-OPT (Accelerated Distributed Directed Optimization), achieves the best known convergence rate for this class of problems, O(µ^k), 0 < µ < 1, given strongly-convex objective functions with globally Lipschitz-continuous gradients, where k is the number of iterations. Moreover, ADD-OPT supports a wider and more realistic range of step-sizes in contrast to existing work. In particular, we show that ADD-OPT converges for arbitrarily small (positive) step-sizes. Simulations further illustrate our results.
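For concreteness, below is a minimal sketch of a push-sum-based gradient-tracking recursion of the kind ADD-OPT builds on; the three-agent directed graph, the quadratic local costs, and the step-size value are illustrative assumptions, not the paper's exact setup or update.

```python
import numpy as np

# Column-stochastic weights for an assumed 3-agent directed graph: each sender
# splits its value equally among itself and its out-neighbors (which requires
# knowing its own out-degree).
B = np.array([[1/3, 0.0, 1/2],
              [1/3, 1/2, 0.0],
              [1/3, 1/2, 1/2]])

b = np.array([1.0, 4.0, 7.0])          # local costs f_i(x) = 0.5 * (x - b_i)^2
grad = lambda z: z - b                  # stacked per-agent gradients
alpha = 0.05                            # small common step-size (assumed value)

x = np.zeros(3)                         # primal iterates
y = np.ones(3)                          # push-sum scaling variables
z = x / y                               # de-biased estimates
w = grad(z)                             # gradient trackers

for _ in range(2000):
    x = B @ x - alpha * w
    y = B @ y
    z_new = x / y
    w = B @ w + grad(z_new) - grad(z)   # track the network-average gradient
    z = z_new

print(z, "optimum:", b.mean())          # each agent approaches mean(b_i) = 4.0
```

The y-variable is the push-sum correction for the column-stochastic (non-doubly-stochastic) mixing, while the w-variable tracks the average gradient; together these are what allow a constant step-size and a geometric rate in this class of methods.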
This paper considers a distributed optimization problem over a multi-agent network, in which the objective function is a sum of individual cost functions at the agents. We focus on the case when communication between the agents is described by a directed graph. Existing distributed optimization algorithms for directed graphs require at least the knowledge of the neighbors' out-degree at each agent (due to the requirement of column-stochastic matrices). In contrast, our algorithm requires no such knowledge. Moreover, the proposed algorithm achieves the best known rate of convergence for this class of problems, O(µ^k) for 0 < µ < 1, where k is the number of iterations, given that the objective functions are strongly-convex and have Lipschitz-continuous gradients. Numerical experiments are also provided to illustrate the theoretical findings.
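The out-degree requirement stems from how the weights are normalized. The short sketch below, on an assumed three-agent directed graph, contrasts the two constructions: column-stochastic weights divide by each sender's out-degree (information about who receives from an agent), whereas row-stochastic weights only require each agent to average the data it actually receives.

```python
import numpy as np

# adj[i, j] = 1 if agent j sends to agent i (self-loops included); assumed graph.
adj = np.array([[1., 0., 1.],
                [1., 1., 0.],
                [1., 1., 1.]])

out_deg = adj.sum(axis=0)     # how many agents receive from j: sender-side info
in_deg  = adj.sum(axis=1)     # how many agents i hears from: purely local at i

B = adj / out_deg             # column-stochastic: needs each sender's out-degree
R = adj / in_deg[:, None]     # row-stochastic: built only from received links

print(B.sum(axis=0))          # columns sum to 1
print(R.sum(axis=1))          # rows sum to 1
```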
Abstract: We propose Directed-Distributed Projected Subgradient (D-DPS) to solve a constrained optimization problem over a multi-agent network, where the goal of the agents is to collectively minimize the sum of locally known convex functions. Each agent in the network owns only its local objective function, constrained to a commonly known convex set. We focus on the circumstance when communication between the agents is described by a directed network. D-DPS uses surplus consensus to overcome the asymmetry caused by the directed communication network. The analysis shows the convergence rate to be O(ln k/√k).
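As a small, self-contained illustration of the projected-subgradient ingredient (not the full distributed D-DPS scheme), the sketch below runs one agent's projected subgradient iteration with a diminishing step-size; the non-smooth cost, the unit-ball constraint set, and the step-size schedule are assumptions chosen for illustration.

```python
import numpy as np

def project_onto_ball(v, r=1.0):
    """Euclidean projection onto the ball {x : ||x|| <= r}."""
    nrm = np.linalg.norm(v)
    return v if nrm <= r else (r / nrm) * v

# One agent's non-smooth convex cost f(x) = |x_1 - 2| + |x_2 + 2| and a subgradient.
def subgrad(x):
    return np.array([np.sign(x[0] - 2.0), np.sign(x[1] + 2.0)])

x = np.zeros(2)
for k in range(1, 500):
    x = project_onto_ball(x - subgrad(x) / np.sqrt(k))   # diminishing step-size

print(x, np.linalg.norm(x))   # the iterate stays feasible (norm <= 1)
```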
In this paper, we discuss distributed optimization over directed graphs, where doubly-stochastic weights cannot be constructed. Most of the existing algorithms overcome this issue by applying push-sum consensus, which utilizes column-stochastic weights. The formulation of column-stochastic weights requires each agent to know (at least) its out-degree, which may be impractical in, e.g., broadcast-based communication protocols. In contrast, we describe FROST (Fast Row-stochastic-Optimization with uncoordinated STep-sizes), an optimization algorithm applicable to directed graphs that does not require the knowledge of out-degrees; its implementation is straightforward, as each agent locally assigns weights to the incoming information and locally chooses a suitable step-size. We show that FROST converges linearly to the optimal solution for smooth and strongly-convex functions, given that the largest step-size is positive and sufficiently small.
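The sketch below outlines a row-stochastic gradient-tracking recursion with locally chosen (uncoordinated) step-sizes in the spirit of FROST; the three-agent directed graph, the quadratic costs, and the particular step-size values are assumptions, and the paper should be consulted for the exact algorithm and its step-size bounds.

```python
import numpy as np

# Row-stochastic weights: each agent only averages what it receives (assumed graph).
R = np.array([[1/2, 0.0, 1/2],
              [1/2, 1/2, 0.0],
              [1/3, 1/3, 1/3]])

b = np.array([1.0, 4.0, 7.0])            # local costs f_i(x) = 0.5 * (x - b_i)^2
grad = lambda x: x - b
alpha = np.array([0.05, 0.03, 0.04])     # locally chosen, uncoordinated step-sizes

x = np.zeros(3)
Y = np.eye(3)                            # left-eigenvector estimates, y_{i,0} = e_i
z = grad(x)                              # gradient trackers

for _ in range(2000):
    x_new = R @ x - alpha * z
    Y_new = R @ Y
    # re-scale the fresh gradients by each agent's own eigenvector estimate
    z = R @ z + grad(x_new) / np.diag(Y_new) - grad(x) / np.diag(Y)
    x, Y = x_new, Y_new

print(x, "optimum:", b.mean())           # each agent approaches mean(b_i) = 4.0
```

Note that only the receiver-side normalization and a local step-size enter the update, which is why no agent needs to know its out-degree.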