In this paper, we study the problem of finding the least squares solution of an over-determined system of linear algebraic equations over a network in a distributed manner. Each node has access to one of the linear equations and holds a dynamic state. We first propose a distributed least squares solver over connected undirected interaction graphs and establish a necessary and sufficient condition on the step-size under which the algorithm exponentially converges to the least squares solution. Next, we develop a distributed least squares solver over strongly connected directed graphs and show that the proposed algorithm converges exponentially to the least squares solution provided that the step-size is sufficiently small. Moreover, we develop a finite-time least squares solver by equipping the proposed algorithms with a finite-time decentralized computation mechanism. The theoretical findings are validated and illustrated by numerical simulation examples.
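To make the algorithmic idea concrete, the following is a minimal numerical sketch of one standard distributed least squares iteration of this flavor: a discretized primal-dual (saddle-point) update over an undirected ring graph, in which each agent combines the gradient of its local residual with consensus coupling on a primal and an auxiliary state. The data, graph, step-size, and iteration count are illustrative assumptions of ours, not necessarily the paper's exact design.

```python
import numpy as np

# Hypothetical setup: m = 4 agents, each holding one row (a_i, b_i) of an
# over-determined system A x = b with unknown x in R^n, n = 2.
rng = np.random.default_rng(0)
m, n = 4, 2
A = rng.standard_normal((m, n))   # row i is agent i's equation a_i^T x = b_i
b = rng.standard_normal(m)

# Laplacian of an undirected ring over the 4 agents (connected).
L = np.array([[ 2., -1.,  0., -1.],
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])

alpha = 0.05          # step-size, assumed small enough for stability
x = np.zeros((m, n))  # primal state held by each agent
v = np.zeros((m, n))  # auxiliary (dual) state held by each agent

for _ in range(20000):
    # gradient of agent i's local residual 0.5 * (a_i^T x_i - b_i)^2
    grad = A * (np.einsum('ij,ij->i', A, x) - b)[:, None]
    x_next = x - alpha * (grad + L @ x + L @ v)  # consensus + local gradient
    v = v + alpha * (L @ x)                      # dual update drives consensus
    x = x_next

x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
print(np.max(np.abs(x - x_star)))  # all agents close to the LS solution
```

Here the dual state v is what lets every agent reach the exact least squares solution rather than a consensus-regularized approximation; with a fixed step-size this discretization converges exponentially for connected undirected graphs, consistent with the step-size condition discussed in the abstract.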
In this paper, we propose a fully distributed algorithm for second-order continuous-time multi-agent systems to solve the distributed optimization problem. The global objective function is a sum of private cost functions associated with the individual agents, and the interaction between agents is described by a weighted undirected graph. We show that the proposed algorithm converges exponentially if the underlying graph is connected, each private cost function is locally gradient-Lipschitz-continuous, and the global objective function is restricted strongly convex with respect to the global minimizer. Moreover, to reduce the overall communication load, we propose a dynamic event-triggered communication mechanism that is free of Zeno behavior. We show that exponential convergence is preserved if the private cost functions are, in addition, globally gradient-Lipschitz-continuous. Numerical simulations are provided to illustrate the effectiveness of the theoretical results.
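The sketch below illustrates the dynamic event-triggering idea in its simplest form: each agent rebroadcasts its state only when its measurement error outgrows an internal, decaying threshold variable. For readability we simplify to single-integrator agents with quadratic private costs f_i(x) = (x - c_i)^2 / 2 driven by a primal-dual loop; these simplifications, and all parameter values, are our assumptions, whereas the paper treats second-order dynamics and general cost functions.

```python
import numpy as np

N = 4
c = np.array([1.0, 3.0, -2.0, 4.0])   # private cost minimizers; optimum = mean(c)
L = np.array([[ 2., -1.,  0., -1.],   # Laplacian of an undirected ring
              [-1.,  2., -1.,  0.],
              [ 0., -1.,  2., -1.],
              [-1.,  0., -1.,  2.]])

dt, sigma, beta = 0.01, 0.5, 0.5
x = np.zeros(N)        # agent states
v = np.zeros(N)        # integral (dual) states
xhat = x.copy()        # last-broadcast states seen by neighbors
eta = np.ones(N)       # dynamic trigger thresholds
events = 0

for _ in range(8000):
    # trigger rule: rebroadcast only when the measurement error
    # |x_i - xhat_i| reaches the internal dynamic variable sigma * eta_i
    err = np.abs(x - xhat)
    fire = err >= sigma * eta
    xhat[fire] = x[fire]
    events += int(fire.sum())

    grad = x - c                        # gradients of the private costs
    x += dt * (-grad - L @ xhat - v)    # primal update uses broadcast states
    v += dt * (L @ xhat)                # dual update enforces consensus
    eta += dt * (-beta * eta)           # threshold decays, forcing errors to shrink

print(x, "target:", c.mean(), "broadcasts:", events)
```

Because the threshold eta decays exponentially, the broadcast states track the true states ever more tightly, so the loop inherits the convergence of the underlying continuous algorithm while communicating only at discrete trigger times.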
We develop a distributed stochastic gradient descent algorithm for solving non-convex optimization problems under the assumption that the local objective functions are twice continuously differentiable with Lipschitz continuous gradients and Hessians. We provide sufficient conditions on the step-sizes that guarantee asymptotic mean-square convergence of the proposed algorithm. We apply the developed algorithm to a distributed supervised-learning problem, in which a set of networked agents collaboratively train their individual neural nets to recognize handwritten digits in images. The results indicate that all agents achieve similar performance, comparable to that of a centrally trained neural net. Numerical results also show that the proposed distributed algorithm allows the individual agents to recognize all the digits even though the training data corresponding to all the digits is not locally available to each agent.
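The following is a minimal sketch of consensus-based distributed SGD, the generic pattern behind algorithms of this kind: each agent averages its parameters with its neighbors through a doubly stochastic mixing matrix, then takes a stochastic gradient step on its local data. The toy least-squares task, the mixing matrix, and the diminishing step-size schedule are our illustrative assumptions; the paper itself addresses non-convex objectives and neural-net training.

```python
import numpy as np

rng = np.random.default_rng(1)
N, n = 4, 3
w_true = rng.standard_normal(n)

# each agent holds 100 private, noisy linear measurements of w_true
data = []
for _ in range(N):
    X = rng.standard_normal((100, n))
    data.append((X, X @ w_true + 0.1 * rng.standard_normal(100)))

# doubly stochastic mixing matrix for an undirected ring of 4 agents
W = np.array([[0.5 , 0.25, 0.  , 0.25],
              [0.25, 0.5 , 0.25, 0.  ],
              [0.  , 0.25, 0.5 , 0.25],
              [0.25, 0.  , 0.25, 0.5 ]])

w = np.zeros((N, n))                  # each row is one agent's parameters
for k in range(4000):
    alpha = 1.0 / (10 + k)            # diminishing step-size
    g = np.zeros_like(w)
    for i, (X, y) in enumerate(data):
        j = rng.integers(len(y))      # sample one local data point
        g[i] = (X[j] @ w[i] - y[j]) * X[j]   # stochastic local gradient
    w = W @ w - alpha * g             # mix with neighbors, then SGD step

print(np.max(np.abs(w - w_true)))     # all agents end up near the minimizer
```

The mixing step is what spreads information across the network: even though no agent sees the others' data, the consensus averaging pulls all parameter copies toward a common point, mirroring the abstract's observation that agents can learn digits whose training samples they do not hold locally.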