We study strongly convex distributed optimization problems in which a set of agents collaboratively solve a separable optimization problem. In this paper, we propose and study a two-time-scale decentralized gradient descent algorithm for a broad class of lossy information-sharing models over time-varying graphs. One time-scale fades out the (lossy) incoming information from neighboring agents, and the other regulates the local loss functions' gradients. For strongly convex loss functions and a proper choice of step-sizes, we show that the agents' estimates converge to the global optimal state at a rate of $O(T^{-1/2})$. Another important contribution of this work is to provide novel tools for dealing with diminishing average weights over time-varying graphs.
In this work, we study convex distributed optimization problems in which a set of agents collaboratively solve a separable optimization problem with noisy/lossy information sharing over time-varying networks. We study the almost sure convergence of a two-time-scale decentralized gradient descent algorithm to consensus on an optimizer of the objective loss function. One time-scale fades out the imperfect incoming information from neighboring agents, and the other adjusts for the local loss functions' gradients. We show that under certain conditions on the connectivity of the underlying time-varying network and on the time-scale sequences, the dynamics converge almost surely to an optimal point in the optimizer set of the loss function.
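To make the iteration concrete, the following is a minimal sketch of one way such a two-time-scale decentralized update can be written. The function name `two_time_scale_dgd`, the mixing-matrix construction, the additive-noise model of lossy sharing, and the step-size schedules `alpha` and `beta` are all illustrative assumptions standing in for the paper's exact choices, not the authors' implementation.

```python
import numpy as np

def two_time_scale_dgd(grads, num_agents, dim, T,
                       alpha=lambda t: 1.0 / (t + 1),
                       beta=lambda t: 1.0 / np.sqrt(t + 1),
                       noise_std=0.1, seed=0):
    """Sketch of a two-time-scale decentralized gradient iteration.

    grads: list of callables; grads[i](x) returns the gradient of
    agent i's local loss at x.  The graph sequence, noise model, and
    step sizes below are assumptions for illustration only.
    """
    rng = np.random.default_rng(seed)
    x = np.zeros((num_agents, dim))  # agents' local estimates
    for t in range(T):
        # Time-varying mixing weights: a doubly stochastic matrix built
        # from a random permutation (a stand-in for the paper's graphs).
        P = np.eye(num_agents)[rng.permutation(num_agents)]
        W = 0.5 * (P + np.eye(num_agents))
        # Neighbors' states arrive corrupted by noise (lossy sharing).
        x_noisy = x + noise_std * rng.standard_normal(x.shape)
        consensus = W @ x_noisy - x  # disagreement with (noisy) neighbors
        grad = np.stack([grads[i](x[i]) for i in range(num_agents)])
        # beta(t) fades out the imperfect incoming information,
        # alpha(t) regulates the local gradients.
        x = x + beta(t) * consensus - alpha(t) * grad
    return x

# Usage: agent i minimizes (x - i)^2 / 2, so the optimizer of the sum
# is the average of the targets; the estimates drift toward it.
grads = [lambda x, c=float(i): x - c for i in range(4)]
x_final = two_time_scale_dgd(grads, num_agents=4, dim=1, T=5000)
```

In this sketch the consensus step size `beta(t)` decays more slowly than the gradient step size `alpha(t)`, so agreement among agents is driven on a faster time-scale than the optimization itself, matching the separation of roles described above.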