In this paper we study two problems that arise frequently in applications involving wireless sensor networks: the problem of reaching agreement on the values of local variables in a network of computational agents, and the problem of cooperatively solving a convex optimization problem whose objective function is the sum of local convex objective functions. As a more realistic abstraction of the gossip and broadcast communication protocols of a wireless network, our model incorporates a random communication graph between the agents. A further ingredient is the presence of local constraint sets to which the local variables of each agent are constrained. Our model allows the objective functions to be nondifferentiable and accommodates noisy communication links and subgradient errors. For the consensus problem we provide a diminishing step size algorithm that guarantees asymptotic convergence. The distributed optimization algorithm uses two diminishing step size sequences to account for communication noise and subgradient errors, and we establish conditions on these step sizes under which the dual task of reaching consensus and converging to the optimal set is achieved with probability one. In both cases we also consider the constant step size behavior of the algorithm and establish asymptotic error bounds.
Abstract: In this paper we deal with two problems of great interest in the field of distributed decision making and control. The first is the problem of achieving consensus on a vector of local decision variables in a network of computational agents when the decision variables of each node are constrained to lie in a subset of the Euclidean space. Such constraints arise from the local characteristics of each node, and we assume that the constraint sets for the local variables are private information of each node. We provide a distributed algorithm for the case when communication noise is present in the network and show that almost sure convergence can be achieved under certain assumptions. The second problem is that of distributed constrained optimization when the constraint sets are distributed over the agents. Our model further incorporates noisy communication links and stochastic errors in the evaluation of subgradients of the local objective functions. We establish sufficient conditions and provide an analysis guaranteeing convergence of the algorithm to the optimal set with probability one.
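To make the two-step-size mechanism described above concrete, the following is a minimal, hypothetical sketch of a distributed projected subgradient method of this flavor. It is not the paper's algorithm: the local objectives, the noise levels, the box constraint, and the specific step size exponents are illustrative assumptions chosen to satisfy the usual diminishing step size conditions (one sequence for consensus mixing, a faster-decaying one for the subgradient step).

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): distributed projected
# subgradient with two diminishing step sizes -- gamma_k for the consensus
# (mixing) step and alpha_k for the subgradient step -- under noisy links
# and stochastic subgradient errors.  All problem data is made up.

rng = np.random.default_rng(0)
n_agents, dim, iters = 5, 2, 2000

# Local nondifferentiable objectives f_i(x) = |a_i . x - b_i|;
# the network minimizes sum_i f_i over the local box constraint [-1, 1]^dim.
A = rng.normal(size=(n_agents, dim))
b = rng.normal(size=n_agents)

def subgrad(i, x):
    r = A[i] @ x - b[i]
    return np.sign(r) * A[i]            # a subgradient of |a_i . x - b_i|

def project(x):
    return np.clip(x, -1.0, 1.0)        # projection onto the local box set

W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic weights
x = rng.normal(size=(n_agents, dim))               # local estimates

for k in range(1, iters + 1):
    gamma = 1.0 / k**0.6                # consensus step size
    alpha = 1.0 / k                     # subgradient step size (decays faster)
    link_noise = 0.01 * rng.normal(size=x.shape)   # additive link noise
    mixed = x + gamma * ((W @ x + link_noise) - x)
    g = np.array([subgrad(i, mixed[i]) for i in range(n_agents)])
    g += 0.01 * rng.normal(size=g.shape)           # subgradient error
    x = np.array([project(mixed[i] - alpha * g[i]) for i in range(n_agents)])

# disagreement across agents after mixing and projected subgradient steps
spread = np.max(np.linalg.norm(x - x.mean(axis=0), axis=1))
```

The choice `gamma = 1/k**0.6`, `alpha = 1/k` is one instance of the generic requirements: both sequences are non-summable with summable squares, and `alpha/gamma` vanishes so that consensus is reached faster than the optimization drifts.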
We consider a min-max optimization problem over a time-varying network of computational agents, where each agent in the network has a local convex cost function that is private knowledge of the agent. The agents want to jointly minimize the maximum cost incurred by any agent in the network while maintaining the privacy of their objective functions. To solve the problem, we consider subgradient algorithms in which each agent computes its own estimates of an optimal point based on its own cost function and communicates these estimates to its neighbors in the network. The algorithms employ techniques from convex optimization, stochastic approximation, and averaging protocols (typically used to ensure proper information diffusion over a network), which allow for a time-varying network structure. We discuss two algorithms, one based on an exact-penalty approach and the other on a primal-dual Lagrangian approach, where both approaches utilize Bregman-distance functions. We establish convergence of the algorithms (with probability one) for a diminishing step size, and demonstrate their applicability by considering a power allocation problem in a cellular network.
We consider a setup in which we are given a network of agents with local objective functions that are coupled through a common decision variable. We provide a distributed stochastic gradient algorithm for the agents to compute an optimal decision variable that minimizes the worst-case loss incurred by any agent, and we establish almost sure convergence of the agents' estimates to a common optimal point. We demonstrate the use of our algorithm on a problem of min-max fair power allocation in a cellular network.
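The exact-penalty route mentioned above can be sketched as follows. This is a hypothetical toy, not the paper's method: the min-max problem min_x max_i f_i(x) is rewritten as min over (x, t) of t + mu * sum_i max(0, f_i(x) - t), each agent holds its own copy of (x, t), and consensus averaging is interleaved with noisy subgradient steps. The quadratic objectives, penalty weight mu, mixing matrix, and step size schedule are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): distributed stochastic
# subgradient method for min_x max_i f_i(x) via the exact-penalty
# reformulation  min_{x,t}  t + mu * sum_i max(0, f_i(x) - t).
# Each agent holds a copy (x_i, t_i) of the joint variable.

rng = np.random.default_rng(1)
n_agents, iters, mu = 4, 3000, 2.0

centers = rng.normal(size=n_agents)      # toy objectives f_i(x) = (x - c_i)^2

def f(i, x):
    return (x - centers[i]) ** 2

def grad_f(i, x):
    return 2.0 * (x - centers[i])

W = np.full((n_agents, n_agents), 1.0 / n_agents)  # doubly stochastic weights
z = np.zeros((n_agents, 2))              # each row is an agent's (x, t)

for k in range(1, iters + 1):
    alpha = 1.0 / k**0.75                # diminishing step size
    z = W @ z                            # consensus (averaging) step
    for i in range(n_agents):
        x, t = z[i]
        active = 1.0 if f(i, x) > t else 0.0      # hinge subgradient indicator
        gx = mu * active * grad_f(i, x)           # d/dx of penalty term
        gt = 1.0 / n_agents - mu * active         # d/dt of t/n + penalty term
        noise = 0.01 * rng.normal(size=2)         # stochastic gradient error
        z[i] -= alpha * (np.array([gx, gt]) + noise)

x_est = z[:, 0].mean()                   # common estimate after consensus
worst = max(f(i, x_est) for i in range(n_agents))
```

Splitting the linear term t as t/n across agents keeps the sum of the local penalized functions equal to the global penalized objective, which is what lets each agent update using only its private f_i.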