In this paper, the distributed resource allocation optimization problem is investigated. The allocation decisions are made to minimize the sum of all the agents' local objective functions while satisfying both the global network resource constraint and the local allocation feasibility constraints. Here the data corresponding to each agent in this separable optimization problem, such as the network resources, the local allocation feasibility constraint, and the local objective function, is accessible only to that agent and cannot be shared with others, which poses new challenges for this distributed optimization problem. Based on either projection or differentiated projection, two classes of continuous-time algorithms are proposed to solve this distributed optimization problem in an initialization-free and scalable manner. Thus, no re-initialization is required even if the operating environment or network configuration changes, making it possible to achieve "plug-and-play" optimal operation of networked heterogeneous agents. Convergence of the algorithms is guaranteed for strictly convex objective functions, and exponential convergence is proved for strongly convex functions without local constraints. The proposed algorithm is then applied to the distributed economic dispatch problem in power grids, to demonstrate how it achieves the global optimum in a scalable way even when the generation cost, system load, or network configuration changes.
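As a rough illustration of the projection-based idea described above (a minimal sketch with hypothetical quadratic costs and box constraints, not the paper's exact dynamics), the following forward-Euler discretization of projected primal-dual dynamics allocates a shared resource demand `d` among three agents:

```python
import numpy as np

# Hypothetical quadratic costs f_i(x) = 0.5*a_i*x^2 + b_i*x, a total resource
# demand d, and local box constraints [lo_i, hi_i]. The primal step follows
# the negative local gradient plus a common multiplier lam, projected onto
# the local box; lam is driven by the violation of sum_i x_i = d.
a = np.array([1.0, 2.0, 0.5])
b = np.array([0.2, -0.1, 0.3])
lo, hi = np.zeros(3), np.full(3, 5.0)
d = 4.0

x = np.zeros(3)
lam = 0.0
dt = 0.01
for _ in range(20000):
    grad = a * x + b
    # projection onto the local feasibility box after the primal step
    x = np.clip(x + dt * (lam - grad), lo, hi)
    # dual step: adjust the shared price until the resource balance holds
    lam -= dt * (x.sum() - d)
```

At a stationary point the multiplier `lam` equals each unclipped marginal cost `a_i*x_i + b_i`, which is the standard optimality condition for this separable allocation problem.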
In this paper, we study a distributed continuous-time design for aggregative games with coupled constraints, in which a group of agents seeks the generalized Nash equilibrium via simple local information exchange. To solve the problem, we propose a distributed algorithm based on projected dynamics and nonsmooth tracking dynamics, which applies even when the interaction topology of the multi-agent network is time-varying. Moreover, we prove the convergence of the nonsmooth algorithm for the distributed game by exploiting its special structure and by combining techniques from variational inequalities and Lyapunov functions.
In this paper, we propose a distributed primal-dual algorithm for the computation of a generalized Nash equilibrium (GNE) in noncooperative games over network systems. In the considered game, not only does each player's local objective function depend on other players' decisions, but the feasible decision sets of all the players are also coupled through a globally shared affine inequality constraint. Adopting the variational GNE, that is, the solution of a variational inequality, as a refinement of the GNE, we introduce a primal-dual algorithm that players can use to seek it in a distributed manner. Each player only needs to know its local objective function, its local feasible set, and a local block of the affine constraint. Meanwhile, each player only needs to observe the decisions on which its local objective function explicitly depends through the interference graph, and to share information related to multipliers with its neighbors through a multiplier graph. Through a primal-dual analysis and an augmentation of variables, we reformulate the problem as finding the zeros of a sum of monotone operators. Our distributed primal-dual algorithm is based on forward-backward operator splitting methods. We prove its convergence to the variational GNE for fixed step sizes under some mild assumptions. A distributed algorithm with inertia is then also introduced and analyzed for variational GNE seeking. Finally, numerical simulations for network Cournot competition are given to illustrate the algorithm's efficiency and performance.

Engineering network systems, like power grids, communication networks, transportation networks, and sensor networks, play a foundational role in modern society. The efficient and secure operation of various network systems relies on efficiently solving the decision and control problems arising in these large-scale network systems.
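In the GNE abstract above, the core of a forward-backward splitting method is the fixed-point iteration x⁺ = Proj_C(x − γF(x)), where F is the game's pseudo-gradient (the forward step) and the projection is the backward step. A toy two-player quadratic game with hypothetical data and only local box constraints (omitting the shared affine constraint and multiplier exchange of the full algorithm) illustrates the iteration:

```python
import numpy as np

# Hypothetical game: player i minimizes 0.5*x_i^2 + c*x_i*x_j + b_i*x_i over
# the box [-1, 1]. With |c| < 1 the pseudo-gradient F is strongly monotone,
# so the projected iteration converges for a small enough step gamma.
c, b = 0.5, np.array([1.0, -2.0])
lo, hi = -1.0, 1.0

def F(x):
    # pseudo-gradient: each player's partial gradient in its own decision
    return np.array([x[0] + c * x[1] + b[0],
                     x[1] + c * x[0] + b[1]])

x = np.zeros(2)
gamma = 0.5
for _ in range(200):
    # forward step (gradient of own cost), then backward step (projection)
    x = np.clip(x - gamma * F(x), lo, hi)

# converges to x = (-1, 1): both players are pushed to the box boundary
```

The fixed point satisfies the variational inequality over the box, which is exactly the variational refinement the abstract refers to in this constraint-free special case.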
In many decision problems, the nodes can be regarded as agents that need to make local decisions within local feasible sets, possibly limited by the shared network resources. Meanwhile, each agent has a local cost/utility function to be optimized, which depends on the decisions of other agents. The traditional way to solve such decision problems over networks is the centralized optimization approach, which relies on a control center to gather the data of the problem and to optimize the social welfare (usually taking the form of the sum of local objective functions) subject to the local and global constraints. The centralized optimization approach may not be suitable for decision problems over large-scale networks, since it needs bidirectional communication between all the network nodes and the control center, it is not robust to failure of the center node, and the computational burden on the center is unbearable. It is also undesirable because the privacy of each agent might be compromised when the data is transferred.
This technical note studies the distributed optimization problem of minimizing a sum of nonsmooth convex cost functions with local constraints. We first propose a novel distributed continuous-time projected algorithm, in which each agent knows only its local cost function and local constraint set, for the constrained optimization problem. We then prove that all the agents running the algorithm find the same optimal solution while keeping their states bounded. We carry out a complete convergence analysis by employing nonsmooth Lyapunov functions for the stability analysis of differential inclusions. Finally, we provide a numerical example for illustration.
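As a loose illustration of the nonsmooth setting in the note above (a toy discretized sketch, not the note's exact differential inclusion), each agent can combine a consensus step with a projected subgradient step. Here the hypothetical local costs are f_i(x) = |x − r_i| with reference points r_i, so the common minimizer over the box is the (projected) median of the r_i:

```python
import numpy as np

# Hypothetical data: 3 agents, local costs f_i(x) = |x - r_i|, shared box
# constraint [lo, hi], and a doubly stochastic mixing matrix W modeling
# neighbor averaging. Each agent mixes neighbors' estimates, takes a
# diminishing subgradient step on its own cost, then projects onto the box.
r = np.array([0.0, 1.0, 4.0])
lo, hi = 0.5, 3.0
W = np.array([[0.50, 0.25, 0.25],
              [0.25, 0.50, 0.25],
              [0.25, 0.25, 0.50]])

x = np.array([0.5, 2.0, 3.0])
for k in range(1, 5001):
    g = np.sign(x - r)                  # a subgradient of |x - r_i|
    x = np.clip(W @ x - (1.0 / k) * g, lo, hi)

# agents agree on a minimizer of sum_i |x - r_i| over [lo, hi]: the median 1.0
```

The diminishing step 1/k tames the nonsmoothness, while the averaging through W drives the agents to a common estimate, mirroring (in discrete time) the consensus-plus-projection structure the note analyzes in continuous time.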
Index Terms: Constrained distributed optimization, continuous-time algorithms, multi-agent systems, nonsmooth analysis, projected dynamical systems.
This paper studies distributed algorithms for the extended monotropic optimization problem, which is a general convex optimization problem with a certain separable structure. The considered objective function is the sum of local convex functions assigned to agents in a multi-agent network, with private set constraints and affine equality constraints. Each agent only knows its local objective function, local constraint set, and neighbor information. We propose two novel continuous-time distributed subgradient-based algorithms with projected output feedback and derivative feedback, respectively, to solve the extended monotropic optimization problem. Moreover, we show that the algorithms converge to the optimal solutions under some mild conditions, by virtue of variational inequalities, Lagrangian methods, decomposition methods, and nonsmooth Lyapunov analysis. Finally, we give two examples to illustrate the applications of the proposed algorithms.
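A minimal numerical sketch of the extended monotropic setup described above, with hypothetical quadratic costs and affine data: two agents minimize local costs subject to a coupling affine equality constraint. This uses a plain Euler discretization of Lagrangian saddle-point dynamics, rather than the paper's projected-output- or derivative-feedback algorithms:

```python
import numpy as np

# Hypothetical instance: minimize 0.5*x_1^T Q_1 x_1 + 0.5*x_2^T Q_2 x_2
# subject to A_1 x_1 + A_2 x_2 = b. Primal variables descend the Lagrangian,
# the multiplier ascends it (saddle-point dynamics, forward Euler).
Q = [np.diag([1.0, 2.0]), np.diag([3.0, 1.0])]
A = [np.array([[1.0, 0.0]]), np.array([[0.0, 1.0]])]
b = np.array([2.0])

x = [np.zeros(2), np.zeros(2)]
lam = np.zeros(1)
dt = 0.01
for _ in range(20000):
    for i in range(2):
        # local primal step: each agent uses only Q_i, A_i, and lam
        x[i] = x[i] + dt * (-(Q[i] @ x[i]) - A[i].T @ lam)
    # dual step: driven by the violation of the coupling constraint
    lam = lam + dt * (sum(A[i] @ x[i] for i in range(2)) - b)
```

The stationary point satisfies the KKT conditions Q_i x_i + A_iᵀλ = 0 and Σ_i A_i x_i = b; in a fully distributed implementation the single multiplier would be replaced by local multiplier estimates reaching consensus, as in the paper's algorithms.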