2015
DOI: 10.9746/jcmsi.8.234

Distributed Multi-Agent Optimization Based on a Constrained Subgradient Method

Abstract: This paper proposes a protocol for a distributed optimization problem in which the agents in a network minimize the average of their objective functions while satisfying the constraint of each agent. The protocol can handle constraints that are not common to all agents. Instead of invoking dual functions, each agent transmits only its decision variable together with 1-bit information indicating whether its local constraint is fulfilled. A proof of consensus and convergence is provided based on the constrained subgradient method.
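The abstract describes a consensus-based scheme in which each agent i holds a convex objective f_i and a local constraint, the network minimizes the average of the f_i, and only the decision variable plus a 1-bit constraint-fulfillment flag is exchanged. Below is a minimal sketch of one local update of such a scheme, assuming a doubly stochastic weight matrix and a hypothetical rule that switches between an objective subgradient step and a constraint subgradient step; the paper's exact update may differ, and all names here are illustrative.

```python
import numpy as np

def agent_update(i, x, f_subgrads, g_funcs, g_subgrads, W, alpha):
    """One hypothetical update of agent i (illustrative sketch, not the paper's exact rule).

    x           : list of current decision variables x[j] of all agents
    f_subgrads  : f_subgrads[j](z) is a subgradient of agent j's objective at z
    g_funcs     : g_funcs[j](z) <= 0 encodes agent j's local constraint
    g_subgrads  : g_subgrads[j](z) is a subgradient of g_j at z
    W           : doubly stochastic consensus weights, W[i][j]
    alpha       : diminishing step size for this iteration
    """
    # Consensus step: mix the agents' decision variables with the weights.
    v = sum(W[i][j] * x[j] for j in range(len(x)))

    # 1-bit information: each agent would broadcast only whether its own
    # local constraint is currently fulfilled (True/False), not the constraint itself.
    fulfilled = [g_funcs[j](x[j]) <= 0 for j in range(len(x))]

    # Constrained subgradient step (hypothetical rule): descend on the objective
    # when agent i's constraint is met, otherwise descend on the constraint
    # violation. The paper combines the received bits in its own specific way.
    if fulfilled[i]:
        d = f_subgrads[i](v)
    else:
        d = g_subgrads[i](v)
    return v - alpha * d

# Toy usage: two agents, scalar variables, objectives (z-1)^2 and (z+1)^2,
# constraints z <= 3 and z >= -3.
x = [np.array([2.0]), np.array([-1.0])]
f_subgrads = [lambda z: 2 * (z - 1.0), lambda z: 2 * (z + 1.0)]
g_funcs = [lambda z: float(z[0]) - 3.0, lambda z: -float(z[0]) - 3.0]
g_subgrads = [lambda z: np.array([1.0]), lambda z: np.array([-1.0])]
W = [[0.5, 0.5], [0.5, 0.5]]
x_next = [agent_update(i, x, f_subgrads, g_funcs, g_subgrads, W, alpha=0.1) for i in range(2)]
```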

Cited by 4 publications (8 citation statements). References 8 publications.
“…12, 14, 16, and 18). The convergence speed of the synchronous algorithm with the scalable step size (13) is better than that of the asynchronous one with the simple diminishing step size (12), but worse than that of the one with the scalable step size (13). Thus, we see that the scalable diminishing step sizes (7) and (13) are effective in enhancing the convergence speed of the algorithms.…”
Section: Numerical Examples (mentioning; confidence: 79%)
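The quoted passage contrasts a simple diminishing step size (its Eq. (12)) with a "scalable" diminishing step size (its Eqs. (7) and (13)), whose exact formulas are not reproduced on this page. Purely as an illustration, the sketch below plugs two hypothetical schedules of that flavour, a/(t+1) and a*s/(t+s), into a scalar subgradient iteration; the scalable form keeps early steps larger, which is the kind of effect the quote attributes to the faster variants.

```python
def simple_diminishing(t, a=1.0):
    # Hypothetical simple diminishing rule; a stand-in for Eq. (12) of the citing paper.
    return a / (t + 1)

def scalable_diminishing(t, a=1.0, s=10.0):
    # Hypothetical "scalable" rule with a tunable scale s; a stand-in for Eqs. (7)/(13).
    return a * s / (t + s)

def subgradient_run(step_size, subgrad, x0=5.0, iters=200):
    # Plain scalar subgradient descent driven by the chosen step-size schedule.
    x = x0
    for t in range(iters):
        x -= step_size(t) * subgrad(x)
    return x

abs_subgrad = lambda x: 1.0 if x >= 0 else -1.0  # a subgradient of |x|
x_simple = subgradient_run(simple_diminishing, abs_subgrad)
x_scalable = subgradient_run(scalable_diminishing, abs_subgrad)
```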
“…5 shows the result of p = 1 by the asynchronous algorithm with the simple diminishing step size (12), and Fig. 6 shows the result of p = 1 with the scalable step size (13). Note that the optimal solution for this instance in (3) (p = 1) is 1.00.…”
Section: Numerical Examples (mentioning; confidence: 91%)
“…We have shown (9) by Lemma 2. To prove (10) and (12), we assume the following, which would lead to a contradiction: there exist ε > 0 and an integer t₁ such that for all t ≥ t₁ it holds that…”
Section: Proof of the Convergence (mentioning; confidence: 99%)
“…Protocols shown there involve optimization along with updates of dual variables. On the other hand, the authors have shown approaches via a protocol with additional information on fulfillment of constraints [10], [11], where the protocol works with decision variables and a small portion of data that indicates fulfillment of the local constraints of the agents.…”
Section: Introduction (mentioning; confidence: 99%)