2020
DOI: 10.1002/rnc.5199
Distributed proximal‐gradient algorithms for nonsmooth convex optimization of second‐order multiagent systems

Abstract: This article studies the distributed nonsmooth convex optimization problems for second-order multiagent systems. The objective function is the summation of local cost functions which are convex but nonsmooth. Each agent only knows its local cost function, local constraint set, and neighbor information. By virtue of proximal operator and Lagrangian methods, novel continuous-time distributed proximal-gradient algorithms with derivative feedback are proposed to solve the nonsmooth convex optimization for the cons…
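The abstract centers on the proximal operator. As a minimal illustrative sketch only (a centralized, discrete-time LASSO example, not the paper's continuous-time distributed second-order algorithm; all variable names here are hypothetical), a proximal-gradient iteration for a nonsmooth objective f(x) + λ‖x‖₁ alternates a gradient step on the smooth part f with the proximal operator of the ℓ₁ term, which is elementwise soft-thresholding:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: elementwise soft-thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def proximal_gradient(grad_f, prox_g, x0, step, n_iters=500):
    """Iterate x_{k+1} = prox_{step*g}(x_k - step*grad_f(x_k))."""
    x = x0.copy()
    for _ in range(n_iters):
        x = prox_g(x - step * grad_f(x), step)
    return x

# Toy problem: minimize 0.5*||A x - b||^2 + lam*||x||_1
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
b = A @ x_true + 0.01 * rng.standard_normal(20)
lam = 0.1
L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of grad f; step = 1/L
x_hat = proximal_gradient(
    grad_f=lambda x: A.T @ (A @ x - b),
    prox_g=lambda v, t: soft_threshold(v, lam * t),
    x0=np.zeros(5),
    step=1.0 / L,
)
```

The paper's setting replaces this centralized iteration with continuous-time dynamics in which each agent applies the proximal operator to its local cost and exchanges state and derivative information with neighbors.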

Cited by 22 publications (7 citation statements)
References 40 publications
“…The authors of [32] presented a distributed quasimonotone subgradient algorithm for non-smooth convex optimisation over directed graphs. For distributed non-smooth convex optimisation problems in second-order multiagent systems, the authors of [33] recently developed novel continuous-time distributed proximal-gradient algorithms with derivative feedback. Motivated by these excellent efforts [31][32][33], it would be of interest to investigate distributed online learning algorithms that obtain dynamic regret bounds for non-smooth or non-convex objective functions over time-varying networks in future work.…”
Section: Discussion
confidence: 99%
“…In Reference 12, a distributed optimization system analysis and design method from the viewpoint of control systems is proposed. Various continuous-time systems then began to be studied, and a great number of continuous-time distributed algorithms [13][14][15][16][17][18][19][20][21][22][23][24][25][26][27] have been considered to tackle the distributed optimization problem for continuous-time systems in the past decade. In Reference 17, an edge-based adaptive algorithm is designed for multi-agent systems with general linear dynamics.…”
Section: Introduction
confidence: 99%
“…Distributed optimization with multi-agent networks has received considerable attention in diverse engineering problems, such as machine learning, 1 shortest distance optimization, 2 deception attacks, 3 cooperative transportation by mobile manipulators, 4 smart grids, 5 resource allocation, 6,7 the monotropic optimization problem, 8 and so on. Depending on whether the target decision variables are consistent or not, there are two types of distributed optimization problems in the literature.…”
Section: Introduction
confidence: 99%