2020
DOI: 10.1080/00207721.2020.1801885

Time-varying multi-objective optimisation over switching graphs via fixed-time consensus algorithms

Abstract: This paper considers distributed multi-objective optimisation problems with time-varying cost functions for network connected multi-agent systems over switching graphs. The scalarisation approach is used to convert the problem into a weighted-sum objective. Fixed-time consensus algorithms are developed for each agent to estimate the global variables, and drive all local copies of the decision vector to a consensus. The algorithm with fixed gains is first proposed, where some global information is required to c…
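The scalarisation step described in the abstract can be sketched as follows. This is a minimal illustration only: the two cost functions, the weights, and the step size are invented for the example and are not the objectives or the fixed-time consensus dynamics used in the paper.

```python
import numpy as np

# Two hypothetical time-varying costs for one decision vector x,
# each tracking a moving target.
def f1(x, t):
    return np.sum((x - np.sin(t)) ** 2)

def f2(x, t):
    return np.sum((x - np.cos(t)) ** 2)

# Weighted-sum scalarisation: the multi-objective problem becomes a
# single objective with assumed weights w.
def grad_weighted_sum(x, t, w=(0.5, 0.5)):
    return 2 * w[0] * (x - np.sin(t)) + 2 * w[1] * (x - np.cos(t))

# Euler-discretised gradient flow on the scalarised objective.
x = np.zeros(2)
dt = 0.01
for k in range(2000):
    t = k * dt
    x = x - dt * grad_weighted_sum(x, t)

# With equal weights, x tracks (sin(t) + cos(t)) / 2 in each coordinate,
# with a small lag due to the time variation of the targets.
```

Centralised gradient descent stands in here for the distributed fixed-time consensus update; in the paper each agent runs its own copy of the decision vector and the consensus algorithm drives the copies together.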

Cited by 14 publications (15 citation statements)
References 36 publications
“…where the last inequality follows from the strict convexity of the loss function (see Lemma 3). Recall from Lemma 5 that the derivatives dλ_{1,j}/dα|_{α=0} and dλ_{2,j}/dα|_{α=0} depend on the eigenvalues of (15), which clearly form a lower triangular matrix with m zero eigenvalues and m negative eigenvalues (following (16)). Therefore, dλ_{1,j}/dα|_{α=0} = 0 and dλ_{2,j}/dα|_{α=0} < 0, which implies that, treating αM_1 as a perturbation, the m zero eigenvalues λ_{2,j}(α) of M move toward the LHP while the λ_{1,j}(α) remain zero.…”
Section: Proof Of Convergencementioning
confidence: 99%
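The perturbation argument quoted above can be illustrated numerically. The matrices M0 and M1 and the value of α below are invented stand-ins for the structured matrices in the cited proof, chosen only to reproduce the qualitative behaviour: a block lower-triangular matrix with m zero and m negative eigenvalues, whose zero eigenvalues move into the left half-plane under a small perturbation.

```python
import numpy as np

m = 3
rng = np.random.default_rng(0)

# Stable lower-right block: eigenvalues clustered near -1.
A = -np.eye(m) - 0.1 * rng.standard_normal((m, m))
C = rng.standard_normal((m, m))

# M0 is block lower triangular, so its spectrum is the union of the
# diagonal blocks': m zero eigenvalues plus the m negative ones of A.
M0 = np.block([[np.zeros((m, m)), np.zeros((m, m))],
               [C, A]])

# Crude stand-in for the perturbation alpha * M1: damp the first block only.
M1 = np.block([[-np.eye(m), np.zeros((m, m))],
               [np.zeros((m, m)), np.zeros((m, m))]])

alpha = 0.05
eig0 = np.linalg.eigvals(M0)
eig1 = np.linalg.eigvals(M0 + alpha * M1)

# eig0 contains m zero eigenvalues; in eig1 they have moved to -alpha,
# i.e. into the open left half-plane, while the stable ones stay stable.
```

The same qualitative conclusion is what the quoted proof establishes rigorously via the derivatives of the eigenvalues with respect to α at α = 0.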
“…which completes the proof. Theorem 1, similar to [10]- [15], only requires strict convexity of the loss function, as compared to strong convexity in [20]- [26]. Moreover, the matrix perturbation method allows eigen-spectrum analysis of the time-varying matrix M , including possible discrete jumps in the hybrid mode.…”
Section: Proof Of Convergencementioning
confidence: 99%
“…Distributed learning over network-connected multi-agent systems has attracted significant research interest, due to its wide applications in the fields of social science, economy and engineering [6–10]. Recently, there have been a number of pioneering works dedicated to constructing distributed learning frameworks using consensus-based approaches, for example, References [11–14].…”
Section: Introductionmentioning
confidence: 99%