2021
DOI: 10.48550/arXiv.2103.05378
Preprint

Decentralized Non-Convex Learning with Linearly Coupled Constraints

Abstract: Motivated by the need for decentralized learning, this paper aims at designing a distributed algorithm for solving nonconvex problems with general linear constraints over a multi-agent network. In the considered problem, each agent owns some local information and a local variable for jointly minimizing a cost function, but local variables are coupled by linear constraints. Most of the existing methods for such problems are only applicable for convex problems or problems with specific linear constraints. There …
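The abstract's description of per-agent costs and linearly coupled local variables matches the standard template for this line of work. A minimal formulation is sketched below; the matrices A_n and vector b are assumed placeholders, and the paper's exact constraint form (equality vs. inequality, structure of A_n) may differ.

```latex
% Assumed generic form of a decentralized problem with linearly coupled constraints:
% agent n holds a local (possibly nonconvex) cost f_n and a local variable x_n,
% and the local variables are tied together by a shared linear constraint.
\begin{aligned}
\min_{x_1,\dots,x_N} \quad & \sum_{n=1}^{N} f_n(x_n) \\
\text{s.t.} \quad & \sum_{n=1}^{N} A_n x_n = b .
\end{aligned}
```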

Cited by 1 publication (3 citation statements)
References 39 publications
“…Besides, extension of the LSGT algorithm for the learning problem over the hybrid data, i.e., the MUST algorithm, is investigated in Sec. 5. Numerical results are given in Sec.…”
Section: Contribution (mentioning)
confidence: 99%
“…The proposed LSGT algorithm is presented in Algorithm 1. Compared to the vanilla stochastic GT method in (5), in the proposed LSGT algorithm, each agent n executes E consecutive steps of SGD within each communication round. In particular, in each step q ∈ [E], agent n performs gradient descent along the direction v_n^{r,q−1} as in (7a).…”
Section: Proposed LSGT Algorithm (mentioning)
confidence: 99%
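The quoted passage describes the local phase of LSGT: E consecutive SGD steps per communication round, each moving along a tracked direction v_n^{r,q−1}. A minimal sketch of such a local round is below. The function name, the toy quadratic loss, and the specific tracking correction are assumptions for illustration only; the cited paper's updates (5) and (7a) are not reproduced in the excerpt.

```python
import numpy as np

def lsgt_local_round(x, v, stoch_grad, num_local_steps, lr, rng):
    """One agent's local phase within a communication round (hypothetical sketch).

    Mirrors the quoted description: run `num_local_steps` (E) consecutive SGD steps,
    each descending along the tracked direction v. The correction of v below (add the
    change in the local stochastic gradient) is the usual gradient-tracking recursion
    and is an assumption, not necessarily the paper's exact rule (7a).
    """
    g_prev = stoch_grad(x, rng)            # stochastic gradient at the current iterate
    for _ in range(num_local_steps):
        x = x - lr * v                     # descent along the tracked direction
        g_new = stoch_grad(x, rng)         # fresh stochastic gradient at the new iterate
        v = v + (g_new - g_prev)           # assumed gradient-tracking correction
        g_prev = g_new
    return x, v                            # exchanged / mixed with neighbors afterwards


# Toy usage: one agent with local loss f_n(x) = 0.5 * ||x - c||^2 and noisy gradients.
rng = np.random.default_rng(0)
c = np.array([1.0, -2.0])
noisy_grad = lambda x, rng: (x - c) + 0.01 * rng.standard_normal(x.shape)
x0 = np.zeros(2)
v0 = noisy_grad(x0, rng)                   # v initialized to a local gradient estimate
x1, v1 = lsgt_local_round(x0, v0, noisy_grad, num_local_steps=5, lr=0.1, rng=rng)
```

In the full method, the returned pair (x, v) would then be averaged or mixed with neighbors' values over the network before the next local round; that communication step is omitted here.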