2020
DOI: 10.1109/tsp.2020.2984895
Network Dissensus via Distributed ADMM

Cited by 5 publications (3 citation statements) | References 69 publications
“…ADMM is a popular technique for solving convex optimization problems in machine learning and deep learning, making large-scale optimization possible [26]. Recent works also demonstrate that, under certain conditions, ADMM is guaranteed to converge for non-convex problems [27]. Specifically, ADMM can separate the variables and decompose the problem into two subproblems.…”
Section: Preliminary: ADMM (mentioning)
Confidence: 99%
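The two-subproblem decomposition described in this excerpt is the standard ADMM splitting: for a problem of the form min f(x) + g(z) subject to Ax + Bz = c, each iteration alternately minimizes the augmented Lagrangian over x and over z, then takes a dual ascent step. A sketch of the textbook updates follows; the notation is assumed here for illustration, not taken from the cited papers:

```latex
% Textbook ADMM iteration for  min_{x,z} f(x) + g(z)  s.t.  Ax + Bz = c,
% with augmented Lagrangian
%   L_rho(x,z,y) = f(x) + g(z) + y^T (Ax + Bz - c) + (rho/2) ||Ax + Bz - c||^2.
\begin{aligned}
  x^{k+1} &= \operatorname*{arg\,min}_{x} \; L_\rho\bigl(x,\, z^{k},\, y^{k}\bigr)
      && \text{(first subproblem: update } x\text{)} \\
  z^{k+1} &= \operatorname*{arg\,min}_{z} \; L_\rho\bigl(x^{k+1},\, z,\, y^{k}\bigr)
      && \text{(second subproblem: update } z\text{)} \\
  y^{k+1} &= y^{k} + \rho\,\bigl(A x^{k+1} + B z^{k+1} - c\bigr)
      && \text{(dual ascent step)}
\end{aligned}
```

Because f and g are only touched through their own subproblems, each can be handled by a separate solver or node, which is what makes the method attractive for the distributed settings discussed below.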
“…To solve this massive scalability challenge while addressing privacy, latency, reliability, and bandwidth efficiency, distributed learning frameworks [19]–[23], e.g., federated learning (FL) [24]–[26] and MapReduce [44], are needed; consequently, intelligence must be pushed to the network edge in future communication systems, using optimization algorithms such as the alternating direction method of multipliers (ADMM) [27], [28] and distributed gradient descent [29]. In these frameworks, communication units/devices/nodes can collaboratively build a shared learning model by training on their locally collected data.…”
Section: Introduction (mentioning)
Confidence: 99%
“…Similarly, to increase robustness, ADMM is widely considered for large-scale distributed learning. Likewise, distributed gradient descent methods have been studied for various potential applications [27], [28].…”
Section: Introduction (mentioning)
Confidence: 99%
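Since both excerpts name distributed gradient descent as the companion approach to ADMM, here is a minimal, self-contained sketch of decentralized gradient descent (DGD) on a ring network with local least-squares losses. The topology, Metropolis mixing weights, step size, and loss functions are illustrative assumptions, not details from the cited works:

```python
# Minimal DGD sketch: each node mixes its iterate with its neighbors'
# and then takes a gradient step on its own private loss. All problem
# data below is synthetic and purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, dim = 5, 3

# Node i privately holds data defining f_i(x) = 0.5 * ||A_i x - b_i||^2;
# the network goal is to minimize the sum of all f_i without sharing raw data.
A = [rng.standard_normal((10, dim)) for _ in range(n_nodes)]
b = [rng.standard_normal(10) for _ in range(n_nodes)]

# Doubly stochastic mixing matrix for a ring: every node averages
# equally with itself and its two neighbors (Metropolis-style weights).
W = np.zeros((n_nodes, n_nodes))
for i in range(n_nodes):
    W[i, i] = 1 / 3
    W[i, (i - 1) % n_nodes] = 1 / 3
    W[i, (i + 1) % n_nodes] = 1 / 3

x = np.zeros((n_nodes, dim))  # one local iterate per node
step = 0.01
for _ in range(500):
    grads = np.array([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(n_nodes)])
    # DGD update: neighbor averaging followed by a local gradient step.
    x = W @ x - step * grads

print("max disagreement across nodes:", np.abs(x - x.mean(axis=0)).max())
```

With a constant step size, DGD of this form converges only to a neighborhood of the optimum rather than to exact consensus, which is one standard motivation for the ADMM-based distributed schemes these excerpts discuss.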