2022 IEEE 38th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde53745.2022.00238
FedADMM: A Robust Federated Deep Learning Framework with Adaptivity to System Heterogeneity

Cited by 17 publications (9 citation statements)
References 17 publications
“…While some limited works, e.g. [6], [7], [19], [23], have derived convergence results for federated learning algorithms without relying on data similarity assumptions, their results are confined to specific algorithms with fixed step sizes and cannot be extended to analyze other federated algorithms. Notably, our work makes a significant contribution by expanding the results in [6], [7], which only cover (strongly) convex problems, and by generalizing the results in [23], which requires restrictive assumptions on the Lipschitz continuity of the Hessian and on the bounded 4th moment of the variance, i.e.…”
Section: Prior Work (mentioning)
confidence: 99%
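For reference, the bounded 4th-moment condition mentioned in this quote is typically stated along the following lines (a standard textbook form; the exact notation and constants used in [23] may differ):

```latex
% A typical bounded 4th-moment assumption on stochastic gradient noise:
% for each client i, model x, and sample \xi from client i's local data,
\mathbb{E}_{\xi}\!\left[\big\|\nabla f_i(x;\xi) - \nabla f_i(x)\big\|^4\right] \le \sigma^4 .
% By Jensen's inequality this implies, and is strictly stronger than,
% the usual bounded-variance assumption
% \mathbb{E}_{\xi}\!\left[\|\nabla f_i(x;\xi) - \nabla f_i(x)\|^2\right] \le \sigma^2 .
```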
“…Similarly, SCAFFOLD [16], FedSplit [17], and FedPD [18] harness variance reduction, operator splitting, and ADMM techniques respectively. FedPD was later refined into FedADMM [19] to expedite convergence.…”
Section: Introduction (mentioning)
confidence: 99%
“…For example, VRL-SGD and SCAFFOLD do not consider HLU to address the system heterogeneity, and FedProx and FedNova still suffer from convergence slowdown caused by non-i.i.d. data. Interestingly, recent findings show that primal-dual FL methods based on the alternating direction method of multipliers (ADMM) (Boyd et al. 2010; Hajinezhad et al. 2016) are inherently resilient to both data and system heterogeneity; see, e.g., FedPD (Zhang et al. 2021), FedADMM (Gong, Li, and Freris 2022), and FedDyn (Acar et al. 2021). However, their convergence relies on constant and uniform client sampling, and on the requirement that clients solve the local subproblems either to global optimality or to sufficient accuracy.…”
Section: Introduction (mentioning)
confidence: 99%
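As a rough illustration of the local step these primal-dual methods require, the sketch below shows an ADMM-style client update in which the local augmented-Lagrangian subproblem is solved only inexactly, up to a gradient-norm tolerance. All names (local_admm_update, f_grad, eps, rho) are hypothetical and not taken from FedPD, FedADMM, or FedDyn.

```python
# Minimal sketch of one inexact ADMM-style client update (hypothetical API).
import numpy as np

def local_admm_update(f_grad, x0, z, lam, rho=1.0, eps=1e-4, lr=0.1, max_iter=1000):
    """Approximately minimize the local augmented Lagrangian
        L_i(x) = f_i(x) + <lam, x - z> + (rho/2) * ||x - z||^2
    by gradient descent, stopping once ||grad L_i(x)|| <= eps
    ("sufficient accuracy"), then take the dual ascent step on lam."""
    x = x0.copy()
    for _ in range(max_iter):
        g = f_grad(x) + lam + rho * (x - z)   # gradient of L_i at x
        if np.linalg.norm(g) <= eps:          # inexact stopping rule
            break
        x -= lr * g
    lam = lam + rho * (x - z)                 # dual variable update
    return x, lam

# Toy usage: quadratic local loss f_i(x) = 0.5 * ||x - b||^2, so f_grad(x) = x - b.
b = np.array([1.0, -2.0])
x_i, lam_i = local_admm_update(lambda x: x - b, x0=np.zeros(2),
                               z=np.zeros(2), lam=np.zeros(2))
```

The server would then aggregate the (x_i, lam_i) pairs returned by the sampled clients to form the next global model z; the cited works differ precisely in how that sampling and aggregation are carried out.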
“…The Alternating Direction Method of Multipliers (ADMM) is an iterative algorithm that transforms optimization problems into an augmented Lagrangian function and updates primal and dual variables alternately to reach the optimal solution [13]. ADMM has been shown to achieve higher solution accuracy in various disciplines, such as matrix completion and separation [79], [100], compressive sensing [16], [103], and machine learning [27], [62], [108], [114], [117]. Moreover, as a primal-dual scheme, ADMM is more stable.…”
Section: Introduction (mentioning)
confidence: 99%
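To make the alternating scheme described in this quote concrete, the standard two-block ADMM of Boyd et al. [13] reads as follows (a textbook statement, not specific to any single work cited here):

```latex
% Two-block problem:  min_{x,z} f(x) + g(z)  subject to  Ax + Bz = c.
% Augmented Lagrangian with dual variable y and penalty rho > 0:
L_\rho(x, z, y) = f(x) + g(z) + y^\top (Ax + Bz - c)
                  + \tfrac{\rho}{2}\,\| Ax + Bz - c \|_2^2
% Alternating updates at iteration k: minimize over x, then over z,
% then take a gradient ascent step on the dual variable y.
x^{k+1} = \operatorname*{arg\,min}_{x} \; L_\rho(x, z^k, y^k)
z^{k+1} = \operatorname*{arg\,min}_{z} \; L_\rho(x^{k+1}, z, y^k)
y^{k+1} = y^k + \rho \,(A x^{k+1} + B z^{k+1} - c)
```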