2019
DOI: 10.48550/arxiv.1912.07902
Preprint

Asynchronous Federated Learning with Differential Privacy for Edge Intelligence

Yanan Li,
Shusen Yang,
Xuebin Ren
et al.

Abstract: Federated learning has emerged as a promising approach to paving the last mile of artificial intelligence, owing to its great potential for solving the data isolation problem in large-scale machine learning. In particular, considering the heterogeneity of practical edge computing systems, federated learning based on asynchronous edge-cloud collaboration can further improve learning efficiency by significantly reducing the straggler effect. Despite no raw data being shared, the open architecture and exte…
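The setup the abstract describes — asynchronous updates discounted by staleness and perturbed for differential privacy — can be illustrated with a minimal sketch. The staleness-discounting rule, clipping bound C, and noise scale sigma below are illustrative assumptions, not the paper's actual algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def clip(update, C):
    """Scale an update so its L2 norm is at most C (bounds per-client sensitivity)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, C / norm) if norm > 0 else update

def async_dp_server_step(global_model, client_update, staleness,
                         lr=0.1, C=1.0, sigma=0.5):
    """One asynchronous server step: clip the (possibly stale) client
    update, add Gaussian noise for differential privacy, and mix it in
    with a weight that decays with staleness."""
    alpha = lr / (1.0 + staleness)          # discount stale updates
    noisy = clip(client_update, C) + rng.normal(0.0, sigma * C,
                                                size=client_update.shape)
    return global_model + alpha * noisy
```

Each client pushes its update as soon as it finishes local training, so fast devices are never blocked by stragglers; the server applies updates one at a time with a rule of this shape.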

Cited by 6 publications (8 citation statements) | References 35 publications
“…In [39], Mitliagkas et al. demonstrated that stale gradients can have the same effect as momentum when the staleness follows a geometric distribution. In fact, consistently stale gradients can boost convergence, analogous to the effect of a large momentum [39], [28]. However, when the model is close to the optimal solution, this momentum may cause the training process to fluctuate.…”
Section: Interplay of Non-IID Data and Staleness
confidence: 99%
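The implicit-momentum claim can be visualized with a toy simulation. This is a minimal sketch on a 1-D quadratic, not the setting of [39]; the geometric staleness parameter p and the momentum mu below are illustrative choices and are not matched by the formal equivalence derived there:

```python
import numpy as np

rng = np.random.default_rng(1)

def grad(x):
    """Gradient of the toy quadratic loss f(x) = 0.5 * x**2."""
    return x

def async_sgd(steps=200, lr=0.05, p=0.5):
    """SGD where each update uses a model snapshot that is
    tau steps old, with tau ~ Geometric(p)."""
    history = [5.0]                       # x_0
    for _ in range(steps):
        tau = rng.geometric(p) - 1        # staleness >= 0
        stale_x = history[max(0, len(history) - 1 - tau)]
        history.append(history[-1] - lr * grad(stale_x))
    return history[-1]

def momentum_sgd(steps=200, lr=0.05, mu=0.5):
    """Plain heavy-ball momentum on the same quadratic."""
    x, v = 5.0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)
        x = x + v
    return x

print(async_sgd(), momentum_sgd())   # both approach the optimum x* = 0
```

Both trajectories overshoot and settle in a qualitatively similar way, which is the intuition behind treating consistent staleness as an implicit momentum term.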
“…Since a stale model is likely farther from the optimal solution, its gradient tends to have a larger norm; hence stale gradients generally have larger norms for convex loss functions. Therefore, the selected consistent stale gradients can help the training converge faster [39], [28].…”
Section: Basic Idea of WKAFL
confidence: 99%
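The larger-norm intuition can be made precise under strong convexity; the bound below is a standard fact, stated here as supporting context rather than a claim from the cited paper. For a mu-strongly convex loss f with minimizer x*, we have grad f(x*) = 0, and applying Cauchy–Schwarz to the strong-convexity inequality gives:

```latex
% Strong convexity, with \nabla f(x^\ast) = 0, plus Cauchy--Schwarz:
\[
  \langle \nabla f(x) - \nabla f(x^\ast),\, x - x^\ast \rangle
    \;\ge\; \mu \,\lVert x - x^\ast \rVert^2
  \quad\Longrightarrow\quad
  \lVert \nabla f(x) \rVert \;\ge\; \mu \,\lVert x - x^\ast \rVert .
\]
```

A stale model, being farther from x*, therefore carries a gradient of proportionally larger norm, which is what the quoted statement exploits.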
“…The experiment is carried out on three real-world datasets, with the results demonstrating the high accuracy and efficiency of the scheme. Similarly, in [47], the convergence of AFL under differential privacy is analyzed. Based on this analysis, a multi-stage adjustable private algorithm that dynamically changes the noise size and the learning rate is proposed to optimize the trade-off between model utility and privacy.…”
Section: Differential Privacy on Heterogeneous Devices
confidence: 99%
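A multi-stage schedule of this kind might look like the following sketch. The stage boundaries, the decay of the noise multiplier, and the pairing with the learning rate are hypothetical choices for illustration, not the algorithm proposed in [47]:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stage schedule: (first_round, learning_rate, noise_multiplier).
# Early stages tolerate more noise; later stages shrink both the noise and
# the step size so the model can settle near the optimum.
STAGES = [(0, 0.10, 1.2), (100, 0.05, 0.8), (300, 0.01, 0.5)]

def stage_params(round_idx):
    """Return the (lr, sigma) pair of the stage containing round_idx."""
    lr, sigma = STAGES[0][1], STAGES[0][2]
    for start, stage_lr, stage_sigma in STAGES:
        if round_idx >= start:
            lr, sigma = stage_lr, stage_sigma
    return lr, sigma

def private_update(model, grad_update, round_idx, C=1.0):
    """Clip the update to norm C, add stage-dependent Gaussian noise,
    and apply it with the stage-dependent learning rate."""
    lr, sigma = stage_params(round_idx)
    norm = np.linalg.norm(grad_update)
    clipped = grad_update * min(1.0, C / norm) if norm > 0 else grad_update
    noisy = clipped + rng.normal(0.0, sigma * C, size=grad_update.shape)
    return model - lr * noisy
```

Shrinking the noise in later stages spends more of the privacy budget when the model is close to convergence and most sensitive to perturbation, which is one way to trade utility against privacy across stages.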
“…They boost convergence through update verification and weighted aggregation, without any theoretical analysis. Li et al. [17] aim to secure asynchronous edge-cloud collaborative federated learning with differential privacy, but they choose centralized learning and conduct their analysis under the convexity assumption.…”
Section: Differentially Private Distributed Learning
confidence: 99%