Proceedings of the 1st Workshop on Machine Learning and Systems 2021
DOI: 10.1145/3437984.3458839

Towards Mitigating Device Heterogeneity in Federated Learning via Adaptive Model Quantization

Abstract: Federated learning (FL) is increasingly becoming the norm for training models over distributed and private datasets. Major service providers rely on FL to improve services such as text auto-completion, virtual keyboards, and item recommendations. Nonetheless, training models with FL in practice requires a significant amount of time (days or even weeks) because FL tasks execute in highly heterogeneous environments where devices only have widespread yet limited computing capabilities and network connectivity conditions…
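The paper's core idea, per the abstract and the citation statements below, is to assign lower-precision (quantized) model variants to weaker devices so that all clients finish a training round in roughly the same time. As a rough illustration of that idea, not the paper's actual AQFL algorithm, the sketch below picks a per-client bit-width so a slow device's estimated local-computation time stays within a full-precision baseline; the helper name `choose_precision` and the linear bit-width cost model are assumptions made here for clarity.

```python
# Hedged sketch of per-client precision assignment (hypothetical helper;
# not the paper's actual AQFL implementation).

def choose_precision(client_speed: float, baseline_speed: float = 1.0,
                     levels: tuple = (32, 16, 8)) -> int:
    """Pick the highest bit-width whose estimated local-training time on
    this client stays within the time a baseline-speed client needs at
    full (32-bit) precision.

    Assumes compute cost scales linearly with bit-width, which is a
    deliberate simplification for illustration.
    """
    budget = 32 / baseline_speed            # full-precision reference time
    for bits in levels:                     # try higher precision first
        if bits / client_speed <= budget:
            return bits
    return levels[-1]                       # weakest devices get the lowest

# Example: a device at 30% of baseline speed is assigned an 8-bit model,
# while a baseline-speed device keeps full precision.
print(choose_precision(0.3))  # -> 8
print(choose_precision(1.0))  # -> 32
```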

Cited by 33 publications (13 citation statements: 0 supporting, 13 mentioning, 0 contrasting)
References 13 publications
“…We expect that the heterogeneity impact might be reduced by means of proportional load balancing during the selection phase [37], computation offloading techniques at the edge [50], adaptive compression of model for computation [3] and communication [2,4] or asynchronous mode of model updates [30].…”
Section: Discussion (mentioning)
confidence: 99%
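Among the remedies this statement lists is adaptive compression of the model for communication. A minimal sketch of one such scheme, unbiased stochastic quantization of a model update in the spirit of QSGD-style compressors, follows; the function names and the particular level scheme are illustrative assumptions, not the cited papers' exact methods.

```python
import numpy as np

def stochastic_quantize(v: np.ndarray, num_levels: int = 4):
    """Quantize each coordinate of an update vector to one of
    `num_levels` evenly spaced magnitudes in [0, max|v|], rounding
    up or down at random so the result is unbiased in expectation."""
    scale = float(np.max(np.abs(v)))
    if scale == 0.0:
        return np.sign(v), np.zeros(v.shape, dtype=np.int8), 1.0
    normalized = np.abs(v) / scale * (num_levels - 1)
    lower = np.floor(normalized)
    round_up = np.random.rand(*v.shape) < (normalized - lower)
    return np.sign(v), (lower + round_up).astype(np.int8), scale

def dequantize(sign, levels, scale, num_levels: int = 4):
    """Reconstruct an (unbiased) estimate of the original vector."""
    return sign * levels / (num_levels - 1) * scale

# Example: each coordinate is now representable in a few bits plus one
# shared float scale, shrinking what a client uploads per round.
v = np.array([0.8, -0.1, 0.05, 0.0])
sign, levels, scale = stochastic_quantize(v)
print(dequantize(sign, levels, scale))  # approximates v in expectation
```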
“…In this context, device heterogeneity results in performance degradation due to stragglers (i.e., slow workers) who slow down the training process [48], [49]. Several works tried to address this problem via the system and algorithmic solutions [20], [39], [46], [48]- [50]. In FL settings, the heterogeneity is sourced from other system artifacts and is not limited to the heterogeneity in device capabilities.…”
Section: Related Work (mentioning)
confidence: 99%
“…Additionally, the effectiveness of SFL with privacy and resilience safeguards is assessed in more extensive experimental situations. • Heterogeneity: FL faces a considerable challenge when operating in various devices and data of the whole system [71][72][73]. Indeed, increasingly intelligent devices can connect to train the FL system.…”
Section: Challenges of Federated Learning (mentioning)
confidence: 99%
“…Extensive experiments on MNIST, FashionMNIST, MedMNIST, and CIFAR-10 demonstrate that their suggested approaches can achieve satisfactory performance with guaranteed convergence and efficiently use all the resources available for training across different devices with lower communication cost than its homogeneous counterpart. Abdelmoniem, A.M., and Canini, M. [73] also concentrate on reducing the degree of device heterogeneity by suggesting AQFL, a straightforward and useful method that uses adaptive model quantization to homogenize the customers' computational resources. They assess AQFL using five standard FL metrics.…”
Section: Challenges of Federated Learning (mentioning)
confidence: 99%