2021
DOI: 10.48550/arxiv.2104.12416
Preprint

Communication-Efficient Federated Learning with Dual-Side Low-Rank Compression

Abstract: Federated learning (FL) is a promising and powerful approach for training deep learning models without sharing the raw data of clients. During the training process of FL, the central server and distributed clients need to exchange a vast amount of model information periodically. To address the challenge of communication-intensive training, we propose a new training method, referred to as federated learning with dual-side low-rank compression (FedDLR), where the deep learning model is compressed via low-rank approximation…
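
The compression idea named in the title and abstract can be illustrated with a short sketch: factor a weight matrix with a truncated SVD and transmit the two thin factors instead of the dense matrix, so each direction of the server-client exchange carries r·(m+n) values rather than m·n. This is a minimal illustration under our own assumptions (the layer sizes, rank r, and helper names here are made up), not the exact FedDLR procedure.

```python
import numpy as np

def low_rank_factors(W: np.ndarray, r: int):
    """Truncated-SVD factorization W ~= U_r @ V_r with rank r.

    Instead of sending the full m x n matrix W, a side sends the two
    factors U_r (m x r) and V_r (r x n), i.e. r * (m + n) values.
    """
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    U_r = U[:, :r] * s[:r]      # absorb singular values into the left factor
    V_r = Vt[:r, :]
    return U_r, V_r

def recover(U_r: np.ndarray, V_r: np.ndarray) -> np.ndarray:
    """Receiver rebuilds the (approximate) dense weight matrix."""
    return U_r @ V_r

# Toy example: one fully connected layer sent from server to client.
rng = np.random.default_rng(0)
W_server = rng.standard_normal((512, 256))

r = 16
U_r, V_r = low_rank_factors(W_server, r)   # compressed downlink payload
W_client = recover(U_r, V_r)               # client-side reconstruction

full = W_server.size
compressed = U_r.size + V_r.size
print(f"payload: {compressed} vs {full} values "
      f"({compressed / full:.1%} of the dense layer)")
print("relative error:",
      np.linalg.norm(W_server - W_client) / np.linalg.norm(W_server))
```

Per the citation statements below, the "dual-side" saving comes from applying the same kind of compression in both directions between the server and the clients.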

Cited by 3 publications (5 citation statements)
References 13 publications
“…The server then averages all received sub-sampled updates to get an estimate of the global model parameters. Additionally, [52] used dual-side low-rank compression to reduce the size of the models in both directions between the server and the nodes. Finally, [36] used a layer-based parameter selection in order to transfer only the important parameters of each model's layer.…”
Section: Updates Compression (mentioning)
confidence: 99%
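
The first mechanism quoted above, where the server "averages all received sub-sampled updates to get an estimate of the global model parameters", can be sketched as follows. The sketch is generic rather than the scheme of any specific cited paper; the dimensions, keep fraction, and per-coordinate averaging rule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients, keep_frac = 1000, 5, 0.1   # illustrative sizes

# Each client computes a dense local update but transmits only a random
# subset of its coordinates (the untransmitted entries are treated as zero).
updates, masks = [], []
for _ in range(n_clients):
    u = rng.standard_normal(dim)
    m = rng.random(dim) < keep_frac
    updates.append(u * m)
    masks.append(m)

# The server averages each coordinate over the clients that actually
# reported it, yielding an estimate of the global update.
reported = np.maximum(np.sum(masks, axis=0), 1)
global_update = np.sum(updates, axis=0) / reported
print("coordinates with at least one report:",
      int((np.sum(masks, axis=0) > 0).sum()))
```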
“…FL enables a multitude of participants to construct a joint model without sharing their private training data [4,22,23,25]. Some recent work focuses on compressing the parameters or transmitting a partial network for efficient transmission [2,20,28,30]. However, they do not reduce the computation overhead on the edge.…”
Section: Related Work (mentioning)
confidence: 99%
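
As a companion to the "compressing the parameters or transmitting a partial network" remark above, here is a hypothetical layer-based selection sketch in the spirit of the layer-wise scheme mentioned earlier ([36]): rank layers by the size-normalized norm of their update and send only the top-k layers in a given round. The layer names, scoring rule, and k are assumptions for illustration, not the method of any cited work.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical per-layer updates of a small client model.
layer_updates = {
    "conv1": rng.standard_normal((32, 27)),
    "conv2": rng.standard_normal((64, 288)),
    "fc1":   rng.standard_normal((128, 1024)),
    "fc2":   rng.standard_normal((10, 128)),
}

# Rank layers by the norm of their update relative to their size and
# transmit only the top-k "important" layers this round.
k = 2
scores = {name: np.linalg.norm(u) / u.size for name, u in layer_updates.items()}
selected = sorted(scores, key=scores.get, reverse=True)[:k]

payload = {name: layer_updates[name] for name in selected}
sent = sum(u.size for u in payload.values())
total = sum(u.size for u in layer_updates.values())
print(f"sending layers {selected}: {sent}/{total} parameters")
```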
“…Furthermore, directly training the low-rank model from scratch [12] results in a performance drop. FedDLR [28] factorizes the server model and recovers the low-rank model on the clients, reducing the communication cost while increasing the local computation cost. Pufferfish [32] improves the performance of the low-rank model by training a hybrid network with a warm-up phase.…”
Section: Related Work (mentioning)
confidence: 99%
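
The communication/computation trade-off this passage attributes to FedDLR can be made concrete with a quick back-of-the-envelope calculation for a single m x n layer exchanged as rank-r factors; the numbers below are illustrative and not taken from FedDLR or Pufferfish.

```python
# Back-of-the-envelope trade-off for one m x n layer sent as rank-r factors.
# The sizes are illustrative, not taken from the cited papers.
m, n, r = 512, 256, 16

dense_params    = m * n          # values sent without compression
factored_params = r * (m + n)    # values sent as U (m x r) and V (r x n)
recovery_flops  = 2 * m * n * r  # multiply-adds for the client to form U @ V

print(f"communication: {factored_params} vs {dense_params} "
      f"({factored_params / dense_params:.1%})")
print(f"extra client compute per layer: ~{recovery_flops:,} FLOPs")
```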
“…In this regard, an important task in SWIPT-based FL is to optimize the portion of harvested energy allocated to communication with the edge server and to local computation. In addition, the integration of SWIPT and conventional energy-saving techniques in FL, e.g., model compression, adaptive transmission, and hierarchical FL [115], needs further investigation.…”
Section: Mobile Edge Computing and Federated Learning (mentioning)
confidence: 99%
“…Conventional design methodologies based on mathematical optimization may not be directly applicable to large-scale SWIPT networks as the complexity of the optimal designs often scales exponentially with the network size. As a remedy, ML is a promising tool as it can be used to optimize large-scale systems without relying on analytical models [115], [116]. Recently, several papers have exploited ML for solving communication-centric resource allocation design problems in conventional wireless networks [117]-[119].…”
Section: E. Machine Learning-based Design (mentioning)
confidence: 99%