IEEE INFOCOM 2022 - IEEE Conference on Computer Communications
DOI: 10.1109/infocom48880.2022.9796733
Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks

Abstract: This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural network (SNN) architectures. FL preserves data privacy by exchanging the locally trained models of mobile devices. By adopting SNNs as local models, FL can flexibly cope with the time-varying energy capacities of mobile devices. Combining FL and SNNs is however non-trivial, particularly under wireless connections with time-varying channel conditions. Furthermore, existing multi-width SNN trai…
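For readers unfamiliar with slimmable networks, here is a minimal PyTorch sketch of the width-slicing idea the abstract refers to: a single set of weights whose leading slice serves the narrower configurations, so every width shares a prefix of the same parameters. The `SlimmableLinear` name, the layer sizes, and the width values are illustrative assumptions, not details from the paper.

```python
import torch
import torch.nn as nn

class SlimmableLinear(nn.Linear):
    """Linear layer whose active width can be switched at run time.

    The full weight matrix is allocated once; narrower configurations
    simply use the leading slice of its rows/columns, so every width
    shares a prefix of the same parameters (illustrative sketch only).
    """

    def __init__(self, max_in, max_out):
        super().__init__(max_in, max_out)
        self.width_mult = 1.0  # fraction of output units currently active

    def forward(self, x):
        out_dim = int(self.out_features * self.width_mult)
        in_dim = x.shape[-1]  # input already sliced by the previous layer
        return nn.functional.linear(x, self.weight[:out_dim, :in_dim],
                                    self.bias[:out_dim])

# Example: the same parameters evaluated at two widths.
layer = SlimmableLinear(max_in=8, max_out=16)
x = torch.randn(4, 8)

layer.width_mult = 1.0
full = layer(x)   # uses all 16 output units

layer.width_mult = 0.5
half = layer(x)   # uses only the first 8 output units
assert torch.allclose(full[:, :8], half)  # shared-prefix weights agree
```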

Cited by 10 publications (4 citation statements)
References 12 publications
“…Besides, FedMask proposes to train a personalized mask for each device to improve the test accuracy on the local dataset [31]. Recently, model structure pruning has enabled multiple devices with different model architectures to train a shared global model [32], [33]. Such methods can reduce the cost of local training, but how to customize optimal training strategies (e.g., gradient compression and model pruning policies) for different learning scenarios is still unknown.…”
Section: Related Work
Mentioning confidence: 99%
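A hedged sketch of the personalized-mask idea attributed to FedMask in the statement above: the shared weights stay frozen while each device trains only a binary mask over them. The `MaskedLinear` class, the thresholding rule, and the straight-through trick are illustrative assumptions, not FedMask's exact procedure.

```python
import torch
import torch.nn as nn

class MaskedLinear(nn.Module):
    """Shared frozen weights modulated by a device-local learnable mask.

    Only `mask_scores` is trained (and would be communicated) per device;
    the shared weight tensor stays fixed. Details are illustrative.
    """

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_dim, in_dim) * 0.01,
                                   requires_grad=False)  # shared, frozen
        self.mask_scores = nn.Parameter(torch.zeros(out_dim, in_dim))

    def forward(self, x):
        # Hard 0/1 mask in the forward pass; the sigmoid term acts as a
        # straight-through estimator so mask_scores still gets gradients.
        soft = torch.sigmoid(self.mask_scores)
        mask = (self.mask_scores > 0).float() + soft - soft.detach()
        return x @ (self.weight * mask).t()

# Example: only the mask is a trainable parameter.
layer = MaskedLinear(8, 4)
print([n for n, p in layer.named_parameters() if p.requires_grad])
# -> ['mask_scores']
```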
“…However, local and global models are still restricted to the same architecture. SlimFL [30] integrates slimmable neural network (SNN) architectures [31] into FL, adapting the widths of local neural networks based on resource limitations. In [32], FjORD leverages Ordered Dropout and a self-distillation method to determine the model widths.…”
Section: B. Heterogeneous Models
Mentioning confidence: 99%
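To make the width-adaptation idea in the statement above concrete, below is a small sketch of how updates from clients training at different widths can be combined: each entry of the full-size weight tensor is averaged over every client whose slice covers it, so the shared prefix is averaged over all clients and the outer slice only over full-width ones. This is a generic multi-width aggregation pattern under stated assumptions, not the superposition-coding scheme of the cited paper; `width_aware_average` is a hypothetical helper.

```python
import torch

def width_aware_average(updates):
    """Average 2-D weight tensors trained at different widths.

    Narrow models are assumed to be leading slices of the full one,
    so each entry of the full-size result is averaged over every
    client whose slice covers it. Hypothetical helper, illustration only.
    """
    full_shape = max(u.shape for u in updates)
    acc = torch.zeros(full_shape)
    cnt = torch.zeros(full_shape)
    for u in updates:
        r, c = u.shape
        acc[:r, :c] += u
        cnt[:r, :c] += 1
    return acc / cnt.clamp(min=1)

# Two clients at full width (4x4), one at half width (2x4).
ups = [torch.ones(4, 4), torch.ones(4, 4) * 3, torch.ones(2, 4) * 5]
avg = width_aware_average(ups)
print(avg[:2])  # prefix rows averaged over 3 clients -> 3.0
print(avg[2:])  # outer rows averaged over 2 clients  -> 2.0
```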
“…One major line of prior research on self-driving and autonomous driving is rule-based driving policy optimization. However, such policies struggle to cope with time-varying environments, i.e., extremely large observation and action spaces that introduce high computational complexity for training [11]–[16]. Recently, deep reinforcement learning (DRL) based algorithms have been proposed that utilize powerful function approximators such as neural networks, allowing the vehicular supervisor to train robust driving policies [17]–[23].…”
Section: Introduction
Mentioning confidence: 99%