Latency-Efficient Wireless Federated Learning With Quantization and Scheduling (2022)
DOI: 10.1109/lcomm.2022.3199490

Cited by 10 publications (1 citation statement). References 16 publications.
“…Nevertheless, FL faces notable hurdles in terms of communication efficiency as the number of participating devices and model parameters continues to grow [9], [10]. To address this issue, the majority of research efforts have been directed towards parameter compression [11]- [13], device selection [13]- [16], and power/bandwidth allocation algorithms [17]- [19] within the FL framework. Despite these endeavors, the conventional FL paradigms continue to rely on a central node for parameter aggregation, leading to heightened communication overhead and vulnerability associated with a single point of failure [20], [21].…”
Section: Introduction
confidence: 99%