2022
DOI: 10.1109/jsac.2021.3118410
Accelerated Gradient Descent Learning Over Multiple Access Fading Channels

Cited by 18 publications (5 citation statements)
References 41 publications

“…As a result, the communication latency of such edge networks will be improved [93]. To guarantee the URLLC service, the authors in [94] have proposed an accelerated gradient-descent multiple access algorithm, which helps optimize the model accuracy and training speed.…”
Section: Federated Edge Learning (FEEL) Algorithm
confidence: 99%
“…Similar ideas have been applied to Single Input Single Output (SISO) fading channels [13], and to Multiple Input Multiple Output (MIMO) channels [14]. More recent algorithms in this area include those described in [15,16,17,18,19] and the references therein.…”
Section: A. Federated Learning and Over-the-Air Federated Learning
confidence: 99%
“…Channel communication characteristics have been studied further in [31]. In our previous work [30], [35], we have developed and analyzed gradient-based learning and accelerated learning methods without using power control or beamforming to cancel the fading effect. In [28], the authors developed a federated edge learning algorithm that schedules entries of the gradient vector based on the channel condition.…”
Section: A. Learning With OTA Computation
confidence: 99%
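
The over-the-air aggregation idea referenced in the statements above can be sketched in a few lines. The snippet below is an illustrative toy, not the exact scheme from [30], [35], or [28]: synthetic least-squares data, the fading and noise statistics, step size, and all variable names are assumptions made for the example. Each device transmits its local gradient uncorrected; the server receives the fading-weighted superposition plus noise (no power control or beamforming to cancel the fading) and uses it directly in a gradient-descent step.

# Illustrative sketch (assumed setup, not the authors' method): over-the-air
# gradient aggregation for a least-squares objective over a fading multiple
# access channel, without power control or beamforming at the transmitters.
import numpy as np

rng = np.random.default_rng(0)
num_devices, dim, local_samples = 20, 10, 50
step_size, noise_std = 0.05, 0.1

# Synthetic local data sets sharing a common ground-truth model.
w_true = rng.normal(size=dim)
data = []
for _ in range(num_devices):
    A = rng.normal(size=(local_samples, dim))
    b = A @ w_true + 0.01 * rng.normal(size=local_samples)
    data.append((A, b))

def local_gradient(w, A, b):
    """Gradient of the local least-squares loss (1/2m)||Aw - b||^2."""
    return A.T @ (A @ w - b) / len(b)

w = np.zeros(dim)
for t in range(200):
    grads = np.stack([local_gradient(w, A, b) for A, b in data])
    # Fading MAC: the server observes a superposition of the local gradients
    # scaled by real fading gains, plus additive receiver noise. The aggregate
    # is therefore a noisy, fading-distorted estimate of the gradient sum.
    h = np.abs(rng.normal(loc=1.0, scale=0.3, size=num_devices))
    y = (h[:, None] * grads).sum(axis=0) + noise_std * rng.normal(size=dim)
    # Server-side update: use the received aggregate directly, normalized by
    # the expected total fading gain (assumed known here for simplicity).
    w -= step_size * y / num_devices

print("distance to ground truth:", np.linalg.norm(w - w_true))

In this toy run the iterate converges to a neighborhood of the ground-truth model whose size is governed by the fading variance and receiver noise, which is the qualitative behavior the cited works analyze; the accelerated variants and channel-aware scheduling mentioned above modify the update rule and the set of transmitted gradient entries, respectively.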