Fast-Convergent Federated Learning (2021)
DOI: 10.1109/jsac.2020.3036952
Cited by 164 publications (67 citation statements). References 11 publications.
“…Thus, if this issue is not considered when updating the global model, the updated global model will be biased toward the local models of devices that are closer to the server. Existing works [35,36,37,38,39,40] assume that transmissions are always successful, which does not hold in reality.…”
Section: Problem
confidence: 99%
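The bias described above can be illustrated with a toy simulation. This is a hypothetical sketch (device values and success probabilities are assumptions, not from the cited papers): if uploads from distant devices fail more often and the server averages only the updates it actually receives, the aggregate drifts toward the near devices' local optima.

```python
import random

random.seed(0)

# Each device holds a scalar "local optimum" (a stand-in for its local model)
# and an uplink success probability that decreases with distance from the server.
devices = [
    {"local_opt": 1.0, "p_success": 0.95},  # near the server
    {"local_opt": 1.0, "p_success": 0.95},  # near the server
    {"local_opt": 5.0, "p_success": 0.30},  # far from the server
    {"local_opt": 5.0, "p_success": 0.30},  # far from the server
]

def aggregate_once():
    """Average only the successfully received local models."""
    received = [d["local_opt"] for d in devices if random.random() < d["p_success"]]
    return sum(received) / len(received) if received else None

rounds = [m for m in (aggregate_once() for _ in range(10_000)) if m is not None]
avg_global = sum(rounds) / len(rounds)

true_mean = sum(d["local_opt"] for d in devices) / len(devices)  # 3.0
print(f"unbiased mean: {true_mean:.2f}, mean under lossy uplinks: {avg_global:.2f}")
```

The simulated aggregate lands well below the unbiased mean of 3.0, since the far devices' contributions are frequently dropped.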
“…However, it was assumed that all of the devices participate in the aggregation step, which is not possible when the number of available resource blocks is limited. To tackle this problem, in [35,36,37,38,39,40], at the beginning of each round the server samples a subset of devices and allocates the available resource blocks to them. After the local update steps, the BS aggregates the local models of the chosen (scheduled) devices and updates the global model.…”
Section: Related Work
confidence: 99%
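The per-round scheduling loop described above can be sketched as follows. This is a minimal illustration under assumed names and toy local objectives, not the exact schedulers of [35]–[40]: the server samples as many devices as there are uplink resource blocks, each scheduled device runs a few local SGD steps, and the BS averages only the scheduled devices' models.

```python
import random

random.seed(1)

NUM_DEVICES = 10
NUM_RESOURCE_BLOCKS = 3  # only 3 devices can upload per round

def local_update(global_model, device_id, steps=5, lr=0.1):
    """Toy local training: pull the model toward this device's local optimum."""
    local_opt = float(device_id)  # stand-in for heterogeneous local data
    model = global_model
    for _ in range(steps):
        model -= lr * (model - local_opt)  # gradient step on 0.5*(model - local_opt)^2
    return model

global_model = 0.0
for rnd in range(20):
    # Sample a subset of devices matching the available resource blocks.
    scheduled = random.sample(range(NUM_DEVICES), NUM_RESOURCE_BLOCKS)
    local_models = [local_update(global_model, d) for d in scheduled]
    # Aggregate only the scheduled devices' local models.
    global_model = sum(local_models) / len(local_models)

print(f"global model after 20 rounds: {global_model:.2f}")
```

Because each round aggregates a different random subset, the global model hovers around the population mean of the local optima rather than converging to any single device's optimum.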
“…Beyond the systems heterogeneity of FL clients, statistical heterogeneity, i.e., the divergence of client model updates, is also a concern in federated networks. Some recent FL works (Dinh et al, 2019; Wang et al, 2019; Chen et al, 2020a; Guo et al, 2020; Nguyen et al, 2020) analyze how to guarantee convergence both theoretically and empirically in an FL setting. The main problem is that they assume all FL clients have sufficient resources to perform a predefined, uniform number of iterations, and that all devices participate in every training round.…”
Section: Background and Related Work
confidence: 99%
“…Reference [8] broadens the previous work to include non-IID data and allows the server to act as one of the agents; the server collects some data from the selected agents and runs SGD on it. In [9], a non-uniform sampling scheme for agents is considered; the sampling probability distribution is calculated by maximizing the inner product between the global and local gradients. They provide an approximate solution, since the exact calculation of the sampling probabilities is non-trivial.…”
Section: Introduction and Related Materials
confidence: 99%
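The sampling idea attributed to [9] can be sketched in a hedged form. This is an assumed simplification, not the paper's exact scheme (which, as the statement notes, is non-trivial to compute): score each device by the inner product of its local gradient with the global gradient, floor the scores at a small epsilon so every device keeps a nonzero probability, and normalize into a sampling distribution.

```python
import random

random.seed(0)

def inner(u, v):
    """Inner product of two gradient vectors."""
    return sum(a * b for a, b in zip(u, v))

# Hypothetical gradients for illustration only.
global_grad = [1.0, 0.0]
local_grads = [
    [0.9, 0.1],    # well aligned with the global gradient
    [0.5, 0.5],    # partially aligned
    [-0.8, 0.2],   # pointing the other way
]

EPS = 1e-6  # small floor: keeps every device's probability nonzero
scores = [max(inner(g, global_grad), EPS) for g in local_grads]
total = sum(scores)
probs = [s / total for s in scores]

print("sampling probabilities:", [round(p, 3) for p in probs])

# Draw one device to schedule this round, weighted by alignment.
sampled = random.choices(range(len(local_grads)), weights=probs, k=1)[0]
```

Devices whose local gradients agree with the global descent direction are sampled more often, which is the intuition behind maximizing the global-local gradient inner product.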