2020
DOI: 10.3390/electronics9030440

BACombo—Bandwidth-Aware Decentralized Federated Learning

Abstract: The emerging concern about data privacy and security has motivated the proposal of federated learning. Federated learning allows computing nodes to synchronize only the locally trained models instead of their original data in distributed training. The conventional federated learning architecture, inherited from the parameter-server design, relies on highly centralized topologies and large node-to-server bandwidths. However, in real-world federated learning scenarios, the network capacities between nodes are high…
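As a rough, illustrative aside (not from the paper): the abstract's point about node-to-server bandwidth can be made concrete by estimating per-round synchronization time as model size divided by link bandwidth, where the slowest uplink bounds the round. All numbers below are made up.

```python
# Back-of-the-envelope estimate (illustrative numbers only): in a centralized
# topology every node pushes its full model to the server each round, so the
# slowest node-to-server link bounds the synchronization time.

MODEL_SIZE_MBIT = 400.0  # e.g. a ~50 MB model, expressed in megabits

def round_sync_time(node_bandwidths_mbps):
    """Time for the slowest node to upload its model, in seconds."""
    return MODEL_SIZE_MBIT / min(node_bandwidths_mbps)

# Three nodes with uneven uplinks: the 5 Mbit/s straggler dominates.
print(round_sync_time([100.0, 40.0, 5.0]))  # 80.0 seconds
```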

Cited by 36 publications (22 citation statements)
References 12 publications
“…However, their design also requires a fully connected network topology and a prerequisite of randomly distributed data among the workers. In [27], the authors further extend Combo into a bandwidth-aware solution, BACombo, which greedily chooses bandwidth-sufficient workers to reduce the transmission delay. In [28], the authors design an experimental study comparing federated learning with gossip learning, and find that gossip learning is comparable to federated learning in their results.…”
Section: Decentralized Federated Learning Implementation
confidence: 99%
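A minimal sketch of the greedy, bandwidth-aware worker selection that the statement above attributes to BACombo: rank candidate peers by estimated link bandwidth and pull from the fastest few, since transfer delay is roughly size over bandwidth. The function and variable names here are hypothetical, not taken from the paper.

```python
# Hypothetical sketch of greedy bandwidth-aware worker selection.
# Transmission delay for a segment is roughly size / bandwidth, so pulling
# from the highest-bandwidth peers minimizes the expected transfer time.

def select_workers(bandwidths, k):
    """Pick the k peers with the highest estimated bandwidth (Mbit/s)."""
    return sorted(bandwidths, key=bandwidths.get, reverse=True)[:k]

def transfer_delay(segment_mbit, bandwidth_mbps):
    """Estimated time (s) to pull one model segment over one link."""
    return segment_mbit / bandwidth_mbps

links = {"worker_a": 95.0, "worker_b": 12.5, "worker_c": 60.0}
chosen = select_workers(links, k=2)  # ['worker_a', 'worker_c']
delays = [transfer_delay(100.0, links[w]) for w in chosen]
```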
“…However, it suffers from slow convergence when the training data are not identically distributed [23], and the centralised architecture causes network congestion at the global server. To reduce this congestion, decentralised methods [11, 12, 24] have been proposed, in which every device is connected over the WAN and there is no global server to aggregate the updates. The former work exchanges partial gradients with all the other devices, while the method in [12] exchanges partial model weights with a subset of the devices.…”
Section: Related Work
confidence: 99%
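The exchange pattern described above, partial model weights swapped with a subset of peers rather than full models pushed to a server, might look like the following sketch; the segmentation scheme and peer choice are illustrative assumptions, not the cited papers' implementation.

```python
import numpy as np

# Illustrative sketch of a decentralised, segmented exchange: each device
# splits its flat weight vector into segments and averages each segment with
# copies pulled from a small subset of peers (no global server involved).

def split_model(weights, num_segments):
    # Partition the weight vector into roughly equal contiguous segments.
    return np.array_split(weights, num_segments)

def aggregate_segment(local_seg, peer_segs):
    # Simple averaging of one segment across local and peer copies.
    return np.mean([local_seg] + peer_segs, axis=0)

weights = np.arange(8, dtype=float)
segments = split_model(weights, num_segments=4)
# Suppose segment 0 was pulled from two peers this round:
peer_copies = [np.zeros(2), np.ones(2)]
segments[0] = aggregate_segment(segments[0], peer_copies)
merged = np.concatenate(segments)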
“…Jiang et al. [24] further proposed a bandwidth-aware device selection method to reduce the communication latency. Despite their improvements in communication efficiency, these methods are based on local updating (SGD) and simple global averaging, which limits the convergence rate of SGD.…”
Section: Related Work
confidence: 99%
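The "local updating (SGD) and simple global averaging" pattern the statement refers to can be sketched as below: each device takes a few plain SGD steps on its own data, then the resulting models are averaged without weighting. This is an illustrative sketch, not code from any of the cited works.

```python
import numpy as np

# Illustrative sketch of local SGD followed by simple (unweighted) global
# averaging, the pattern the statement says limits the convergence rate.

def local_sgd(w, grads, lr=0.1, steps=5):
    # Each device takes a few plain SGD steps on its own data.
    for g in grads[:steps]:
        w = w - lr * g
    return w

def simple_average(models):
    # Plain averaging, ignoring how much data each device holds.
    return np.mean(models, axis=0)

w0 = np.zeros(3)
device_grads = [np.random.randn(5, 3) for _ in range(4)]  # 4 devices
local_models = [local_sgd(w0.copy(), g) for g in device_grads]
w1 = simple_average(local_models)
```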
“…The recent development of machine learning is evolving towards distributed environments. One promising solution is so-called federated learning, which trains on user data on the device and exchanges only the trained models and their update information [5][6][7]. In federated learning, the central server computes only the aggregated average of the model updates gathered from each device.…”
Section: Introduction
confidence: 99%
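A minimal sketch of that server-side step, assuming a FedAvg-style scheme: the server only averages the model updates received from devices (here weighted by local sample counts) and never touches raw data. Names are illustrative.

```python
import numpy as np

# Illustrative server-side aggregation: the server never sees raw data,
# it only averages the model updates received from devices, weighted by
# how many local samples each device trained on (FedAvg-style).

def aggregate(updates, sample_counts):
    weights = np.asarray(sample_counts, dtype=float)
    weights /= weights.sum()
    # Weighted average of the per-device model vectors.
    return sum(w * u for w, u in zip(weights, updates))

updates = [np.array([1.0, 2.0]), np.array([3.0, 4.0])]
global_model = aggregate(updates, sample_counts=[100, 300])
# -> 0.25*[1, 2] + 0.75*[3, 4] = [2.5, 3.5]
```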