2019 IEEE International Conference on Big Data (Big Data)
DOI: 10.1109/bigdata47090.2019.9006216

Gossip Learning: Off the Beaten Path

Abstract: The growing computational demands of model training tasks and the increased privacy awareness of consumers call for the development of new techniques in the area of machine learning. Fully decentralized approaches have been proposed, but are still in early research stages. This study analyses gossip learning, one of these state-of-the-art decentralized machine learning protocols, which promises high scalability and privacy preservation, with the goal of assessing its applicability to real-world scenarios. Previo…

Cited by 25 publications (16 citation statements: 0 supporting, 16 mentioning, 0 contrasting) | References 16 publications. Citation statements below are ordered by relevance.
“…However, this may not be the case in many scenarios, where a single node might have multiple useful data points, as in image classification and in our case. In this respect, the study in [44] shows that training on multiple data points provides a clear advantage over the original protocol, as models see more data in the same number of iterations and thus converge faster. Therefore, we consider the extension proposed in [44], which proposes calling SGD multiple times on different data points, i.e., having each node perform multiple training steps sequentially in every loop cycle.…”
Section: B. Gossip Learning (mentioning)
confidence: 99%
“…In this respect, the study in [44] shows that training on multiple data points provides a clear advantage over the original protocol, as models see more data in the same number of iterations and thus converge faster. Therefore, we consider the extension proposed in [44], which proposes calling SGD multiple times on different data points, i.e., having each node perform multiple training steps sequentially in every loop cycle. In particular, we perform the local training on the whole dataset generated by one BS in one loop cycle, so the maximum number of steps of the algorithm equals the number of BSs.…”
Section: B. Gossip Learning (mentioning)
confidence: 99%
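The two statements above describe the extension of [44] concretely enough to sketch: on every gossip cycle a node merges the received model with its own and then runs several SGD steps on different local data points instead of a single step. Below is a minimal, illustrative Python sketch of that loop, assuming a toy linear model and plain model averaging as the merge rule; all names (GossipNode, sgd_step, merge, steps_per_cycle) are hypothetical and do not come from [44] or the paper under review.

```python
import random

def sgd_step(w, sample, lr=0.01):
    # One SGD step on a single (x, y) pair for a toy linear model y ~ w.x.
    x, y = sample
    pred = sum(wi * xi for wi, xi in zip(w, x))
    grad = [(pred - y) * xi for xi in x]
    return [wi - lr * gi for wi, gi in zip(w, grad)]

def merge(w_local, w_remote):
    # Merge rule applied when a gossiped model arrives (plain averaging,
    # as in the original gossip learning protocol).
    return [(a + b) / 2.0 for a, b in zip(w_local, w_remote)]

class GossipNode:
    def __init__(self, dim, local_data, steps_per_cycle=5):
        self.model = [0.0] * dim        # local model parameters
        self.local_data = local_data    # private (x, y) pairs held by this node
        self.steps_per_cycle = steps_per_cycle

    def on_receive(self, remote_model):
        # Merge first, then train, as in the original protocol. The
        # extension of [44] replaces the single SGD step with several
        # steps on different local data points per loop cycle, so the
        # model sees more data in the same number of gossip rounds.
        self.model = merge(self.model, remote_model)
        k = min(self.steps_per_cycle, len(self.local_data))
        for sample in random.sample(self.local_data, k):
            self.model = sgd_step(self.model, sample)
        return self.model  # a real node would send this to a random peer
```

In the whole-dataset variant quoted above, steps_per_cycle would simply equal the size of the node's local dataset, so each loop cycle trains on all data generated by one BS.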
“…Compared to our approach, specialization for gossip learning can depend on the availability of related peers in the network, which could slow down convergence [18]. Moreover, many gossip learning algorithms do not consider robustness against poisoning attacks.…”
Section: Decentralized Gossip Learning (mentioning)
confidence: 99%
“…Unfortunately, while gossip learning has been shown to be applicable to many different ML workloads [22][23][24], no study that we are aware of has tested it in actual physical environments or evaluated its practical use for large-scale deep-learning training. However, recent studies suggest that gossip learning compares favorably to federated learning [25] and that it can be extended to work in constrained and highly heterogeneous environments [26].…”
Section: Decentralized Machine Learning (mentioning)
confidence: 99%