2019
DOI: 10.48550/arxiv.1909.09145
Preprint

Detailed comparison of communication efficiency of split learning and federated learning

Abstract: We compare the communication efficiencies of two compelling distributed machine learning approaches, split learning and federated learning. We show useful settings under which each method outperforms the other in terms of communication efficiency. We consider various practical scenarios of distributed learning and juxtapose the two methods under several real-life settings. We consider settings with small and large numbers of clients, as well as small models (1M-6M parameters) and large models (10M-200M parameters)…
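As a rough illustration of the comparison the abstract describes, the sketch below computes approximate per-client traffic under a simple cost model: a federated client exchanges the full model twice per round, while a split-learning client exchanges cut-layer activations and gradients for every sample. The cost model, the function names, and all numbers are illustrative assumptions, not the paper's exact accounting.

```python
# Back-of-the-envelope communication comparison (illustrative only; the cost
# model and the numbers below are assumptions, not the paper's exact figures).

def federated_comm_per_client(model_params, rounds, bytes_per_value=4):
    # Each round a client uploads its update and downloads the new global model.
    return 2 * model_params * rounds * bytes_per_value

def split_comm_per_client(samples_per_client, cut_layer_size, epochs, bytes_per_value=4):
    # For every sample the client sends cut-layer activations and receives the
    # matching gradients, so traffic scales with the amount of training data.
    return 2 * samples_per_client * cut_layer_size * epochs * bytes_per_value

if __name__ == "__main__":
    GB = 1e9
    # (model size, samples per client): many-data/small-model vs. few-data/large-model
    for params, samples in [(1_000_000, 50_000), (100_000_000, 500)]:
        fl = federated_comm_per_client(params, rounds=100)
        sl = split_comm_per_client(samples, cut_layer_size=4096, epochs=100)
        print(f"params={params:>11,} samples={samples:>6,}  "
              f"FL≈{fl / GB:7.2f} GB  SL≈{sl / GB:7.2f} GB")
```

Under these assumed numbers, federated learning is cheaper per client for the small-model/many-samples case, while split learning wins for the large-model/few-samples case, which mirrors the regimes the abstract distinguishes.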

Cited by 35 publications (43 citation statements); references 1 publication.
“…It simply discards the gradients it received from the second part of the client model, and computes a malicious loss function using the intermediate output it received from the first client model, propagating the malicious loss back to the first client model. The primary advantage of SplitNN compared to federated learning is its lower communication load [18]. While federated learning clients have to share their entire parameter updates with the server, SplitNN clients only share the output of a single layer.…”
Section: B. Split Learning
confidence: 99%
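To make the payload difference in the quote above concrete, here is a minimal, hypothetical PyTorch-style sketch of a split-learning client step: it transmits only the cut-layer activations and receives their gradients, whereas a federated client would transmit every parameter tensor. The model split, the `send`/`recv` helpers, and the optimizer choice are placeholders, not the cited work's actual interface.

```python
import torch
import torch.nn as nn

# Hypothetical client-side half of the model (the "first part" in the quote above).
client_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
optimizer = torch.optim.SGD(client_model.parameters(), lr=0.01)

def split_learning_step(x, send, recv):
    """One split-learning client step; `send`/`recv` stand in for the real transport."""
    activations = client_model(x)
    send(activations.detach())          # payload: one layer's output, nothing else
    grad = recv()                       # server returns d(loss)/d(activations)
    optimizer.zero_grad()
    activations.backward(grad)          # finish backpropagation locally
    optimizer.step()

def federated_update_payload():
    # A federated client would instead ship every parameter tensor (or its update).
    return {name: p.detach().clone() for name, p in client_model.named_parameters()}
```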
“…The communication overhead of split learning linearly scales with the amount of training data at the client [10]. While split learning has less communication overhead than federated learning [5] when the data size is small, it is a bottleneck if the data size is large.…”
Section: Motivation
confidence: 99%
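A simple (assumed) cost model makes the linear data dependence and the resulting crossover explicit; the symbols and the approximation below are illustrative and not the exact accounting of the cited works [10], [5]:

```latex
% q   : size of the cut-layer (smashed) representation per sample
% p_k : number of training samples held by client k,  E : epochs
% N   : number of model parameters,  R : federated rounds
\[
  C_{\text{split}} \approx 2\,q\,p_k\,E, \qquad
  C_{\text{fed}}   \approx 2\,N\,R .
\]
% With E \approx R, split learning has the lower per-client overhead roughly when
\[
  q\,p_k \;\lesssim\; N ,
\]
% i.e. for large models and/or little data per client; as p_k grows, the linear
% term makes split learning the communication bottleneck.
```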
“…However, the communication overhead linearly increases with the number of training samples. In the extreme case, where the number of edge devices is small and each edge device has to process a large amount of data, communication overhead can be way higher than federated learning [10], [11].…”
Section: Introduction
confidence: 99%
“…However, the federated learning (data sharing) constraint means that the GNN cannot be trained in a centralized manner, since each node can only access the data stored on itself. To address this, CNFGNN employs Split Learning [25] to train the spatial and temporal modules. Further, to alleviate the associated high communication cost incurred by Split Learning, we propose an alternating optimization-based training procedure of these modules, which incurs only half the communication overhead as compared to a comparable Split Learning architecture.…”
Section: Introduction
confidence: 99%