2021
DOI: 10.48550/arxiv.2102.01593
Preprint

FEDZIP: A Compression Framework for Communication-Efficient Federated Learning

Cited by 11 publications (13 citation statements)
References 0 publications

“…The optimization of the hyper-parameters using Genetic CFL provides higher throughput in comparatively fewer rounds. Our architecture, Genetic CFL, outperforms both algorithms [25] and [26] in accuracy and rounds. This supports the fact that the Genetic CFL architecture performs better while taking fewer rounds.…”
Section: Performance Analysis of Genetic CFL (mentioning)
confidence: 89%
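A minimal sketch of the kind of genetic hyper-parameter search the statement above refers to. The tuned hyper-parameters (learning rate, batch size), the population size, the mutation rule, and the evaluate() callback are illustrative assumptions, not details taken from the Genetic CFL paper.

import random

# Hypothetical individual: one hyper-parameter setting for a cluster of clients.
def random_individual():
    return {"lr": 10 ** random.uniform(-4, -1),
            "batch_size": random.choice([16, 32, 64, 128])}

def mutate(ind):
    child = dict(ind)
    child["lr"] *= 10 ** random.uniform(-0.3, 0.3)   # perturb the learning rate
    if random.random() < 0.3:
        child["batch_size"] = random.choice([16, 32, 64, 128])
    return child

def genetic_search(evaluate, population=8, generations=5):
    """evaluate(ind) -> validation accuracy after a few FL rounds (assumed callback)."""
    pop = [random_individual() for _ in range(population)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate, reverse=True)
        parents = scored[: population // 2]           # keep the fittest half
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(population - len(parents))]
    return max(pop, key=evaluate)

The intuition matching the quoted claim is that the search spends a few rounds finding per-cluster hyper-parameters, so the remaining rounds converge faster than with a fixed global setting.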
“…Then, they integrate additively homomorphic encryption with differential privacy to prevent data from being leaked. Malekijoo et al. [75] develop a novel framework that significantly decreases the size of updates while transferring deep-learning model weights between clients and their servers. A novel algorithm, FetchSGD, which compresses model updates using a Count Sketch and exploits the mergeability of sketches to combine updates from many workers, is proposed in [76].…”
Section: Method's Category, Sub-categories, Studies, Pros and Cons (mentioning)
confidence: 99%
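A minimal sketch of the Count Sketch idea behind FetchSGD as described above: each worker sketches its gradient into a small table, and tables built with the same hashes can be merged by simple addition. The table dimensions and hash construction here are illustrative assumptions, not FetchSGD's actual parameters.

import numpy as np

class CountSketch:
    """Toy Count Sketch of a flat gradient vector (rows x cols table)."""
    def __init__(self, dim, rows=5, cols=256, seed=0):
        rng = np.random.default_rng(seed)             # shared seed so sketches are mergeable
        self.buckets = rng.integers(0, cols, size=(rows, dim))
        self.signs = rng.choice([-1.0, 1.0], size=(rows, dim))
        self.table = np.zeros((rows, cols))

    def add(self, grad):
        for r in range(self.table.shape[0]):
            np.add.at(self.table[r], self.buckets[r], self.signs[r] * grad)

    def merge(self, other):
        self.table += other.table                     # mergeability: sum of sketches = sketch of the sum

    def query(self):
        rows = np.arange(self.table.shape[0])[:, None]
        est = self.signs * self.table[rows, self.buckets]
        return np.median(est, axis=0)                 # median across rows suppresses collision noise

In this picture each client uploads only its small table instead of the full gradient; the server merges the tables by addition and queries approximate coordinate values to recover the largest update entries.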
“…In addition to ordinary data compression algorithms that encode the final updates with fewer bits, such as the Huffman encoding used by [7,43], data size reduction is achieved by several other methods, such as update quantization, sparsification, and sketching. The goal of all these techniques is to reduce the number of bits sent per round, NTbit, over the wireless interface, which in turn significantly decreases the energy spent exchanging model updates (equation 7).…”
Section: Updates Compression (mentioning)
confidence: 99%
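A rough illustration of how these stages can compose (sparsify, quantize, then entropy-code the result). The top-k ratio, the 8-bit uniform quantizer, and the use of Python's zlib as a stand-in for the Huffman coding mentioned above are assumptions for illustration only.

import numpy as np
import zlib

def compress_update(update, k_ratio=0.01, levels=256):
    """Top-k sparsify, quantize survivors to `levels` bins, then entropy-code the bytes."""
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    idx = np.argpartition(np.abs(flat), -k)[-k:]      # keep the k largest-magnitude entries
    vals = flat[idx]
    lo, hi = vals.min(), vals.max()
    q = np.round((vals - lo) / (hi - lo + 1e-12) * (levels - 1)).astype(np.uint8)
    payload = zlib.compress(idx.astype(np.uint32).tobytes() + q.tobytes())
    return payload, (lo, hi, k)                       # metadata needed to dequantize server-side

update = np.random.randn(100_000).astype(np.float32)
payload, meta = compress_update(update)
print(f"raw: {update.nbytes} B, compressed: {len(payload)} B")

The printed sizes make the point of the quoted passage concrete: fewer bits per round translate directly into less radio time and hence less energy per communication round.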
“…It implements layer-wise weight quantization with an adjustable threshold during training, which has the additional benefit of reducing the training tasks' energy budget. Similarly, [27,43,47,54] used quantization for data size reduction, in most cases combined with other techniques. Furthermore, [27,44] proposed an adaptive scheme for update quantization to achieve communication-efficient training.…”
Section: Updates Compression (mentioning)
confidence: 99%
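A minimal sketch of layer-wise quantization with an adjustable sparsification threshold, in the spirit of the schemes cited above. The per-layer threshold rule (a multiple of the layer's standard deviation) and the 8-bit symmetric quantizer are assumptions, not the cited papers' exact designs.

import numpy as np

def quantize_layerwise(layers, bits=8, threshold_scale=0.5):
    """layers: dict name -> weight array. Zero out small weights per layer,
    then quantize the survivors to a symmetric `bits`-bit grid."""
    out = {}
    for name, w in layers.items():
        thr = threshold_scale * np.std(w)             # adjustable, per-layer threshold
        kept = np.where(np.abs(w) >= thr, w, 0.0)
        scale = np.abs(kept).max() / (2 ** (bits - 1) - 1) or 1.0
        q = np.round(kept / scale).astype(np.int8)
        out[name] = (q, scale)                        # client sends q + scale instead of float32 weights
    return out

Because each layer gets its own threshold and scale, layers with very different weight magnitudes are quantized without one layer's range dominating the others, which is the usual motivation for doing this layer-wise.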