Data Leakage in Federated Averaging
Preprint, 2022
DOI: 10.48550/arxiv.2206.12395

Cited by 3 publications (2 citation statements). References 0 publications.
“…While they only quantize the gradient to 16-bit and 8-bit integers, our NNC module uses an arbitrary number of quantization points, fine-tuned for each layer. These works are good indicators that gradient obfuscation techniques can be successfully employed to counteract attacks such as those proposed by Geiping et al. [18] and Dimitrov et al. [12].…”
Section: Data Security and Privacy Protection (mentioning; confidence: 91%)
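
The defence described in this excerpt amounts to snapping each layer's gradient onto a small set of discrete values before it leaves the client. The Python sketch below illustrates the general idea only; the function name quantize_gradient and the uniform per-layer grid are illustrative assumptions and not the cited NNC module, which fine-tunes its quantization points for each layer.

import numpy as np

def quantize_gradient(grad, num_levels):
    """Uniformly quantize a gradient tensor onto `num_levels` discrete points.

    Simplified stand-in for per-layer gradient quantization; the NNC module
    mentioned in the excerpt fine-tunes its codebook for each layer.
    """
    g_min, g_max = float(grad.min()), float(grad.max())
    if g_max == g_min:
        return grad.copy()
    step = (g_max - g_min) / (num_levels - 1)
    # Snap every value to the nearest point on a uniform grid in [g_min, g_max].
    return g_min + np.round((grad - g_min) / step) * step

# Coarser grids (fewer levels) leave less fine-grained information for a
# gradient-inversion attacker to exploit, at the cost of a larger
# quantization error in the update that is actually transmitted.
rng = np.random.default_rng(0)
grad = rng.normal(size=(4, 4))
for levels in (2**8, 2**4, 2**2):  # 8-bit, 4-bit and 2-bit grids
    q = quantize_gradient(grad, levels)
    print(levels, float(np.abs(q - grad).mean()))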
“…Since FedAvg does not share the gradient but the updated local model, it is not vulnerable to this kind of attack. Still, Dimitrov et al. [12] show that it is possible to reconstruct training images in realistic FedAvg settings. Despite the method's success with a single client relying on many local training rounds, attacking aggregated parameter updates from multiple clients, even if only a few of them are used, significantly degrades the reconstruction performance.…”
Section: Data Security and Privacy Protection (mentioning; confidence: 99%)
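
The excerpt's point about FedAvg can be made concrete with a small sketch of the protocol: each client runs several local training steps and the server only observes the average of the updated models, not any individual gradient. The helper names local_update and fedavg_round below are illustrative, and the toy least-squares objective stands in for a client's real local training.

import numpy as np

def local_update(weights, data, lr=0.1, epochs=5):
    """Hypothetical client-side training: a few gradient steps on a toy
    least-squares loss, standing in for the client's real local epochs."""
    w = weights.copy()
    X, y = data
    for _ in range(epochs):
        grad = 2.0 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fedavg_round(global_weights, client_datasets):
    """One FedAvg round: every client returns its updated model, and the
    server only ever sees the average of those models, not any gradient."""
    updates = [local_update(global_weights, d) for d in client_datasets]
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)
clients = [(rng.normal(size=(8, 3)), rng.normal(size=8)) for _ in range(4)]
w = fedavg_round(np.zeros(3), clients)
print(w)  # the aggregate hides any individual client's contribution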