2021 · Preprint
DOI: 10.48550/arxiv.2111.04706
Bayesian Framework for Gradient Leakage

Abstract: Federated learning is an established method for training machine learning models without sharing training data. However, recent work has shown that it cannot guarantee data privacy as shared gradients can still leak sensitive information. To formalize the problem of gradient leakage, we propose a theoretical framework that enables, for the first time, analysis of the Bayes optimal adversary phrased as an optimization problem. We demonstrate that existing leakage attacks can be seen as approximations of this op…
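The abstract phrases the Bayes optimal adversary as an optimization problem but is cut off before the formal statement. The sketch below illustrates what such a formulation typically looks like; the notation (input x, shared gradient g, parameters θ, prior R) is our own and is only meant to convey the idea, not to reproduce the paper's exact definitions.

    % Our notation, for illustration only: an adversary observing a shared
    % gradient g at known parameters \theta looks for the most probable input.
    \begin{align*}
      x^{*} &= \arg\max_{x}\; p\bigl(x \mid \nabla_{\theta}\mathcal{L}(x,\theta) = g\bigr) \\
            &= \arg\max_{x}\; p\bigl(g \mid x\bigr)\, p(x)
               \qquad \text{(Bayes' rule; } p(g) \text{ does not depend on } x\text{)} \\
            &\approx \arg\min_{x}\; \bigl\lVert \nabla_{\theta}\mathcal{L}(x,\theta) - g \bigr\rVert_{2}^{2}
               + \lambda\, R(x)
               \qquad \text{(Gaussian noise model and prior } \propto e^{-\lambda R(x)}\text{)}
    \end{align*}

Under assumptions of this kind, the optimal attack reduces to gradient matching plus a regularizer, which is one way to read the abstract's claim that existing leakage attacks approximate the Bayes optimal adversary.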

Cited by 2 publications (7 citation statements) · References 7 publications (16 reference statements)
“…Moreover, recent work achieved near-perfect image reconstruction from gradients (Geiping et al., 2020; Yin et al., 2021; Jeon et al., 2021). Interestingly, prior work showed that an auxiliary model (Jeon et al., 2021) or prior information (Balunović et al., 2021) can significantly improve reconstruction quality. Finally, Huang et al. (2021) recently noticed that gradient leakage attacks often make strong assumptions, namely that batch normalization statistics and ground truth labels are known.…”
Section: Related Work (mentioning, confidence: 99%)
“…Finally, there have been several works attempting to protect against gradient leakage. Works based on heuristics (Sun et al., 2021; Scheliga et al., 2021) lack privacy guarantees and have been shown ineffective against stronger attacks (Balunović et al., 2021), while those based on differential privacy do train models with formal privacy guarantees (Abadi et al., 2016), but this typically hurts the accuracy of the trained models as it requires adding noise to the gradients. We remark that in our work we also evaluate our attack on defended networks and demonstrate its effectiveness.…”
Section: Related Work (mentioning, confidence: 99%)
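The differential-privacy based defenses mentioned in the statement above work by perturbing gradients before they are shared, which is also what costs accuracy. The following is a minimal PyTorch sketch of that clip-and-noise step, written by us for illustration; the clipping norm and noise scale are placeholder values, not settings from any of the cited works.

    import torch

    def perturb_gradients(gradients, clip_norm=1.0, noise_std=0.1):
        """Clip the joint gradient to an L2 norm bound, then add Gaussian noise.

        Illustrative sketch of the clip-and-noise step used by DP-SGD-style
        defenses; clip_norm and noise_std are placeholder values.
        """
        # Total L2 norm across all parameter gradients.
        total_norm = torch.sqrt(sum(g.pow(2).sum() for g in gradients))
        scale = torch.clamp(clip_norm / (total_norm + 1e-12), max=1.0)
        noisy = []
        for g in gradients:
            g = g * scale                              # bound the gradient's influence
            g = g + noise_std * torch.randn_like(g)    # mask it with Gaussian noise
            noisy.append(g)
        return noisy

The larger the noise relative to the clipping bound, the stronger the formal guarantee but the noisier the update, which matches the accuracy trade-off the quoted passage points out.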
“…Recently proposed gradient leakage attack methods [34, 6, 28, 2] have shown that a malicious attacker is capable of reconstructing clients' local data by exploiting the shared gradients or model updates. For example, the work [34] searches for input data samples whose gradients have minimal Euclidean distance to the true gradients shared by the clients.…”
Section: Related Work (mentioning, confidence: 99%)
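The attack strategy described in the statement above (searching for inputs whose gradients are closest in Euclidean distance to the shared ones) is a gradient-matching optimization. Below is a minimal PyTorch sketch of that idea, written by us for illustration; it assumes the ground-truth label and model parameters are known and is not the implementation from any of the cited works.

    import torch

    def gradient_matching_attack(model, loss_fn, shared_grads, label,
                                 input_shape, steps=200, lr=0.1):
        """Reconstruct an input whose gradients match the shared gradients.

        Minimizes the squared Euclidean distance between the gradients induced
        by a dummy input and the gradients shared by the client. Assumes the
        ground-truth label and the model parameters are known.
        """
        dummy = torch.randn(input_shape, requires_grad=True)
        optimizer = torch.optim.Adam([dummy], lr=lr)

        for _ in range(steps):
            optimizer.zero_grad()
            loss = loss_fn(model(dummy), label)
            grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
            # Squared L2 distance between the dummy input's gradients and the shared ones.
            match = sum(((g - s) ** 2).sum() for g, s in zip(grads, shared_grads))
            # A prior on `dummy` (e.g. total variation) could be added here to improve quality.
            match.backward()
            optimizer.step()

        return dummy.detach()

Variations in the literature typically add image priors or use different distance measures on top of this basic loop, but the core idea of matching the observed gradients is the same.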
“…Specifically, our method is motivated by [11, 26], which study the prediction mechanism of Deep Neural Networks (DNNs). In particular, for image classification, these studies find that DNNs tend to use two main kinds of features in images for prediction: (1) features that are readily comprehensible to human eyes, such as the presence of "a tail" or "ears" in images of "cat"; and (2) features that are incomprehensible to humans but nevertheless very helpful for DNN prediction. Both types of features contribute substantially to DNN models making correct predictions.…”
Section: Introduction (mentioning, confidence: 99%)