2022
DOI: 10.48550/arxiv.2202.00580
Preprint

Fishing for User Data in Large-Batch Federated Learning via Gradient Magnification

Abstract: Federated learning (FL) has rapidly risen in popularity due to its promise of privacy and efficiency. Previous works have exposed privacy vulnerabilities in the FL pipeline by recovering user data from gradient updates. However, existing attacks fail to address realistic settings because they either 1) require 'toy' settings with very small batch sizes, or 2) require unrealistic and conspicuous architecture modifications. We introduce a new strategy that dramatically elevates existing attacks to operate on batches of arbitrarily large size, and without architectural modifications.
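The attacks the abstract refers to typically build on optimization-based gradient inversion: the server treats a client's reported gradient as a target and optimizes a dummy input until its gradient matches. Below is a minimal sketch of that generic baseline in PyTorch, assuming a toy one-layer model and a known label (both illustrative simplifications; this is the inversion baseline such attacks build on, not this paper's fishing strategy):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

# Gradient the server observed from a (hypothetical) client holding one image.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.tensor([3])
true_grads = torch.autograd.grad(
    F.cross_entropy(model(x_true), y_true), tuple(model.parameters())
)

# Inversion: optimize a dummy image so its gradient matches the observed one.
x_dummy = torch.rand(1, 1, 28, 28, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for _ in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(
        F.cross_entropy(model(x_dummy), y_true),  # label assumed known here
        tuple(model.parameters()),
        create_graph=True,  # needed to differentiate through the gradient
    )
    gap = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    gap.backward()
    opt.step()

print(f"mean squared reconstruction error: {(x_dummy - x_true).pow(2).mean().item():.3e}")
```

On a single example this kind of matching converges quickly; the difficulty the abstract targets is that a realistic client reports one gradient averaged over a large batch.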

Cited by 4 publications (4 citation statements)
References 19 publications
“…As an example, [75] leverages a GAN (Generative Adversarial Network) [76] to reconstruct other users' personal data, while [77] uses a GAN to ensure the quality of the data that the attacker aims to reconstruct and [78] tries to infer characteristics of the clients with ad-hoc classifiers. Additionally, a malicious server might put in place label and feature fishing attacks by intentionally modifying some parameters of the global model [79]. For an in-depth discussion on threats and attacks in FL, we refer the readers to [32], [33].…”
Section: Privacy in Federated Learning
confidence: 99%
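The parameter-modification ("fishing") idea referenced in this statement can be made concrete. The following is a hedged sketch, assuming the network ends in a plain linear classification layer; the helper name `fish_for_class` and the constant `alpha` are illustrative choices, not the paper's exact construction:

```python
import torch

def fish_for_class(layer: torch.nn.Linear, target_class: int, alpha: float = 50.0) -> None:
    """Malicious server-side edit of the final classification layer (sketch)."""
    with torch.no_grad():
        layer.weight.zero_()            # logits no longer depend on the input
        layer.bias.fill_(alpha)         # every non-target logit is large, so
        layer.bias[target_class] = 0.0  # softmax puts ~0 mass on the target class
    # For cross-entropy, d(loss)/d(target logit) = p_target - 1{label == target}.
    # Non-target examples therefore contribute ~0 to the target row's weight
    # gradient, while a target-class example contributes ~ -1 times its full
    # input: the batch-averaged gradient is dominated ("magnified") by that
    # single example.
```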
“…As our research progresses, we find that increasing the batch size is no longer effective in preventing privacy breaches. The gradient attack algorithm proposed in the literature [78] can obtain individual gradients in arbitrarily large aggregated batches and, crucially, it applies to arbitrary models. The gradient attack algorithm is now able to reconstruct image information at a resolution of 224 × 224 and to steal information with increasing accuracy.…”
Section: Figure 3: Research History of Gradient Leakage
confidence: 99%
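The batch-size independence described in this statement can be checked numerically. Below is a minimal sketch, assuming the modified ("fished") linear layer acts directly on flattened inputs, so the recovered vector is the image itself; in a deep network the same division would recover penultimate-layer features instead. The batch size and the bias constant 50.0 are arbitrary illustrative values:

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
num_classes, batch, target = 10, 4096, 3  # batch can be arbitrarily large

layer = torch.nn.Linear(28 * 28, num_classes)
with torch.no_grad():           # "fished" layer, as sketched above
    layer.weight.zero_()
    layer.bias.fill_(50.0)
    layer.bias[target] = 0.0

x = torch.rand(batch, 28 * 28)
y = torch.randint(0, num_classes - 1, (batch,))
y[y >= target] += 1             # draw all labels from non-target classes ...
y[0] = target                   # ... so exactly one target-class example exists

loss = F.cross_entropy(layer(x), y)  # loss averaged over the whole batch
gW, gb = torch.autograd.grad(loss, (layer.weight, layer.bias))

# Standard analytic trick for fully connected layers: each row of dL/dW is the
# input scaled by the matching entry of dL/db, so dividing recovers the input.
x_rec = gW[target] / gb[target]
print(f"max pixel error: {(x_rec - x[0]).abs().max().item():.2e}")
```

The 1/batch averaging factor appears in both the weight-row gradient and the bias gradient and cancels in the division, which is why growing the batch does not, by itself, prevent the leak.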
“…Or in federated learning, a malicious server can select model architectures that enable reconstructing training samples [9,20]. Alternatively, participants in decentralized learning protocols can boost privacy attacks by sending dynamic malicious updates [44,51,69]. Our work differs from these in only requiring the weak assumption that the attacker can add a small amount of arbitrary data to the training set once, without contributing to any other part of training thereafter.…”
Section: Attacks on Training Integrity
confidence: 99%