2022
DOI: 10.1002/int.22997
An effective and practical gradient inversion attack

Abstract: While gradient aggregation plays a vital role in federated or collaborative learning, recent studies have revealed that it may suffer from attacks such as gradient inversion, in which private training data can be recovered from the shared gradients. However, the performance of existing attack methods is limited because they usually require prior knowledge of Batch Normalization statistics and can only reconstruct a single image or a small batch. To make the attacks less restrictive an…

Cited by 3 publications (2 citation statements)
References 35 publications (88 reference statements)
“…For FL, Zhu et al. [174] demonstrated that training data can be obtained from shared gradients by developing Deep Leakage from Gradients. Luo et al. [85] presented another gradient inversion approach that reconstructs training data. Using transfer learning, Ye et al. [159] showed that inversion attacks fall apart when the student model is targeted.…”
Section: Model Inversion Attack
confidence: 99%
“…The virtual data are learned using an optimization algorithm in such a way that the gradient obtained by backpropagation on the common model is similar to the real gradient, and the training data and labels are obtained after several rounds of iterative optimization. At the moment, this is one of the most active topics in the study of variants of DLG-based methods [9, 52-54].…”
Section: Gradient Update-based Data Leakage
confidence: 99%
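The gradient-matching loop described in the citation above can be sketched on a toy model. The sketch below uses a one-layer linear model with squared loss; the model, variable names, and the analytic derivatives of the matching objective are our own illustrative simplifications, not the method of any cited paper.

```python
import numpy as np

# DLG-style gradient matching on a toy linear model with squared loss.
# A "client" computes a gradient on private data; the "attacker" then
# optimizes virtual (dummy) data so that its backpropagated gradient
# matches the shared one.

rng = np.random.default_rng(0)
w = rng.normal(size=3)        # shared ("common") model weights
x_real = rng.normal(size=3)   # private training sample held by a client
y_real = 1.0                  # private label

def model_grad(x, y):
    """Gradient of the loss (w @ x - y)**2 with respect to w."""
    return 2.0 * (w @ x - y) * x

g_real = model_grad(x_real, y_real)  # gradient shared during training

# Attacker initializes dummy data and iteratively minimizes the
# gradient mismatch D = ||model_grad(x_dummy, y_dummy) - g_real||^2.
x_dummy = rng.normal(size=3)
y_dummy = 0.0
lr = 0.01
for _ in range(50000):
    r = w @ x_dummy - y_dummy        # residual of the dummy sample
    d = 2.0 * r * x_dummy - g_real   # gradient mismatch vector
    # analytic derivatives of D for this linear model
    grad_x = 4.0 * ((d @ x_dummy) * w + r * d)
    grad_y = -4.0 * (d @ x_dummy)
    x_dummy -= lr * grad_x
    y_dummy -= lr * grad_y

# After optimization, the dummy gradient closely matches the real one.
mismatch = np.linalg.norm(model_grad(x_dummy, y_dummy) - g_real)
```

Note that in this toy setting the shared gradient only pins the data down up to a trade-off between the sample and its residual, so the loop recovers a gradient-consistent sample rather than guaranteeing `x_real` exactly; this underdetermination is one reason practical attacks on deep networks rely on extra priors such as Batch Normalization statistics, as the abstract above notes.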