2020
DOI: 10.48550/arxiv.2003.14053
Preprint

Inverting Gradients -- How easy is it to break privacy in federated learning?

Cited by 68 publications (157 citation statements)
References 0 publications
“…However, the design of FL still requires protection of the exchanged parameters, as well as investigation of the trade-offs between the privacy and security level and the system performance. The study [152] suggested that FL exposes intermediate results such as stochastic gradient descent updates, and that transmitting these gradients may leak private information when they are exposed together with the data structure. It is still possible for adversaries to approximately reconstruct the raw data, especially when the model architecture and parameters are not completely protected.…”
Section: Privacy Protection For SUs In SS Network (mentioning)
confidence: 99%
“…Some attacks even allow an attacker to infer when a property appears and disappears in the dataset during the training process [34]. Sample inference attacks [106], [107] try to extract both the training data and their labels when attackers obtain model updates during the training phase. Recent work first generates a dummy sample and then gradually reduces the distance between the dummy sample and the ground truth through an optimization algorithm [26], [108].…”
Section: B. Privacy Threats (mentioning)
confidence: 99%
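The dummy-sample optimization described in the statement above is the core recipe of these sample inference attacks. The sketch below shows a minimal DLG-style gradient-matching loop in PyTorch; the toy linear model, input shape, soft-label handling, and iteration count are illustrative assumptions, not details taken from the cited papers [26], [108].

```python
# Minimal DLG-style gradient-matching sketch (assumptions: toy linear model,
# a single 3x32x32 example, L-BFGS with an L2 gradient distance; all
# hyperparameters are illustrative, not taken from the cited papers).
import torch
import torch.nn.functional as F
from torch import nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # stand-in model

# Gradient that the attacker observes for one private (image, label) pair.
x_true = torch.rand(1, 3, 32, 32)
y_true = torch.tensor([3])
loss_true = F.cross_entropy(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss_true, model.parameters())]

# Dummy input and soft label, optimized so their gradient matches the observed one.
x_dummy = torch.randn(1, 3, 32, 32, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    log_probs = torch.log_softmax(model(x_dummy), dim=-1)
    loss_dummy = -(torch.softmax(y_dummy, dim=-1) * log_probs).sum()
    dummy_grads = torch.autograd.grad(loss_dummy, model.parameters(), create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    optimizer.step(closure)
# x_dummy now approximates x_true; reconstruction quality depends on the model and setup.
```

The same loop structure is reused by later attacks; what mainly changes across papers is the distance between gradients, the image prior, and how the label is obtained.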
“…Zhu [26]: (de)centralized, CV; Zhao [108]: centralized, CV; Geiping [106]: centralized, CV; Yin [141]: centralized, CV; Dang [142]: centralized, CV; Jin [143]: federated, CV; Fu [144]: federated, CV; He [145]: federated, CV. … et al. observe that the aggregated gradient of an embedding layer is sparse with respect to the training text.…”
Section: Sample (mentioning)
confidence: 99%
“…To reconstruct the samples given to a model, Zhu et al [1] propose Deep Leakage from Gradients (DLG), or Gradients Matching (GM), a method that iteratively updates a dummy training sample to minimize the distance of its gradient to a target gradient sent from a client. This has been used to successfully reconstruct large images in a mini-batch [5,6,7]. The application of GM has also been demonstrated in other domains, such as language modeling [1] and automatic speech recognition [8].…”
Section: Introduction (mentioning)
confidence: 99%
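To illustrate how the attack studied in this paper (Geiping et al. [106]) differs from the plain L2 gradient matching of DLG, the hedged snippet below replaces the squared gradient distance with a cosine-dissimilarity objective plus a total-variation image prior; the regularization weight and numerical details are assumptions for the sketch, not the paper's exact settings.

```python
# Cosine-dissimilarity gradient matching with a total-variation prior
# (sketch of the objective family used in "Inverting Gradients"; tv_weight
# and the epsilon below are illustrative assumptions).
import torch

def total_variation(img):
    # Encourages piecewise-smooth reconstructions (a common image prior).
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def reconstruction_loss(dummy_grads, true_grads, x_dummy, tv_weight=1e-2):
    # 1 - cos(angle) between the flattened dummy and observed gradients.
    dot, dummy_norm, true_norm = 0.0, 0.0, 0.0
    for dg, tg in zip(dummy_grads, true_grads):
        dot += (dg * tg).sum()
        dummy_norm += dg.pow(2).sum()
        true_norm += tg.pow(2).sum()
    cosine = dot / (dummy_norm.sqrt() * true_norm.sqrt() + 1e-8)
    return 1.0 - cosine + tv_weight * total_variation(x_dummy)
```

This loss can be dropped into the closure of the earlier sketch in place of the squared gradient distance; in such attacks the label is often recovered separately (e.g., analytically for single examples), so only the image needs to be optimized.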