2022
DOI: 10.48550/arXiv.2202.06924
Preprint

Do Gradient Inversion Attacks Make Federated Learning Unsafe?

Abstract: Federated learning (FL) allows the collaborative training of AI models without the need to share raw data. This capability makes it especially interesting for healthcare applications, where patient and data privacy are of utmost concern. However, recent works on the inversion of deep neural networks from model gradients have raised concerns about whether FL can prevent the leakage of training data. In this work, we show that the attacks presented in the literature are impractical in real FL use-cases and p…
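For context, the attack family the abstract refers to can be illustrated with a minimal gradient-matching sketch in the spirit of "deep leakage from gradients": an attacker who holds the model weights and one client's gradient update optimizes a dummy input until its gradients match the observed ones. The toy model, the known-label assumption, and the tensor shapes below are illustrative assumptions, not the paper's setup.

```python
import torch
import torch.nn as nn

# Toy setting: the attacker knows the model and receives one client's gradients.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
criterion = nn.CrossEntropyLoss()

x_true = torch.rand(1, 1, 28, 28)      # client's private image (unknown to attacker)
y_true = torch.tensor([3])             # assumption: label is known or guessed
true_grads = torch.autograd.grad(criterion(model(x_true), y_true),
                                 model.parameters())

# The attacker optimizes a dummy input so its gradients match the shared ones.
x_dummy = torch.rand_like(x_true, requires_grad=True)
opt = torch.optim.LBFGS([x_dummy])

def closure():
    opt.zero_grad()
    loss = criterion(model(x_dummy), y_true)
    dummy_grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    diff.backward()
    return diff

for _ in range(20):
    opt.step(closure)
# x_dummy now approximates x_true; reconstruction quality degrades with larger
# batches and realistic FL updates, which is what the paper examines.
```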


Cited by 3 publications (5 citation statements) · References 29 publications
“…An overview of model inversion attack implementations and defense approaches is already described in prior work [61]. Advancements in these attacks continue to be put forward, and works that demonstrate such attacks in the FL setting for medical models, for example by successfully approximating hidden batch-normalization statistics [62], acknowledge the importance of understanding such threats in these settings. Membership inference: membership inference is the process by which an adversary, in possession of a particular data sample, attempts to infer whether it was included in the training set of the model.…”
Section: Threats to Privacy During FL
(mentioning, confidence: 99%)
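As a concrete illustration of the membership-inference definition quoted above, a minimal loss-threshold variant can be sketched as follows; the model, threshold value, and calibration procedure are placeholders assumed for illustration, not taken from the cited works.

```python
import torch
import torch.nn as nn

def membership_score(model: nn.Module, x: torch.Tensor, y: torch.Tensor) -> float:
    """Lower loss on a sample suggests it was more likely seen during training."""
    model.eval()
    with torch.no_grad():
        loss = nn.functional.cross_entropy(model(x), y)
    return -loss.item()  # higher score = more likely a training-set member

def is_member(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
              threshold: float = -0.5) -> bool:
    # In practice the threshold is calibrated, e.g. via shadow models or held-out data.
    return membership_score(model, x, y) > threshold
```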
“…To inject class-specific impressions into the synthetic images without violating privacy regulations (e.g., re-identification), we assume the averaged image of each class, x_k^t (k ∈ Y^t), is available to initialize the optimization of x in Eq. (1). It is worth noting that Class Impression aims to generate images that follow the distribution of each past class, rather than reconstructing the training data points as model inversion attacks do [24,8,6]; thus Class Impression aims to meet certain privacy requirements imposed on storing medical data.…”
Section: Class Impression
(mentioning, confidence: 99%)
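To make the quoted initialization step concrete, here is a hedged sketch: a synthetic image is initialized from the per-class averaged image and optimized so that a frozen model assigns it to the target class. The cross-entropy term merely stands in for the cited Eq. (1), which is not reproduced here, and all names and defaults are illustrative assumptions.

```python
import torch
import torch.nn as nn

def synthesize_class_impression(model: nn.Module, class_mean: torch.Tensor,
                                target_class: int, steps: int = 200,
                                lr: float = 0.05) -> torch.Tensor:
    """Start from the per-class averaged image and optimize it toward the target
    class under a frozen model (stand-in objective for the cited Eq. (1))."""
    x = class_mean.clone().requires_grad_(True)   # class_mean: shape (1, C, H, W)
    opt = torch.optim.Adam([x], lr=lr)
    label = torch.tensor([target_class])
    model.eval()
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.cross_entropy(model(x), label)
        loss.backward()
        opt.step()
    return x.detach()
```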
“…Privacy in FL and PFL: Privacy is a prominent and significant topic in the age of medical big data [37]. Nevertheless, previous studies have shown that FL is still vulnerable to attacks such as data poisoning attacks [38], membership inference attacks [39]–[41], source inference attacks (SIA) [42], attribute reconstruction attacks [43], and inversion attacks [44]–[47], thus compromising data privacy.…”
(mentioning, confidence: 99%)