IEEE INFOCOM 2022 - IEEE Conference on Computer Communications 2022
DOI: 10.1109/infocom48880.2022.9796841

Protect Privacy from Gradient Leakage Attack in Federated Learning

Cited by 24 publications (2 citation statements)
References 16 publications
“…It effectively mitigates data loss risks stemming from single-point failures and ensures data integrity, shielding against tampering with malicious intent [5][6][7][8][9][10]. Nonetheless, this approach remains vulnerable to gradient leakage attacks (GLA) due to the direct sharing of gradients [11,12]. In response, many privacy-preserving federated learning (PPFL) schemes have emerged to fortify gradient privacy [13][14][15].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
“…Since it has the potential to protect the private information, FL has been widely applied to a variety of application domains, such as healthcare [10,28], insurance industry [21], and Internet of Things (IoTs) [12,31,34]. Recent works have pointed out that FL is vulnerable to gradient leakage attacks (GLA) that try to reconstruct the training data from the publicly shared gradients with a central server [30,35]. To deal with this problem, one of commonly used defense strategies is differential privacy (DP) [11], which injects noise to the model parameters (weights or gradients) before they are uploaded to a central server [33].…”
Section: Introduction
Citation type: mentioning
Confidence: 99%
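
The defense mentioned in the excerpt above, injecting noise into model updates before they are uploaded, can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the scheme of the cited paper or any specific PPFL system: it clips a client's gradient to a fixed L2 bound and adds Gaussian noise calibrated to that bound (a standard DP-style recipe); the function name, clipping bound, and noise multiplier are all hypothetical choices.

# Minimal sketch (illustrative only): DP-style gradient perturbation
# applied on the client before the update is sent to the server.
import numpy as np

def privatize_gradient(grad, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the gradient to `clip_norm` and add Gaussian noise scaled to
    that bound, so the uploaded update reveals less about the raw data."""
    rng = rng or np.random.default_rng()
    grad = np.asarray(grad, dtype=np.float64)
    # Scale the gradient down if its L2 norm exceeds the clipping bound.
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    # Add isotropic Gaussian noise proportional to the clipping bound.
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=clipped.shape)
    return clipped + noise

# Example: a client perturbs its local gradient before upload.
noisy_update = privatize_gradient(np.array([0.8, -2.4, 1.1]), clip_norm=1.0)

Larger noise multipliers give stronger protection against gradient leakage attacks but degrade the accuracy of the aggregated model, which is the utility-privacy trade-off that motivates the more elaborate PPFL schemes cited above.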