2022
DOI: 10.48550/arxiv.2206.00769
Preprint
Defense Against Gradient Leakage Attacks via Learning to Obscure Data

Abstract: Federated learning is considered an effective privacy-preserving learning mechanism because it separates the client's data from the model training process. However, federated learning still risks privacy leakage, since attackers can deliberately mount gradient leakage attacks to reconstruct client data. Recently, popular strategies such as gradient perturbation and input encryption have been proposed to defend against gradient leakage attacks. Nevertheless, these defenses can …
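The gradient-perturbation defenses the abstract mentions typically clip each gradient to a maximum norm and add random noise before it leaves the client, so the server (or an eavesdropper) never sees the exact gradient needed for reconstruction. A minimal sketch of that idea, with illustrative parameter names not taken from the paper:

```python
import numpy as np

def perturb_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip a gradient to an L2 norm bound, then add Gaussian noise.

    A common DP-SGD-style gradient-perturbation defense; the specific
    parameters (clip_norm, noise_std) are illustrative assumptions.
    """
    rng = np.random.default_rng() if rng is None else rng
    norm = np.linalg.norm(grad)
    # Scale down only if the gradient exceeds the clipping bound.
    clipped = grad * min(1.0, clip_norm / max(norm, 1e-12))
    # Additive noise prevents exact recovery of the true gradient.
    return clipped + rng.normal(0.0, noise_std, size=grad.shape)

# The shared (perturbed) gradient no longer matches the true gradient
# exactly, which is what frustrates exact reconstruction attacks.
g_true = np.array([3.0, 4.0])               # L2 norm = 5
g_shared = perturb_gradient(g_true, clip_norm=1.0, noise_std=0.05)
```

The trade-off noted in such defenses is that stronger noise hurts model accuracy, which is the tension the paper's learning-to-obscure approach targets.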

Cited by 0 publications
References 20 publications
(54 reference statements)