Gradient Leakage Attack Resilient Deep Learning
Preprint, 2021
DOI: 10.48550/arxiv.2112.13178

Abstract: Gradient leakage attacks are considered among the most severe privacy threats in deep learning: attackers covertly spy on gradient updates during iterative training without compromising model training quality, yet secretly reconstruct sensitive training data from the leaked gradients with a high attack success rate. Although deep learning with differential privacy is the de facto standard for publishing deep learning models with a differential privacy guarantee, we show that differentially private algorithms with fixe…
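To make the threat in the abstract concrete, here is a minimal sketch of a gradient-matching (DLG-style, "Deep Leakage from Gradients") attack: given gradients leaked from one training step, the attacker optimizes dummy inputs and labels until their gradients match the leak. This is an illustrative assumption-laden toy, not this paper's method: the model, input shape, seed, optimizer, and step counts below are all hypothetical.

import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy victim model and one private training example (hypothetical shapes).
model = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10))
x_true = torch.randn(1, 3, 32, 32)
y_true = torch.tensor([3])
criterion = nn.CrossEntropyLoss()

# Gradients the attacker is assumed to have intercepted.
loss = criterion(model(x_true), y_true)
true_grads = [g.detach() for g in torch.autograd.grad(loss, model.parameters())]

# Attacker's dummy data and soft label, optimized to reproduce the leak.
x_dummy = torch.randn_like(x_true, requires_grad=True)
y_dummy = torch.randn(1, 10, requires_grad=True)
optimizer = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    optimizer.zero_grad()
    pred = model(x_dummy)
    # Cross-entropy against the (softmaxed) dummy label.
    dummy_loss = torch.sum(torch.softmax(y_dummy, dim=-1) * -torch.log_softmax(pred, dim=-1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(), create_graph=True)
    # L2 distance between dummy gradients and the leaked gradients.
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for step in range(30):
    diff = optimizer.step(closure)
print(f"final gradient-matching loss: {diff.item():.6f}")

As the matching loss approaches zero, x_dummy approaches the private training example, which is why noise-adding defenses such as the differentially private training the abstract discusses are relevant.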

Cited by 0 publications (no citations yet).
References 26 publications (75 reference statements).