2022
DOI: 10.1109/access.2022.3151670

Differential Privacy for Deep and Federated Learning: A Survey

Abstract: Users' privacy is vulnerable at all stages of the deep learning process. Sensitive information of users may be disclosed during data collection, during training, or even after releasing the trained learning model. Differential privacy (DP) is one of the main approaches proven to ensure strong privacy protection in data analysis. DP protects the users' privacy by adding noise to the original dataset or the learning parameters. Thus, an attacker could not retrieve the sensitive information of an individual in…
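The abstract's core idea, adding calibrated noise so that no individual's record can be recovered, is commonly illustrated with the Laplace mechanism. The sketch below is illustrative only and not from the surveyed paper; the function and dataset names are hypothetical, and it assumes a counting query whose sensitivity is 1 (adding or removing one individual changes the count by at most 1):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release a differentially private answer by adding Laplace noise.

    The noise scale is sensitivity / epsilon: a smaller epsilon gives
    stronger privacy but a noisier answer.
    """
    rng = rng or np.random.default_rng()
    scale = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=scale)

# Hypothetical binary attribute over six individuals; we privately
# release how many have the attribute set.
dataset = [1, 0, 1, 1, 0, 1]
private_count = laplace_mechanism(float(sum(dataset)), sensitivity=1.0, epsilon=0.5)
```

The same pattern generalizes to the learning setting the abstract mentions: instead of a count, the noised quantity is a gradient or a model parameter, with the sensitivity bounded by clipping.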

Cited by 115 publications (42 citation statements)
References 115 publications
“…Both scenarios adopt differential privacy, homomorphic encryption, and secret-sharing solutions. Finally, El Ouadrhiri et al [27] review current techniques to protect user privacy in FL. They classify these techniques in three main groups: first, the techniques (k-anonymity, l-diversity, and t-closeness) that protect user privacy before making a dataset available.…”
Section: Related Work
confidence: 99%
“…The concept of differential privacy was developed as a randomized mechanism to ensure output distribution. The approach focuses on promoting the elements included and removing access to samples engaged in the systems [24]. It enables some focus to prevent access to the type of sample used in the machine learning process.…”
Section: Federated Learning Security and Privacy Challenges
confidence: 99%
“…Differential privacy was proposed by Dwork et al, which provides an information-theoretic security guarantee that the output of a function is insensitive to any particular record in the dataset [13]. Since it provides a strict, quantifiable, and context-independent means of privacy protection, it has been widely used to enhance data privacy in machine learning (ML).…”
Section: Differential Privacy
confidence: 99%
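The guarantee described informally above (output "insensitive to any particular record") has a standard formal statement, due to Dwork et al.; this is the usual textbook definition, not wording from the citing paper: a randomized mechanism $M$ is $\varepsilon$-differentially private if, for all neighboring datasets $D, D'$ differing in a single record and all measurable output sets $S$,

$$\Pr[M(D) \in S] \;\le\; e^{\varepsilon}\,\Pr[M(D') \in S].$$

The parameter $\varepsilon$ quantifies the privacy loss: the smaller it is, the less the presence or absence of any one record can shift the output distribution.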
“…However, it has been shown that sharing original local updates damages users' privacy [7,8]. Concern on privacy issues in FL, many privacy-preserved methods, including secure multi-party computing [9], homomorphic encryption [10], differential privacy [11], and other privacy computing technologies, have been widely studied [12][13][14]. Differential privacy (DP) provides better protection because of its powerful theoretical guarantee and low computational overhead [15].…”
Section: Introduction
confidence: 99%
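The low computational overhead this statement attributes to DP in federated learning comes from the server only needing to clip and noise client updates. The sketch below shows that pattern in the style of DP-FedAvg; it is a minimal illustration under assumed names (`dp_aggregate`, the toy updates), not the exact algorithm of any paper cited here:

```python
import numpy as np

def dp_aggregate(client_updates, clip_norm, noise_multiplier, rng=None):
    """Average client model updates with per-client clipping and Gaussian noise.

    Each update is rescaled to L2 norm <= clip_norm so that any one
    client's influence on the sum is bounded; Gaussian noise calibrated
    to that bound is then added before averaging.
    """
    rng = rng or np.random.default_rng()
    clipped = []
    for u in client_updates:
        norm = np.linalg.norm(u)
        clipped.append(u * min(1.0, clip_norm / max(norm, 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(client_updates)

# Hypothetical round: three clients, a four-parameter model.
updates = [np.array([0.5, -1.0, 0.2, 0.8]),
           np.array([3.0, 2.0, -1.0, 0.5]),
           np.array([-0.2, 0.1, 0.4, -0.6])]
avg = dp_aggregate(updates, clip_norm=1.0, noise_multiplier=0.5)
```

Clipping is what gives the mechanism a bounded sensitivity, which is exactly the quantity the Gaussian noise scale must be calibrated against for the DP guarantee to hold.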