2020
DOI: 10.1109/mci.2020.2976185
A Survey on Differentially Private Machine Learning [Review Article]

Abstract: Recent years have witnessed remarkable successes of machine learning in various applications. However, machine learning models suffer from a potential risk of leaking private information contained in training data, which has attracted increasing research attention. As one of the mainstream privacy-preserving techniques, differential privacy provides a promising way to prevent the leakage of individual-level privacy in training data while preserving the quality of training data for model building. This work …

Cited by 93 publications (37 citation statements). References 68 publications (72 reference statements).
“…, when λ is the noise parameter [21]. This mechanism mainly consists of two steps: perturbation statistics and correction [7,14].…”
Section: B. Differential Privacy (DP)
Confidence: 99%
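The excerpt above refers to a noise parameter λ without spelling out how it is set. A minimal sketch of the classic Laplace mechanism, assuming the standard calibration λ = sensitivity / ε (the excerpt's own calibration may differ):

```python
import numpy as np

def laplace_mechanism(true_value, sensitivity, epsilon, rng=None):
    """Release true_value perturbed with Laplace noise of scale
    lambda = sensitivity / epsilon (the standard (epsilon, 0)-DP calibration)."""
    rng = np.random.default_rng() if rng is None else rng
    lam = sensitivity / epsilon
    return true_value + rng.laplace(loc=0.0, scale=lam)

# Example: a counting query has sensitivity 1, since adding or removing
# one individual changes the count by at most 1.
rng = np.random.default_rng(0)
noisy_count = laplace_mechanism(100, sensitivity=1.0, epsilon=0.5, rng=rng)
```

Smaller ε forces a larger noise scale λ, trading accuracy for stronger privacy.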
See 1 more Smart Citation
“…, when λ is the noise parameter [21]. This mechanism mainly consists of two steps: perturbation statistics and correction [7,14].…”
Section: B Differential Privacy (Dp)mentioning
confidence: 99%
“…More recently, due to its ability to provide a privacy guarantee [7], DP has naturally become a preferred tool for data privacy protection in the training process of Machine Learning (ML) models [8][9][10]. DP can provide privacy protection for the classic dimension-reduction model, i.e., Matrix Factorization (MF) [11,12], and its application fields have been extended to recommender systems and social networks [13,14].…”
Section: Introduction
Confidence: 99%
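One common way to make matrix factorization differentially private, as mentioned above, is gradient perturbation: clip each per-example gradient and add noise before the update. A minimal sketch under illustrative assumptions (the function name, clipping norm, and noise multiplier `sigma` are hypothetical, and `sigma` is not calibrated to a specific privacy budget):

```python
import numpy as np

def dp_mf_sgd(R, mask, k=4, epochs=30, lr=0.01, clip=1.0, sigma=0.5, seed=0):
    """Factorize R ~= U @ V.T by SGD over observed entries (mask == True),
    with per-step L2 gradient clipping and Gaussian noise added to each
    clipped gradient -- the gradient-perturbation recipe for DP training."""
    rng = np.random.default_rng(seed)
    n, m = R.shape
    U = 0.1 * rng.standard_normal((n, k))
    V = 0.1 * rng.standard_normal((m, k))
    for _ in range(epochs):
        for i, j in zip(*np.nonzero(mask)):
            err = R[i, j] - U[i] @ V[j]
            gU, gV = -err * V[j], -err * U[i]
            for g in (gU, gV):
                # Scale the gradient down so its L2 norm is at most `clip`.
                g *= min(1.0, clip / (np.linalg.norm(g) + 1e-12))
            # Gaussian noise proportional to the clipping norm.
            gU = gU + sigma * clip * rng.standard_normal(k)
            gV = gV + sigma * clip * rng.standard_normal(k)
            U[i] -= lr * gU
            V[j] -= lr * gV
    return U, V
```

Clipping bounds each entry's influence on the update, which is what makes the added Gaussian noise yield a quantifiable privacy guarantee.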
“…Serban et al elaborated on adversarial examples, including their construction, defense strategies, and transfer capabilities [26]. As differential privacy is one of the most effective measure for mitigating privacy breaches, Gong et al published a comprehensive review on differentially-private machine learning [27]. A number of surveys have also been conducted on DDMs, particularly CNNs and RNNs, see e.g., [28], [29], [30], [31], [32].…”
Section: Introductionmentioning
confidence: 99%
“…In order to improve the communication efficiency of FL-DP, Sonne and Rin [10] study FL over MAC with DP (FL-MAC-DP). However, traditional DP has some limitations [29] in calculating the cumulative privacy loss. Rényi differential privacy (RDP) was therefore proposed in [30]; as a natural relaxation of traditional DP, it is well suited to expressing privacy-protection algorithms and the composition of heterogeneous mechanisms.…”
Section: Introduction
Confidence: 99%
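The point of RDP in the excerpt above is that cumulative privacy loss composes by simple addition in the Rényi order α, and the total can then be converted back to an (ε, δ)-DP guarantee. A minimal sketch for the Gaussian mechanism with sensitivity 1 (the function names are illustrative; the formulas are the standard RDP bound ε(α) = α / (2σ²) per step and the standard RDP-to-DP conversion):

```python
import math

def rdp_gaussian(alpha, sigma, steps=1):
    """RDP of `steps` compositions of the Gaussian mechanism with noise
    multiplier sigma and sensitivity 1: composition simply adds the
    per-step bound alpha / (2 * sigma**2)."""
    return steps * alpha / (2.0 * sigma ** 2)

def rdp_to_dp(alpha, rdp_eps, delta):
    """(alpha, rdp_eps)-RDP implies
    (rdp_eps + log(1/delta) / (alpha - 1), delta)-DP."""
    return rdp_eps + math.log(1.0 / delta) / (alpha - 1.0)

def best_epsilon(sigma, steps, delta, alphas=range(2, 64)):
    """Search over Renyi orders for the tightest (epsilon, delta) bound."""
    return min(rdp_to_dp(a, rdp_gaussian(a, sigma, steps), delta)
               for a in alphas)
```

Because the per-step RDP bounds add linearly, tracking the cumulative loss over a long training run reduces to one multiplication, which is the bookkeeping advantage over composing (ε, δ) guarantees directly.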