2017 IEEE International Conference on Data Mining (ICDM)
DOI: 10.1109/icdm.2017.48

Adaptive Laplace Mechanism: Differential Privacy Preservation in Deep Learning

Abstract: In this paper, we focus on developing a novel mechanism to preserve differential privacy in deep neural networks, such that: (1) The privacy budget consumption is totally independent of the number of training steps; (2) It has the ability to adaptively inject noise into features based on the contribution of each to the output; and (3) It could be applied in a variety of different deep neural networks. To achieve this, we figure out a way to perturb affine transformations of neurons, and loss functions used in …

Cited by 153 publications (97 citation statements)
References 27 publications (98 reference statements)
“…There are several common noise perturbation mechanisms for differential privacy that mask the original datasets or intermediate results during the training process of models: the Laplace mechanism, the exponential mechanism, and the Gaussian mechanism. Phan et al (2017) developed a novel mechanism that injects Laplace noise into the computation of Layer-Wise Relevance Propagation (LRP) to preserve differential privacy in deep learning. Chaudhuri et al (2011, 2013) adopted the exponential mechanism as a privacy-preserving tuning method by training classifiers with different parameters on disjoint subsets of the data and then randomizing the selection of which classifier to release.…”
Section: Differential Privacy
confidence: 99%
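The Laplace mechanism named in the statement above can be sketched in a few lines. This is a minimal illustration, not code from the cited paper: a query result is perturbed with Laplace noise whose scale is the query's L1 sensitivity divided by the privacy budget ε, which is the standard construction for ε-differential privacy.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng=None):
    """Add Laplace(0, sensitivity / epsilon) noise to a query result.

    Satisfies epsilon-differential privacy for a query whose
    L1 sensitivity is `sensitivity`.
    """
    rng = np.random.default_rng() if rng is None else rng
    scale = sensitivity / epsilon
    return value + rng.laplace(loc=0.0, scale=scale, size=np.shape(value))

# Example: privatize a count query (sensitivity 1) with epsilon = 0.5.
noisy_count = laplace_mechanism(42, sensitivity=1.0, epsilon=0.5)
```

Note the trade-off the statement alludes to: a smaller ε gives stronger privacy but a larger noise scale, which is why adaptive schemes concentrate the budget on the most relevant features.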
“…Their work was based on the prediction of human behavior in health social networks. [8] proposed an adaptive Laplace mechanism by combining differential privacy and layer-wise relevance propagation (LRP). The relevance obtained through the LRP process was altered with calculated noise to preserve privacy in the training model.…”
Section: Review Of Existing Privacy Preserving Models
confidence: 99%
“…The neuron contribution ratio is originally adapted from Layer-wise Relevance Propagation (LRP): an approach which has been proposed and adopted in literature [8,13,19,20]. The contribution ratio propagation involves a forward flow process that produces F_θ at the output layer and a backward flow process that feeds back F_θ as C_rz into the network in a reversed manner, as presented in [20].…”
Section: Neuron Contribution Ratio Propagation
confidence: 99%
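The forward/backward flow described above can be illustrated with a single LRP step through a linear layer. This is a hedged sketch under standard LRP conventions, not the cited papers' exact rule: the function name `lrp_linear` and the epsilon stabilizer are assumptions. Output relevance is redistributed to inputs in proportion to each input's contribution z_ij = x_i * W_ij, so total relevance is conserved.

```python
import numpy as np

def lrp_linear(x, W, R_out, eps=1e-9):
    """One LRP backward step through a linear layer y = x @ W.

    Redistributes the output relevance R_out to the inputs in
    proportion to each input's contribution z_ij = x_i * W_ij.
    """
    z = x[:, None] * W           # per-connection contributions z_ij
    denom = z.sum(axis=0) + eps  # total contribution to each output
    return (z / denom) @ R_out   # relevance is conserved per output

# Toy backward pass: two inputs feeding one output neuron.
x = np.array([1.0, 3.0])
W = np.array([[0.5], [0.5]])
R_in = lrp_linear(x, W, R_out=np.array([1.0]))
# The relevance splits 0.25 / 0.75, matching the inputs' contributions.
```

In the adaptive Laplace mechanism, it is these relevance scores that determine where noise is injected: features with low relevance can absorb more noise at less cost to accuracy.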
“…Consider any belief distribution b. Let the posterior distributions b_0[x_{i+S} | y] and b_i[x_{i+S} | y] for some fixed i, S and y be defined in (9) and (15). From (19), ε-Bayesian differential privacy implies that for every z_{i+S},…”
Section: Theorem 1 (Restated)
confidence: 99%