2021
DOI: 10.48550/arxiv.2107.08461
Preprint
Differentially Private Bayesian Neural Networks on Accuracy, Privacy and Reliability

Abstract: A Bayesian neural network (BNN) allows for uncertainty quantification in prediction, offering an advantage over regular neural networks that has not been explored in the differential privacy (DP) framework. We fill this important gap by leveraging recent developments in Bayesian deep learning and privacy accounting to offer a more precise analysis of the trade-off between privacy and accuracy in BNNs. We propose three DP-BNNs that characterize the weight uncertainty for the same network architecture in distinct wa…
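The privacy mechanism underlying DP deep learning can be illustrated with a minimal, self-contained sketch of per-example gradient clipping plus Gaussian noise, the core of DP-SGD. The function name, clipping norm, and noise multiplier below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def dp_sgd_step(weights, per_example_grads, lr=0.1, clip_norm=1.0,
                noise_mult=1.0, rng=None):
    """One DP-SGD step: clip each example's gradient to L2 norm `clip_norm`,
    sum, add Gaussian noise scaled by `noise_mult * clip_norm`, then average."""
    rng = rng or np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    n = len(clipped)
    noisy_mean = (np.sum(clipped, axis=0)
                  + rng.normal(0.0, noise_mult * clip_norm, size=weights.shape)) / n
    return weights - lr * noisy_mean

# toy usage: two example gradients, one of which exceeds the clip norm
w = np.zeros(3)
grads = [np.array([3.0, 0.0, 0.0]), np.array([0.1, 0.1, 0.0])]
w_new = dp_sgd_step(w, grads)
```

Clipping bounds each example's influence on the update, which is what makes the added Gaussian noise yield a formal DP guarantee via privacy accounting.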

Cited by 3 publications (5 citation statements)
References 42 publications (50 reference statements)
“…Examples of Bayesian neural network papers from major journals and conferences are reviewed in Table 4. [80], [91]–[97]…”
Section: Applied Soft Computing
Confidence: 99%
“…The algorithm, while keeping the time spent on robust training almost equal to that of non-robust training, also produces robust models. Instead of perturbing the dataset, we apply the Snapshot Ensemble [Huang et al, 2017a; Smith, 2017; Loshchilov and Hutter, 2016] along the training process to store multiple historical weights, so as to defend against adversarial attacks such as FGSM or PGD, in a Bayesian Neural Network [Blundell et al, 2015; Kingma et al, 2015; Ru et al, 2019; Zhang et al, 2021] manner. The proposed method produces results as fast as the non-robust networks, with only a few seconds' difference when trained on the CIFAR-10 [Krizhevsky et al, 2009] and MNIST [LeCun, 1998] datasets.…”
Section: Contributions
Confidence: 99%
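The snapshot-ensemble idea quoted above — saving multiple historical weight vectors during a single training run and averaging their predictions at test time — can be sketched as follows. The quadratic toy loss, cosine-cyclic learning-rate schedule, and snapshot interval are assumptions for illustration only, not the cited paper's setup.

```python
import numpy as np

def train_with_snapshots(w0, grad_fn, steps=30, cycle=10, base_lr=0.5):
    """Plain gradient descent with a cosine-cyclic learning rate;
    a weight snapshot is stored at the end of each cycle."""
    w, snapshots = np.asarray(w0, dtype=float), []
    for t in range(steps):
        lr = 0.5 * base_lr * (1 + np.cos(np.pi * (t % cycle) / cycle))
        w = w - lr * grad_fn(w)
        if (t + 1) % cycle == 0:
            snapshots.append(w.copy())  # end-of-cycle snapshot
    return snapshots

def ensemble_predict(snapshots, predict_fn, x):
    """Average the predictions of all stored snapshots."""
    return np.mean([predict_fn(w, x) for w in snapshots], axis=0)

# toy usage: minimise ||w - 1||^2, then predict with a linear model
snaps = train_with_snapshots([0.0, 0.0], lambda w: 2 * (w - 1.0))
pred = ensemble_predict(snaps, lambda w, x: w @ x, np.array([1.0, 1.0]))
```

The cyclic schedule lets the weights settle into (possibly different) minima before each snapshot, so the ensemble is obtained at no extra training cost — the property the quoted passage relies on.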
“…Differentially private posterior sampling has recently gained attention, with the methods falling into three categories: DP Metropolis-Hastings (Heikkilä et al, 2019; Yıldırım & Ermiş, 2019), DP gradient-based MCMC (Li et al, 2019; Zhang et al, 2021; Räisä et al, 2021), and DP exact posterior sampling (Dimitrakakis et al, 2014; Wang et al, 2015; Foulds et al, 2016; Zhang et al, 2016).

DP Metropolis-Hastings. Differentially private Metropolis-Hastings has been developed in Heikkilä et al (2019) and Yıldırım & Ermiş (2019). Heikkilä et al (2019) give an algorithm based on the Barker acceptance test, which can be used with a minibatch of data.…”
Section: Related Work
Confidence: 99%
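The Barker acceptance test the passage mentions replaces the usual Metropolis rule min(1, r) with the probability r/(1 + r), where r is the target ratio. As a non-private illustration of the test itself (the DP version in Heikkilä et al adds noise calibration not shown here), a minimal random-walk sampler with Barker acceptance might look like:

```python
import numpy as np

def barker_mh(log_target, x0=0.0, n_steps=5000, step=1.0, seed=0):
    """Random-walk MH with the Barker acceptance test:
    accept x' with probability p(x') / (p(x) + p(x'))."""
    rng = np.random.default_rng(seed)
    x, chain = x0, []
    for _ in range(n_steps):
        x_prop = x + step * rng.normal()
        # Barker rule as a logistic function of the log-ratio
        log_ratio = log_target(x_prop) - log_target(x)
        accept_prob = 1.0 / (1.0 + np.exp(-log_ratio))
        if rng.uniform() < accept_prob:
            x = x_prop
        chain.append(x)
    return np.array(chain)

# toy target: standard normal, log p(t) = -t^2/2 up to a constant
chain = barker_mh(lambda t: -0.5 * t * t)
```

The Barker rule accepts less often than Metropolis but its logistic form tolerates noisy log-ratios, which is why it is attractive for minibatch and private variants.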
“…DP gradient-based MCMC. Another line of work studies (stochastic) gradient-based MCMC methods (Li et al, 2019; Zhang et al, 2021; Räisä et al, 2021). Differentially private stochastic gradient MCMC (SGMCMC) methods, including Stochastic Gradient Langevin Dynamics (SGLD) (Welling & Teh, 2011) and other variants, are shown to satisfy differential privacy guarantees by default.…”
Section: Related Work
Confidence: 99%
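The point the passage makes — that SGLD's update already injects Gaussian noise, which is why gradient-based MCMC can be made differentially private once gradients are bounded — can be illustrated on a toy 1-D Gaussian posterior. The target, step size, and clip threshold below are illustrative assumptions, not a calibrated DP mechanism.

```python
import numpy as np

def dp_sgld(grad_log_post, theta0=0.0, eta=0.05, clip=5.0, n_steps=2000, seed=0):
    """Langevin dynamics: theta += (eta/2) * grad + N(0, eta).
    The gradient is clipped, the bounding step a DP guarantee requires."""
    rng = np.random.default_rng(seed)
    theta, samples = theta0, []
    for _ in range(n_steps):
        g = np.clip(grad_log_post(theta), -clip, clip)
        theta = theta + 0.5 * eta * g + rng.normal(0.0, np.sqrt(eta))
        samples.append(theta)
    return np.array(samples)

# toy target: posterior N(2, 1), so grad log p(theta) = -(theta - 2)
samples = dp_sgld(lambda th: -(th - 2.0))
posterior_mean = samples[1000:].mean()  # discard burn-in
```

Because the N(0, eta) injection is intrinsic to the sampler, bounding each gradient's sensitivity is essentially all that separates this update from a DP-SGD step — the "private by default" observation in the quoted work.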