2018
DOI: 10.48550/arxiv.1802.03471
Preprint

Certified Robustness to Adversarial Examples with Differential Privacy

Cited by 31 publications (101 citation statements)
References 0 publications
“…There is another line of research that focuses on improving robustness to perturbations/adversarial attacks by noise injection. Among them, random self-ensemble [6,7] adds Gaussian noise to hidden states during both training and testing time. In training time, it works as a regularizer to prevent overfitting; in testing time, the random noise is also helpful, which will be explained in this paper.…”
Section: Related Work
confidence: 99%
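The noise-injection defense quoted above (Gaussian noise added to hidden states at both training and test time, with test-time predictions averaged over several noisy passes) can be sketched as follows. This is a minimal illustration, not the cited papers' implementation; the layer shapes, noise scale `sigma`, and function names are assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_hidden_layer(x, W, b, sigma=0.1, rng=rng):
    """One dense layer with Gaussian noise injected into the
    pre-activation, in the spirit of random self-ensemble.
    The same noise injection is used at training and test time."""
    h = x @ W + b
    h = h + rng.normal(0.0, sigma, size=h.shape)  # noise injection
    return np.maximum(h, 0.0)  # ReLU

def ensemble_forward(x, W, b, n_samples=10):
    """Average several noisy forward passes at test time,
    approximating an ensemble of randomly perturbed networks."""
    outs = [noisy_hidden_layer(x, W, b) for _ in range(n_samples)]
    return np.mean(outs, axis=0)
```

At training time the injected noise acts like a regularizer; at test time the averaging smooths the decision over nearby perturbed inputs.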
“…It has been shown that injecting small Gaussian noise can be viewed as a regularization in neural networks [4,5]. Furthermore, [6,7] recently showed that adding a slightly larger noise in one or all residual blocks can improve the adversarial robustness of neural networks. We will provide the stability analysis of (3) in Section 3.3, which provides a theoretical explanation towards the robustness of Neural SDE.…”
Section: Modeling Randomness in Neural Networks
confidence: 99%
“…While our framework does not cover this SDP relaxation, it is not clear to us how to extend the SDP relaxed verifier to general nonlinearities, for example max-pooling, which can be done in our framework on the other hand. Other verifiers have been proposed to certify via an intermediary step of bounding the local Lipschitz constant [Hein and Andriushchenko, 2017, Weng et al., 2018, Raghunathan et al., 2018a, Zhang et al., 2019], and others have used randomized smoothing to certify with high probability [Lecuyer et al., 2018, Li et al., 2018, Cohen et al., 2019, Salman et al., 2019]. These are outside the scope of our framework.…”
Section: Preliminaries and Related Work
confidence: 99%
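The randomized-smoothing certification mentioned above can be sketched with a small Monte Carlo routine in the style of Cohen et al.: classify many Gaussian-perturbed copies of the input, take the majority class, and derive an L2 radius from the top-class probability. The base classifier, `sigma`, and sample count below are illustrative assumptions; a real certificate would use a lower confidence bound on the top-class probability rather than the raw empirical estimate.

```python
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)

def smoothed_predict(f, x, sigma=0.25, n=1000, rng=rng):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P[f(x + N(0, sigma^2 I)) = c],
    plus a Cohen-style certified L2 radius sigma * Phi^{-1}(p_top)."""
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    votes = np.bincount([f(x + eps) for eps in noise], minlength=2)
    top = int(np.argmax(votes))
    p_top = votes[top] / n  # empirical top-class probability
    # Radius is only meaningful when the top class wins a clear majority;
    # clip p_top away from 1.0 so the inverse CDF stays finite.
    radius = sigma * NormalDist().inv_cdf(min(p_top, 1 - 1e-9)) if p_top > 0.5 else 0.0
    return top, radius

# Toy base classifier: predicts 1 iff the mean coordinate is positive.
f = lambda z: int(z.mean() > 0)
cls, r = smoothed_predict(f, np.full(10, 0.5))
```

The certificate is probabilistic: it holds with high probability over the sampled noise, which is the "certify with high probability" caveat the quoted passage draws.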
“…A similar approach to [22] is [28], in which the authors use robust optimization to provide lower bounds on the norm of adversarial perturbations on the training data. In [16], the authors use techniques from Differential Privacy [7] to augment the training procedure of the classifier and improve robustness to adversarial inputs. Another approach using randomization is [17], in which the authors add i.i.d.…”
Section: Comparison to Related Work
confidence: 99%