2020
DOI: 10.1609/aaai.v34i04.5798

Regularized Training and Tight Certification for Randomized Smoothed Classifier with Provable Robustness

Abstract: Recently, smoothing deep neural network based classifiers via isotropic Gaussian perturbation has been shown to be an effective and scalable way to provide a state-of-the-art probabilistic robustness guarantee against ℓ2-norm-bounded adversarial perturbations. However, how to train a good base classifier that is both accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed c…
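For context, the following is a minimal sketch of the randomized-smoothing prediction and certification procedure (Cohen et al., 2019) that this line of work builds on. The `base_classifier` function, the sample count `n`, and the plug-in estimate of p_a are illustrative assumptions; a real implementation would use a proper lower confidence bound (e.g. Clopper-Pearson) rather than the plug-in estimate.

```python
# Minimal sketch of randomized-smoothing prediction and L2 certification,
# Cohen et al. (2019) style. Names and defaults are illustrative assumptions.
import numpy as np
from scipy.stats import norm

def smoothed_predict_and_certify(base_classifier, x, sigma=0.25, n=1000, num_classes=10):
    # Sample n Gaussian-perturbed copies of x and count the predicted classes.
    noise = np.random.randn(n, *x.shape) * sigma
    preds = base_classifier(x[None, ...] + noise)            # hard labels, shape (n,)
    counts = np.bincount(preds.astype(int), minlength=num_classes)

    top_class = int(counts.argmax())
    p_a = counts[top_class] / n                               # plug-in estimate of p_a(x)
    if p_a <= 0.5:
        return top_class, 0.0                                 # abstain: no certified radius
    # Certified L2 radius sigma * Phi^{-1}(p_a) (top-class-only form).
    return top_class, sigma * norm.ppf(p_a)
```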

Cited by 8 publications (8 citation statements). References 7 publications.
“…4) Robust Training for Randomized Models [24,65,95,19,71,54,129,43,5,130,128]: This subsection mainly summarizes robust training approaches for randomized models, along with other efforts towards robust randomized models. Data augmentation.…”
Section: F Probabilistic Robustness Verification
confidence: 99%
“…MACER [129] derives a regularization term which directly maximizes the certified robustness. ADRE [43] proposes another regularizer to penalize the misclassified samples and improve the certified robustness of the correctly predicted samples. Adversarial training.…”
Section: F Probabilistic Robustness Verification
confidence: 99%
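The exact ADRE and MACER objectives are not reproduced in this excerpt. The sketch below only illustrates the general pattern such methods share, training the base classifier with a cross-entropy term plus a robustness-promoting penalty on the smoothed prediction; the function names, the margin-style penalty, and all hyperparameters are hypothetical.

```python
# Hypothetical illustration (PyTorch-style) of regularizer-augmented training
# for a smoothed classifier. This is NOT the exact ADRE or MACER objective,
# only the pattern: loss = cross-entropy + lambda * robustness penalty.
import torch
import torch.nn.functional as F

def regularized_loss(model, x, y, sigma=0.25, m=4, lam=1.0):
    # x: (batch, C, H, W). Average logits over m Gaussian-perturbed copies
    # as a soft surrogate for the smoothed classifier.
    noisy = x.repeat(m, 1, 1, 1)
    noisy = noisy + sigma * torch.randn_like(noisy)
    logits = model(noisy).reshape(m, x.size(0), -1).mean(dim=0)
    ce = F.cross_entropy(logits, y)

    # Robustness surrogate: encourage a margin between the true class and the
    # runner-up class of the (soft) smoothed prediction.
    probs = logits.softmax(dim=1)
    true_p = probs.gather(1, y[:, None]).squeeze(1)
    masked = probs.clone()
    masked[torch.arange(x.size(0)), y] = 0.0
    runner_up = masked.max(dim=1).values
    margin_penalty = F.relu(runner_up - true_p + 0.5).mean()

    return ce + lam * margin_penalty
```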
“…The certificate then determines the radius in which p_a(x′) will remain above 0.5: this guarantees that a will remain the top class, regardless of the other logits. While some works (Lecuyer et al., 2019; Feng et al., 2020) independently estimate each smoothed logit, this incurs additional estimation error as the number of classes increases. In this work, we assume that only estimates for the top-class smoothed logit p_a(x) and its gradient ∇_x p_a(x) are available (although we briefly discuss the case with more estimated logits in Section 3.2).…”
Section: Preliminaries, Assumptions and Notation
confidence: 99%
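To make the estimation step concrete, here is a minimal sketch of Monte Carlo estimation of the top-class smoothed probability p_a(x) and its input gradient under Gaussian noise, using the standard score-function identity ∇_x p_a(x) = E[δ · 1{f(x+δ)=a}] / σ². The base classifier `f` (returning hard labels) and the class index `a` are assumed given; names and defaults are illustrative.

```python
# Monte Carlo estimate of the top-class smoothed probability p_a(x) and its
# gradient, via the Gaussian score-function identity. Sketch only.
import numpy as np

def estimate_p_and_grad(f, x, a, sigma=0.25, n=10000):
    delta = np.random.randn(n, *x.shape) * sigma
    hits = (f(x[None, ...] + delta) == a).astype(float)       # 1{f(x+delta)=a}, shape (n,)
    p_a = hits.mean()
    # Broadcast the indicators over the input dimensions, average, rescale.
    grad = (hits.reshape(n, *([1] * x.ndim)) * delta).mean(axis=0) / sigma**2
    return p_a, grad
```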
“…If estimates and gradients are available for multiple classes, it would then be possible to achieve an even larger certificate, by setting the lower bound of the top logit equal to the upper bounds of each of the other logits. Note, however, that unlike first-order smoothing works (Lecuyer et al., 2019; Feng et al., 2020) which use this approach, it is not sufficient to compare against just the "runner-up" class, because other logits may have less restrictive upper bounds due to having larger gradients. As discussed above, gradient norm estimation can be computationally expensive, so gradient estimation for many classes may not be feasible.…”
Section: Upper-bound and Multi-class Certificates
confidence: 99%