2018
DOI: 10.48550/arxiv.1801.09344
Preprint
Certified Defenses against Adversarial Examples


Cited by 146 publications (182 citation statements)
References 34 publications
“…Along with a prediction on the test point, these defenses output a certified radius r such that for any ||δ|| 2 < r, the model continues to have the same prediction. Such techniques include convex polytope [52], recursive propagation [16], and linear relaxation [42,59]. These methods provide a lower bound on the perturbation required to change the model's prediction on a target point.…”
Section: Related Work
confidence: 99%
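The certified-radius guarantee described above can be made concrete in the one case where it is exact: a linear classifier. The sketch below is a hypothetical toy example (the weights and the helper name are not from any cited paper); for f(x) = Wx + b, the largest L2 radius with an unchanged prediction is the smallest score gap divided by the norm of the corresponding weight-row difference.

```python
import numpy as np

# Hypothetical toy example: exact certified L2 radius for a linear classifier.
# For f(x) = Wx + b, the prediction at x is unchanged for any ||delta||_2 < r,
# where r is the minimum over runner-up classes j of
# (score_top - score_j) / ||W_top - W_j||_2.
def certified_radius_linear(W, b, x):
    scores = W @ x + b
    top = int(np.argmax(scores))
    radii = []
    for j in range(len(scores)):
        if j == top:
            continue
        gap = scores[top] - scores[j]
        denom = np.linalg.norm(W[top] - W[j])
        radii.append(gap / denom)
    return top, min(radii)

W = np.array([[1.0, 0.0], [0.0, 1.0]])
b = np.zeros(2)
x = np.array([2.0, 1.0])
pred, r = certified_radius_linear(W, b, x)
# Here the gap is 1 and ||W_0 - W_1||_2 = sqrt(2), so r = 1/sqrt(2).
```

For deep networks no such closed form exists, which is why the cited methods fall back to convex relaxations that lower-bound this radius.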
“…Unfortunately, a large body of work in this direction has fallen into the cycle where new empirical defense techniques are proposed, followed by new adaptive attacks breaking these defenses [2,50]. Therefore, significant efforts have been dedicated to developing methods that are certifiably robust [16,42,52] which provide provable robustness guarantees. Most promising among these certified defenses are randomized smoothing (RS) based certified defenses [8,31,32] which are scalable to deep neural networks (DNNs) and high-dimensional datasets.…”
Section: Introduction
confidence: 99%
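The randomized-smoothing approach mentioned above admits a compact sketch. This is a hedged, simplified illustration in the spirit of that line of work, not a faithful implementation: `base_classifier` is a stand-in model, and real certification replaces the empirical top-class frequency with a binomial confidence lower bound before computing the radius.

```python
import numpy as np
from statistics import NormalDist

# Sketch of randomized-smoothing certification: classify many Gaussian-noised
# copies of x, estimate the top-class probability p_a, and certify an L2
# radius of sigma * Phi^{-1}(p_a) when p_a > 1/2 (otherwise abstain).
def smoothed_certify(base_classifier, x, sigma=0.25, n=1000, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, sigma, size=(n,) + x.shape)
    preds = np.array([base_classifier(x + eps) for eps in noise])
    counts = np.bincount(preds)
    top = int(np.argmax(counts))
    # Clamp below 1 so the inverse CDF is defined; real implementations use
    # a Clopper-Pearson lower bound on p_a instead of this empirical estimate.
    p_a = min(counts[top] / n, 1.0 - 1.0 / n)
    if p_a <= 0.5:
        return None, 0.0  # abstain
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top, radius

# Toy base classifier (hypothetical): sign of the first coordinate.
clf = lambda z: int(z[0] > 0)
label, r = smoothed_certify(clf, np.array([1.0, 0.0]))
```

The appeal noted in the excerpt is that this procedure only queries the base model as a black box, so it scales to arbitrary DNNs, at the cost of a probabilistic rather than deterministic guarantee.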
“…for instance, find an upper bound for the adversarial loss by considering the polytope generated by adversarial examples with bounded norm and minimizing the loss over a convex polytope that contains it. An upper bound on the adversarial loss is also computed in Raghunathan et al. (2018a) by instead solving a semidefinite program. Other, more scalable and effective methods based on minimizing an upper bound of the adversarial loss have also been introduced (Balunovic and Vechev, 2019; Dvijotham et al., 2018a; Zhang et al., 2019).…”
Section: Introduction
confidence: 99%
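The common structure behind the methods listed in this excerpt can be written as one inequality. The notation below is a generic reconstruction, not taken from any single cited paper: the inner maximization over perturbations is intractable, so certified defenses train against a tractable upper bound $U_\theta$ (semidefinite, linear-programming, or relaxation based).

```latex
\min_\theta \; \mathbb{E}_{(x,y)}\!\left[\max_{\|\delta\| \le \epsilon} L\big(f_\theta(x+\delta),\, y\big)\right]
\;\le\;
\min_\theta \; \mathbb{E}_{(x,y)}\!\left[\, U_\theta(x, y, \epsilon) \,\right]
```

Minimizing the right-hand side guarantees the true adversarial loss is at least as small, which is exactly why the tightness of $U_\theta$ (the complaint in the next excerpt) matters.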
“…Since the direction of the gradient is then not precise, the optimization process may be problematic. Even though a couple of these approaches do provide an upper bound with a closed-form solution (Raghunathan et al., 2018a), they only work in specific cases: the method of Raghunathan et al. (2018a) applies only to neural networks with two layers, and the other upper bound is often loose and holds only when the perturbations lie in the L ∞ norm-bounded ball. Given this state of affairs, we believe we need a new, principled approach to optimize an upper bound of the adversarial loss.…”
Section: Introduction
confidence: 99%
“…Provable defenses are theoretically guaranteed to counter adversarial attacks with a certain accuracy, depending on the attack class. Semidefinite programming-based defenses [15,16] output an optimizable certificate that encourages robustness against all attacks. The defense proposed in [17] considers a convex outer approximation that covers all the perturbations that can possibly be generated from that space and minimizes the worst-case loss over this region through linear programming.…”
confidence: 99%
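The convex-outer-approximation idea in the last excerpt can be illustrated with its simplest instance: interval (box) bound propagation, a looser cousin of the LP-based relaxation. This is a hedged sketch with hand-set hypothetical weights, not the cited method itself: it propagates elementwise lower/upper bounds through a small ReLU network and certifies the prediction if the worst-case logit of the true class still beats every other class.

```python
import numpy as np

# Interval bound propagation through affine + ReLU layers. For an affine
# layer, the positive part of W maps lower bounds to lower bounds and the
# negative part maps upper bounds to lower bounds (and vice versa).
def interval_bounds(layers, x, eps):
    lo, hi = x - eps, x + eps
    for i, (W, b) in enumerate(layers):
        Wp, Wn = np.clip(W, 0, None), np.clip(W, None, 0)
        new_lo = Wp @ lo + Wn @ hi + b
        new_hi = Wp @ hi + Wn @ lo + b
        if i < len(layers) - 1:  # ReLU on hidden layers only
            new_lo, new_hi = np.maximum(new_lo, 0), np.maximum(new_hi, 0)
        lo, hi = new_lo, new_hi
    return lo, hi

# Certified iff the lower bound of the true-class logit exceeds the upper
# bound of every competing logit for all perturbations in the L-inf ball.
def certified(layers, x, eps, y):
    lo, hi = interval_bounds(layers, x, eps)
    return lo[y] > np.delete(hi, y).max()

# Tiny hand-set two-layer network (hypothetical weights).
layers = [(np.array([[1.0, 0.0], [0.0, 1.0]]), np.zeros(2)),
          (np.array([[2.0, 0.0], [0.0, 1.0]]), np.zeros(2))]
x = np.array([1.0, 0.5])
# Small eps certifies class 0; a large eps makes the bounds too loose.
```

The LP method of [17] tightens these boxes with linear ReLU relaxations, trading the speed of interval arithmetic for less conservative certificates.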