2019 IEEE/CVF International Conference on Computer Vision (ICCV)
DOI: 10.1109/iccv.2019.00494

Scalable Verified Training for Provably Robust Image Classification

Cited by 98 publications (131 citation statements)
References 6 publications
“…These methods offer exactness guarantees but are based on solving NP-hard optimization problems, which can make them intractable even for small networks. Incomplete methods can be divided into bound propagation approaches [Gowal et al. 2019; Müller et al. 2020; Singh et al. 2018, 2019b] and those that generate polynomially-solvable optimization problems [Bunel et al. 2020a; Dathathri et al. 2020; Lyu et al. 2020; Raghunathan et al. 2018; Singh et al. 2019a; Xiang et al. 2018] such as linear programming (LP) or semidefinite programming (SDP) optimization problems. Compared to deterministic certification methods, randomized smoothing [Cohen et al. 2019; Lecuyer et al. 2018; Salman et al. 2019a] is a defence method providing only probabilistic guarantees and incurring significant runtime costs at inference time, with the generalization to arbitrary safety properties still being an open problem.…”
Section: Effectiveness of SBLM and PDDM for Convex Hull Computations
confidence: 99%
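As a concrete illustration of the bound-propagation family mentioned in the excerpt above (of which the interval bound propagation of Gowal et al. 2019 is one instance), the following minimal NumPy sketch pushes an L-infinity box through an affine layer and a ReLU. The toy weights and helper names are illustrative assumptions, not code from any of the cited works.

```python
import numpy as np

def affine_bounds(W, b, lower, upper):
    """Propagate elementwise input bounds through an affine layer y = W x + b.

    Splits W into its positive and negative parts so each output bound is
    attained at a corner of the input box (standard interval arithmetic).
    """
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    new_lower = W_pos @ lower + W_neg @ upper + b
    new_upper = W_pos @ upper + W_neg @ lower + b
    return new_lower, new_upper

def relu_bounds(lower, upper):
    """ReLU is monotone, so bounds pass through elementwise."""
    return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

# Toy example: a 2-layer network and an L-infinity ball of radius eps around x.
rng = np.random.default_rng(0)
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)
x, eps = rng.standard_normal(3), 0.1

l, u = x - eps, x + eps
l, u = relu_bounds(*affine_bounds(W1, b1, l, u))
l, u = affine_bounds(W2, b2, l, u)
print("output lower bounds:", l)
print("output upper bounds:", u)
```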
“…Apart from minimizing the worst-case loss, approaches which minimize an upper bound on the worst-case loss include Wong et al. (2018), Tjeng et al. (2018), and Gowal et al. (2019). Another breed of approaches uses a modified loss function which adds a surrogate adversarial loss as a regularizer, where the surrogate is the cross entropy (Zhang et al., 2019b) (TRADES), maximum-margin cross entropy (Ding et al., 2019) (MMA), or KL divergence (Wang et al., 2019) (MART) between adversarial-sample predictions and natural-sample predictions.…”
Section: Prior Work
confidence: 99%
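To make the surrogate-regularization idea in the excerpt above concrete, here is a minimal, hypothetical sketch of a TRADES-style objective in PyTorch: cross-entropy on natural inputs plus a KL term pulling adversarial predictions toward natural ones. The `model`, `x_adv`, and the trade-off weight `beta` are placeholders, and the inner adversarial-example search is deliberately omitted.

```python
import torch
import torch.nn.functional as F

def trades_style_loss(model, x_natural, x_adv, labels, beta=6.0):
    """Cross-entropy on natural inputs plus a KL regulariser that pulls the
    adversarial prediction toward the natural one (a TRADES-like surrogate)."""
    logits_nat = model(x_natural)
    logits_adv = model(x_adv)
    natural_loss = F.cross_entropy(logits_nat, labels)
    robust_loss = F.kl_div(
        F.log_softmax(logits_adv, dim=1),
        F.softmax(logits_nat, dim=1),
        reduction="batchmean",
    )
    return natural_loss + beta * robust_loss
```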
“…Wong and Kolter (2018) use a convex outer adversarial polytope as an upper bound for the worst-case loss in robust training; here the network is trained by generating adversarial as well as a few non-adversarial examples in the convex polytope of the attack via a linear program. Along the same vein are a mixed-integer-programming-based certified training for piece-wise linear neural networks (Tjeng et al., 2018) and interval bound propagation (Gowal et al., 2019).…”
Section: Introduction
confidence: 99%
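The certified training referenced above couples such bounds to the training loss. A simplified, illustrative sketch (not the authors' implementation): build a pessimistic logit vector from output bounds such as those produced by the interval sketch earlier, taking the lower bound for the true class and upper bounds elsewhere, and feed it to the usual cross-entropy. The example bounds below are made up for illustration.

```python
import numpy as np

def worst_case_logits(lower, upper, true_class):
    """Pessimistic logit vector over the input box: the true class takes its
    lower bound, every other class takes its upper bound."""
    z = upper.copy()
    z[true_class] = lower[true_class]
    return z

def softmax_cross_entropy(logits, true_class):
    shifted = logits - logits.max()
    log_probs = shifted - np.log(np.exp(shifted).sum())
    return -log_probs[true_class]

# Hypothetical output bounds for a 3-class network around one input.
lower = np.array([1.2, -0.5, 0.1])
upper = np.array([2.0, 0.3, 0.9])
z_wc = worst_case_logits(lower, upper, true_class=0)
print("worst-case logits:", z_wc)
print("worst-case cross-entropy:", softmax_cross_entropy(z_wc, 0))
```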
“…These baselines, defined in detail below, are randomized smoothing [5] and adversarial training [4]. Another line of successful work relies on convex relaxations of the adversarial problem [7,6]. Because these methods tend to be heavily model-dependent, we did not find it necessary to adapt them in this exploratory work.…”
Section: Related Work
confidence: 99%
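The randomized-smoothing baseline cited above classifies by majority vote over Gaussian perturbations of the input. The sketch below is a bare-bones illustration only; the `base_classifier`, noise level, and sample count are assumptions, and the statistical certification procedure of Cohen et al. is omitted.

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma=0.25, n_samples=100, rng=None):
    """Majority-vote prediction of a smoothed classifier: sample Gaussian noise
    around x, classify each noisy copy with the base classifier, and return the
    most frequent class (no certified radius is computed here)."""
    rng = rng or np.random.default_rng(0)
    counts = {}
    for _ in range(n_samples):
        noisy = x + sigma * rng.standard_normal(x.shape)
        label = base_classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    return max(counts, key=counts.get)

# Toy base classifier: picks the coordinate with the largest value.
toy_classifier = lambda v: int(np.argmax(v))
print(smoothed_predict(toy_classifier, np.array([0.2, 1.0, -0.3])))
```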
“…obfuscation mechanisms [3]. On the other hand, defenses that have stood the test of time [4] or that offer robustness certificates [5,6,7] have in common that they address the issue formally and defend against any possible adversarial pattern, regardless of which patterns actually pose a threat.…”
Section: Introduction
confidence: 99%