2021
DOI: 10.1561/2400000035

Algorithms for Verifying Deep Neural Networks

Abstract: Deep neural networks are widely used for nonlinear function approximation with applications ranging from computer vision to control. Although these networks involve the composition of simple arithmetic operations, it can be very challenging to verify whether a particular network satisfies certain input-output properties. This article surveys methods that have emerged recently for soundly verifying such properties. These methods borrow insights from reachability analysis, optimization, and search. We discuss fu…
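The reachability-analysis family surveyed in the article can be illustrated with interval bound propagation, one of the simplest sound verification techniques: an input box is pushed through the network layer by layer, yielding guaranteed (though possibly loose) output bounds. A minimal NumPy sketch, assuming a fully connected ReLU network given as weight/bias lists (the `ibp` helper and the toy two-layer network are illustrative, not taken from the article):

```python
import numpy as np

def ibp(weights, biases, lo, hi):
    """Sound interval bound propagation through a fully connected ReLU net.

    Propagates the input box [lo, hi] layer by layer; ReLU is applied to
    every layer except the last (the output layer is affine).
    """
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)   # split by weight sign
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:                          # hidden-layer ReLU
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Toy network computing |x| = relu(x) + relu(-x), with input x in [-1, 1].
weights = [np.array([[1.0], [-1.0]]), np.array([[1.0, 1.0]])]
biases = [np.zeros(2), np.zeros(1)]
lo, hi = ibp(weights, biases, np.array([-1.0]), np.array([1.0]))
# lo = [0.], hi = [2.]: sound but loose, since the true range of |x| here is [0, 1].
```

The over-approximation in the last line is the characteristic trade-off of reachability methods: the bounds are always sound, and the tighter methods the survey covers (e.g. symbolic or optimization-based relaxations) work to shrink exactly this gap.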

Cited by 178 publications
(168 citation statements)
References 19 publications
“…As the interest in neural networks has surged, so has research in their verification. We review some notable results here, although recent surveys may provide a more thorough overview [15,28]. Verification approaches for NNs can broadly be characterized into geometric techniques, SMT methods, and MILP approaches.…”
Section: Related Work
confidence: 99%
“…We compare the performance of RsO on robustness verification against the state-of-the-art methods of Wong and Kolter (2018), Singh et al. (2018), Singh et al. (2019b), Singh et al. (2019c), and the exact approach (Xiang et al. 2017), which computes the exact reachable set (implementation: Liu et al. 2019). RsU is compared with the success rate of FGSM attacks (Goodfellow et al. 2015; Szegedy et al. 2014) and PGD attacks (Madry et al. 2018).…”
Section: Applications and Experiments
confidence: 99%
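The FGSM attack referenced in the excerpt above is a single gradient-sign step on the loss. As a hedged sketch (not the cited authors' code), here it is applied to binary logistic regression, where the input gradient of the cross-entropy loss has a closed form; the `fgsm` helper and the toy weights are illustrative:

```python
import numpy as np

def fgsm(w, b, x, y, eps):
    """Fast Gradient Sign Method (Goodfellow et al. 2015) for binary
    logistic regression: perturb x by eps in the direction that most
    increases the cross-entropy loss, x' = x + eps * sign(dL/dx)."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))  # predicted P(y = 1)
    grad_x = (p - y) * w                    # closed-form input gradient
    return x + eps * np.sign(grad_x)

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([1.0, 1.0]), 1.0
x_adv = fgsm(w, b, x, y, eps=0.3)
# The perturbation lowers the logit for the true class y = 1.
```

Attack success rates from such one-step (FGSM) or iterated (PGD) perturbations give only an upper bound on robustness, which is why the excerpt contrasts them with verification methods that certify robustness soundly.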
“…A line of work relevant to ours is formal verification of neural networks (Liu et al. 2019), which provides a robustness guarantee that no input perturbation within a neighborhood of the natural example can fool the classifier's top-1 prediction. In (Katz et al. 2017; Ehlers 2017; Bunel et al. 2018; Dutta et al. 2017), satisfiability modulo theory (SMT) or mixed-integer programming (MIP) based methods provided exact robustness certificates with respect to the input perturbation strength.…”
Section: Related Work
confidence: 99%
“…Although deep neural networks (DNNs) have achieved human-level performance in many learning tasks, intricate adversarial examples have been shown to exist in DNNs (Szegedy et al. 2014; Moosavi-Dezfooli, Fawzi, and Frossard 2016; Chen et al. 2018; Zhao et al. 2019a). An ever-increasing amount of research effort has been devoted to implementing adversarial attacks in various applications (Athalye, Carlini, and Wagner 2018; Carlini and Wagner 2017; Papernot et al. 2016a; Song et al. 2018; Carlini and Wagner 2018), developing defense methods ranging from heuristics to provable defenses (Papernot et al. 2016b; Liu et al. 2018; Madry et al. 2018; Kolter and Wong 2018; Liu et al. 2019), as well as efficiently verifying neural networks against adversarial examples (Hein and Andriushchenko 2017; Weng et al. 2018b; 2018a; Gehr et al. 2018; Boopathy et al. 2019) and random noise in image classification, natural language processing (Ko et al. 2019), and reinforcement learning (Wang, Weng, and Daniel 2019). Different from the above work on robustness against input perturbations, this work aims to evaluate the sensitivity of DNNs to weight perturbations.…”
Section: Introduction
confidence: 99%