2022
DOI: 10.1145/3498704
PRIMA: general and precise neural network certification via scalable convex hull approximations

Abstract: Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a precise and scalable verifier which can handle different activation functions, realistic network architectures, and relevant specifications remains an open and difficult challenge. In this paper, we take a major step forward in addressing this challenge and present a new verification framework, called PRIMA. PRIMA is both (i) general: it handles any non-linear activation…
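The certification setting the abstract describes can be illustrated with a minimal sketch. This is not PRIMA's algorithm (which builds multi-neuron convex hull approximations); it is plain interval bound propagation, the loosest member of the incomplete-verifier family that such relaxations tighten. The network weights and input box below are hypothetical, chosen only for illustration:

```python
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Propagate an interval box [lo, hi] through x -> Wx + b.

    Positive weights map lower bounds to lower bounds; negative
    weights swap them.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    new_lo = W_pos @ lo + W_neg @ hi + b
    new_hi = W_pos @ hi + W_neg @ lo + b
    return new_lo, new_hi

def relu_bounds(lo, hi):
    """ReLU is monotone, so interval bounds pass through elementwise."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

# Hypothetical 2-2-1 ReLU network (illustrative only).
W1 = np.array([[1.0, -1.0], [0.5, 2.0]]); b1 = np.array([0.0, -1.0])
W2 = np.array([[1.0, 1.0]]);              b2 = np.array([0.0])

# Input perturbation box: each coordinate in [-0.1, 0.1].
lo, hi = np.array([-0.1, -0.1]), np.array([0.1, 0.1])
lo, hi = affine_bounds(lo, hi, W1, b1)
lo, hi = relu_bounds(lo, hi)
lo, hi = affine_bounds(lo, hi, W2, b2)
print(lo, hi)  # a box guaranteed to contain every reachable output
```

If the output box already satisfies the specification (e.g. the correct class's score stays above the others'), robustness is certified; if not, the result is inconclusive, which is what makes such verifiers incomplete. Tighter relaxations, such as PRIMA's convex hulls over groups of neurons, shrink this box and certify more properties.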

Cited by 50 publications (16 citation statements). References 34 publications.
“…Comparison with incomplete methods. We extend the comparison to more recent works: incomplete methods that used solvers (Müller et al 2022;Wang et al 2021a;Singh et al 2019b,a;Tjandraatmadja et al 2020). We emphasize that these works are incomplete: they do not provide proof that an image correctly predicted has no adversarial attack (see Appendix A for definitions).…”
Section: F3 Comparisons With Other Methods
confidence: 99%
“…These relaxations are relatively inexpensive yet very effective when adapted for complete verification via branch-and-bound, and are hence at the core of the α-β-CROWN (Xu et al, 2021;Wang et al, 2021) and OVAL frameworks (Bunel et al, 2020a;De Palma et al, 2021c). A number of works have recently focused on devising tighter neural network relaxations (Singh et al, 2019a;Anderson et al, 2020;Tjandraatmadja et al, 2020;Müller et al, 2022). These have been integrated into recent branch-and-bound verifiers (De Palma et al, 2021a;Ferrari et al, 2022) and yield strong results for harder verification properties on medium-sized networks.…”
Section: Related Work
confidence: 99%
“…While the robustness of neural models has received considerable attention [7][8][9][10][11][12][13][14][15][16][17][18][19][20][21], the challenge of obtaining robustness guarantees for ensembles of tree-based models has only been investigated recently [4,22,23]. However, these initial works only consider numerical features and are based on worst-case approximations, which do not scale well to the difficult ℓp-norm setting.…”
Section: Introduction
confidence: 99%