Proceedings of the 41st ACM SIGPLAN Conference on Programming Language Design and Implementation 2020
DOI: 10.1145/3385412.3385986

Learning nonlinear loop invariants with gated continuous logic networks

Abstract: Verifying real-world programs often requires inferring loop invariants with nonlinear constraints. This is especially true in programs that perform many numerical operations, such as control systems for avionics or industrial plants. Recently, data-driven methods for loop invariant inference have shown promise, especially on linear loop invariants. However, applying data-driven inference to nonlinear loop invariants is challenging due to the large numbers of and large magnitudes of high-order terms, the potent…
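The paper's Gated Continuous Logic Network (G-CLN) architecture is not reproduced here. As a rough illustration of the continuous-logic idea behind CLN-style invariant learning, the sketch below maps atomic predicates over loop variables to differentiable truth values, turns conjunction into a t-norm, and fits candidate-invariant coefficients to sampled traces by gradient descent. The activation shapes, the quadratic template, and all names are illustrative assumptions, not the paper's exact construction.

```python
# Illustrative sketch of CLN-style smoothing of an SMT-like formula:
# predicates become values in [0, 1], connectives become t-norms, and
# invariant coefficients are fit to loop samples by gradient descent.
import torch

def eq_truth(t, sigma=0.1):
    # continuous truth value of "t == 0": near 1 when t is near 0
    return torch.exp(-(t ** 2) / (2 * sigma ** 2))

def ge_truth(t, beta=10.0):
    # continuous truth value of "t >= 0": a smooth step
    return torch.sigmoid(beta * t)

def t_and(a, b):
    # product t-norm as a differentiable conjunction
    return a * b

# learnable coefficients for a quadratic template over loop variables x, y
w = torch.randn(4, requires_grad=True)

def invariant_truth(x, y):
    # candidate invariant: (w0 + w1*x + w2*y + w3*x*y == 0) AND (x >= 0)
    poly = w[0] + w[1] * x + w[2] * y + w[3] * x * y
    return t_and(eq_truth(poly), ge_truth(x))

# samples would come from instrumented loop executions; random data stands in
x, y = torch.rand(100), torch.rand(100)
opt = torch.optim.Adam([w], lr=0.01)
for _ in range(200):
    opt.zero_grad()
    loss = (1.0 - invariant_truth(x, y)).mean()  # push truth values toward 1
    loss.backward()
    opt.step()
```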


Cited by 28 publications (11 citation statements)
References 34 publications (35 reference statements)

“…We manually derived affine invariants for the input PTSs. Alternatively, invariant generation, which is an orthogonal problem to ours, can be automated by approaches such as [9,26,44,50]. Similarly, we proved almost-sure termination by manually constructing ranking supermartingales [6,11].…”
Section: Results
Mentioning confidence: 75%
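The statement above relies on ranking supermartingales to certify almost-sure termination. As a point of reference, one standard formulation of the ranking-supermartingale condition is the following; the notation is generic and not taken from the cited work.

```latex
% A function \eta from program states to non-negative reals is a ranking
% supermartingale if, from every non-terminal state s, its expected value
% after one transition decreases by at least a fixed \epsilon > 0:
\eta(s) \ge 0, \qquad
\mathbb{E}\!\left[\eta(s') \mid s\right] \le \eta(s) - \epsilon
\quad \text{for all non-terminal states } s.
```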
“…Then, we re-train the classifier subject to the constraint that 𝑅1 ≤ 𝑅2. To enforce this constraint, we smooth the discrete classifier using Continuous Logic Networks (CLN) [71,95], and then use projected gradient descent with the constraint to train the classifier. Gradient-guided optimization ensures that this counterexample (𝑥, 𝑥′) will no longer violate the property and tries to achieve the highest accuracy subject to that constraint.…”
Section: Monotonicity For Cryptojacking Classifier
Mentioning confidence: 99%
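A minimal sketch of the constrained training loop described above, assuming, purely for illustration, that 𝑅1 and 𝑅2 are two learnable thresholds of a smoothed classifier and the property requires 𝑅1 ≤ 𝑅2; the cited work's actual classifier and constraint set are more involved.

```python
# Projected gradient descent: take a gradient step on the smoothed loss,
# then project the parameters back onto the constraint set {R1 <= R2}.
import torch

R = torch.tensor([0.5, 0.3], requires_grad=True)  # [R1, R2], initially violating R1 <= R2

def smoothed_loss(R, data, labels):
    # stand-in for a CLN-smoothed classification loss over the thresholds
    scores = torch.sigmoid(data - R[0]) * torch.sigmoid(R[1] - data)
    return torch.nn.functional.binary_cross_entropy(scores, labels)

def project(R):
    # Euclidean projection onto {R1 <= R2}: if violated, move both to the midpoint
    with torch.no_grad():
        if R[0] > R[1]:
            mid = (R[0] + R[1]) / 2
            R[0] = mid
            R[1] = mid

data = torch.rand(64)
labels = (data > 0.4).float()
opt = torch.optim.SGD([R], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    smoothed_loss(R, data, labels).backward()
    opt.step()
    project(R)          # projection step makes this PGD rather than plain SGD
assert R[0] <= R[1]     # the trained thresholds respect the property
```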
“…Otherwise, we construct a counterexample according to solutions for the boolean variables. For the trainer, we use PyTorch to implement the smoothed classifier as Continuous Logic Networks [71,95]. Then, we use quadratic programming to implement projected gradient descent, where we minimize the square of ℓ2 norm between the updated weights and the projected weights subject to a set of training constraints.…”
Section: Training Algorithm Implementation
Mentioning confidence: 99%
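The projection step described in this statement amounts to a small quadratic program. The sketch below assumes, for illustration only, that the training constraints are linear in the weights (Aw ≤ b) and uses cvxpy as a stand-in for whichever QP solver the cited work actually employs.

```python
# After a gradient update produces w_updated, find the closest weights
# (in squared l2 distance) that satisfy the linear training constraints.
import cvxpy as cp
import numpy as np

def project_weights(w_updated, A, b):
    w = cp.Variable(w_updated.shape[0])
    objective = cp.Minimize(cp.sum_squares(w - w_updated))  # squared l2 distance
    problem = cp.Problem(objective, [A @ w <= b])
    problem.solve()
    return w.value

# toy example: project a 3-dim weight vector onto {w : w0 + w1 <= 1, w2 >= 0}
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, -1.0]])
b = np.array([1.0, 0.0])
w_updated = np.array([0.9, 0.8, -0.2])
print(project_weights(w_updated, A, b))
```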