Proceedings of the 43rd Annual ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages 2016
DOI: 10.1145/2837614.2837664

Learning invariants using decision trees and implication counterexamples

Abstract: Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier who instructs the learner through concrete program configurations, classified as positive, negative, and implications. We propose the first learning algorithms in this model with implication counterexamples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to…
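As a rough illustration of what "handling implication samples" involves, the sketch below propagates labels across implication pairs before a standard decision-tree learner is run: an implication (x, y) says that if configuration x satisfies the invariant then y must too, so a positive antecedent forces a positive consequent and a negative consequent forces a negative antecedent. This is our own illustration under simplifying assumptions, not the paper's algorithm, which resolves unforced implication endpoints during tree construction itself; all names here are hypothetical.

    # Illustrative sketch only: pre-label ICE implication endpoints by a
    # naive fixpoint, then hand the labeled points to any decision-tree
    # learner. Configurations are hashable tuples of variable values.
    def propagate_labels(positives, negatives, implications):
        pos, neg = set(positives), set(negatives)
        changed = True
        while changed:
            changed = False
            for x, y in implications:
                if x in pos and y not in pos:      # positive antecedent
                    pos.add(y); changed = True     # forces positive consequent
                if y in neg and x not in neg:      # negative consequent
                    neg.add(x); changed = True     # forces negative antecedent
        if pos & neg:
            raise ValueError("contradictory ICE sample: no consistent invariant")
        return pos, neg

    # Example: (1, 0) and (2, 0) are forced positive transitively.
    pos, neg = propagate_labels(
        positives={(0, 0)},
        negatives={(5, 1)},
        implications=[((0, 0), (1, 0)), ((1, 0), (2, 0))],
    )
    assert (2, 0) in pos and (5, 1) in neg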

Cited by 128 publications (135 citation statements)
References 36 publications
“…Algorithms performing only inductiveness checks can in fact be very sophisticated, traversing the domain of candidates in clever ways. This approach was formulated in the ICE learning framework for learning inductive invariants [Garg et al. 2014, 2016] (later extended to general Constrained Horn Clauses [Ezudheen et al. 2018]), in which algorithms present new candidates based on positive, negative, and implication examples returned by a "teacher" in response to incorrect candidate invariants. The main point is that such algorithms do not perform queries other than inductiveness, and choose the next candidate invariant based solely on the counterexamples to induction showing the previous candidates were unsuitable.…”
Section: Inference Using Rich Queries
confidence: 99%
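The teacher–learner interaction this statement describes is, at its core, a refinement loop. A minimal sketch of that loop follows; the callbacks `propose` (the learner) and `check_inductive` (the teacher, e.g. an SMT-backed verifier) are hypothetical stand-ins, not an API from the cited work.

    # Generic ICE refinement loop (illustrative). `check_inductive` returns
    # None when the candidate is inductive, otherwise a counterexample
    # tagged "pos", "neg", or "imp".
    def ice_learn(propose, check_inductive):
        pos, neg, imp = set(), set(), set()
        while True:
            candidate = propose(pos, neg, imp)   # next guess, consistent with the sample
            result = check_inductive(candidate)  # the only kind of query made
            if result is None:
                return candidate                 # an inductive invariant
            kind, example = result               # e.g. ("imp", (s, s_next))
            {"pos": pos, "neg": neg, "imp": imp}[kind].add(example)

Note how the learner never inspects the program: it chooses each candidate solely from the growing sample of positive, negative, and implication examples, exactly as the statement above emphasizes.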
“…ICE. The ICE framework [Garg et al. 2014, 2016] (later extended to general Constrained Horn Clauses [Ezudheen et al. 2018]) is a learning framework for inferring invariants from positive, negative, and implication counterexamples. We now review the framework using the original terminology and notation; later in the paper we will use a related formulation that emphasizes the choice of candidates (in §7.1).…”
Section: Invariant Inference Algorithms
confidence: 99%
“…A new class of black-box techniques based on learning has emerged in recent years to synthesize inductive invariants [Garg et al. 2014, 2016]. In this technique, there are two distinct agents, the Learner and the Teacher.…”
Section: Introduction
confidence: 99%
“…In the short time that the formalism has been in public circulation, it has already performed well in its goal of facilitating research in synthesis while providing a basis for objective comparison of different algorithms. For example, the competition has provided important insights into the relative merits of different algorithms [3,2,7], which have been exploited to help develop and evaluate new algorithms [11,15,24,26,18,4,12]. Beyond synthesizer developers, there is a growing community of users that is coalescing around the formalism.…”
Section: Introduction
confidence: 99%