2016
DOI: 10.1145/2914770.2837664

Learning invariants using decision trees and implication counterexamples

Abstract: Inductive invariants can be robustly synthesized using a learning model where the teacher is a program verifier who instructs the learner through concrete program configurations, classified as positive, negative, and implications. We propose the first learning algorithms in this model with implication counterexamples that are based on machine learning techniques. In particular, we extend classical decision-tree learning algorithms in machine learning to handle implication samples, building new scalable ways to…
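The abstract describes the ICE learning model: a verifier acting as teacher answers each candidate invariant with positive configurations (which the invariant must include), negative configurations (which it must exclude), or implication pairs (if the first configuration is included, the second must be too). As a purely illustrative sketch of that loop — the names propose_candidate, check_candidate, and the result encoding are hypothetical placeholders, not the paper's tool or API — it might look like this in Python:

# Hypothetical sketch of an ICE (implication-counterexample) learning loop.
# propose_candidate and check_candidate are illustrative callbacks: the former
# stands for the learner, the latter for the teacher (a program verifier).
def ice_learn(propose_candidate, check_candidate, max_rounds=100):
    positives, negatives, implications = set(), set(), set()
    for _ in range(max_rounds):
        candidate = propose_candidate(positives, negatives, implications)
        result = check_candidate(candidate)
        if result is None:                 # candidate is an inductive invariant: done
            return candidate
        kind, payload = result
        if kind == "positive":             # reachable state the invariant must include
            positives.add(payload)
        elif kind == "negative":           # bad state the invariant must exclude
            negatives.add(payload)
        else:                              # implication pair (s, s'): if s is in, s' must be too
            implications.add(payload)
    return None

The key difference from classical classification is the third sample kind: an implication (s, s') constrains the labels of two configurations jointly, which is what the paper's decision-tree extension has to respect.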

Cited by 60 publications (51 citation statements) · References 58 publications
“…Information gain is itself judged by a measure called entropy, which is used to measure the impurity of a set of training objects. For a collection of data S, the entropy formula is given in Equation 1; it was formulated by Claude Shannon and is also used by [28,35-38]. Entropy helps the decision tree determine how informative a node is.…”
Section: Algorithm 2: ID3 Working Algorithm of Decision Tree
confidence: 99%
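The excerpt above refers to Shannon entropy, Entropy(S) = -Σ_i p_i log2(p_i), where p_i is the fraction of class i in S. A minimal Python illustration (the label encoding is an assumption made for the example):

import math
from collections import Counter

def entropy(labels):
    # Shannon entropy of a multiset of class labels:
    # Entropy(S) = -sum_i p_i * log2(p_i), with p_i the fraction of class i.
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

print(entropy(["pos", "pos", "neg", "neg"]))  # 1.0: maximally impure
print(entropy(["pos", "pos", "pos", "pos"]))  # -0.0: a pure node has no impurity

A pure node (entropy 0) needs no further splitting, while a maximally mixed node (entropy 1 for two classes) is the least informative, which is how entropy guides the choice of split.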
“…We evaluated FreqHorn-2 on various safe and buggy programs taken from SVCOMP and the literature (e.g., [9,15]). Since most of the benchmarks proposed by [9] appeared to be solvable during the bootstrapping of FreqHorn-2 (more details in Sect.
Section: Discussion
confidence: 99%
“…Recently, k-induction has benefited from lemmas obtained from PDR [21]. A promising idea of exploiting data from traces [12,15] while creating and manipulating candidate invariants could also be used in our syntax-guided approach: at the least, we could add more constants to the grammar. However, we are currently unaware of a strategy for finding meaningful constants and avoiding over-population of the grammar with too many constants.…”
Section: Related Work
confidence: 99%
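The idea mentioned in the excerpt — seeding a syntax-guided grammar with constants harvested from execution traces — can be sketched in a few lines. The trace and grammar representations below are assumptions made for the example, not the data structures of the cited tools:

# Hypothetical sketch: collect integer constants observed in execution traces
# and add them to the terminal set of a candidate-invariant grammar.
def constants_from_traces(traces):
    # A trace is a list of states; a state maps variable names to values.
    seen = set()
    for trace in traces:
        for state in trace:
            seen.update(v for v in state.values() if isinstance(v, int))
    return seen

traces = [[{"x": 0, "y": 5}, {"x": 1, "y": 5}], [{"x": 0, "y": 10}]]
grammar_constants = {0, 1}                       # constants already in the grammar
grammar_constants |= constants_from_traces(traces)
print(sorted(grammar_constants))                 # [0, 1, 5, 10]

The concern raised in the excerpt is visible even here: every observed value ends up in the grammar, so without a strategy for picking meaningful constants the grammar is quickly over-populated.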
“…During the construction of decision trees, the predicates for the inner nodes are chosen based on a supplied metric, which heuristically attempts to select predicates that lead to small trees. Entropy-based information gain is the most prevalent metric for constructing decision trees, in machine learning [40,46] as well as in formal methods [3,9,27,42]. Algorithm 2 presents a split procedure utilizing information gain, supplemented with a stand-in metric proposed in [11].…”
Section: Splitting Criterion for Small Decision Trees with Classifiers
confidence: 99%
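For concreteness, here is a compact sketch of the kind of information-gain split the excerpt describes. The sample and predicate representations are assumptions made for the example; this is not the cited Algorithm 2 itself:

import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    return -sum((c / len(labels)) * math.log2(c / len(labels)) for c in counts.values())

def information_gain(samples, labels, predicate):
    # Gain = Entropy(S) minus the size-weighted entropy of the two branches.
    left = [l for s, l in zip(samples, labels) if predicate(s)]
    right = [l for s, l in zip(samples, labels) if not predicate(s)]
    weighted = sum(len(part) / len(labels) * entropy(part)
                   for part in (left, right) if part)
    return entropy(labels) - weighted

def best_split(samples, labels, predicates):
    # Pick the inner-node predicate with maximal information gain.
    return max(predicates, key=lambda p: information_gain(samples, labels, p))

samples = [0, 1, 5, 6]
labels = ["neg", "neg", "pos", "pos"]
predicates = [lambda s: s <= 1, lambda s: s <= 5]
chosen = best_split(samples, labels, predicates)
print(chosen(1), chosen(5))   # True False: the chosen threshold separates the classes

Greedy selection like this tends to produce small trees, but it has to be adapted when implication samples leave endpoints unlabeled, which is exactly the situation the cited paper addresses.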