2017 32nd IEEE/ACM International Conference on Automated Software Engineering (ASE)
DOI: 10.1109/ase.2017.8115631
Abstract: An approach to CEGAR-based model checking which has proved to be successful on large models employs Craig interpolation to efficiently construct parsimonious abstractions. Following this design, we introduce two new applications of Craig interpolation, the universal safety interpolant and the existential error interpolant, which can systematically reduce the program state space that must be explored for safety verification. Whenever the universal safety interpolant is implied by the current path, all paths emanating from that loc…
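The truncated abstract describes pruning the exploration whenever the current path already implies a location's universal safety interpolant. Below is a minimal sketch of that pruning idea in a worklist-based exploration, with formulas crudely modeled as sets of predicates; the helper names (implies, explore, s_interp) are illustrative assumptions, not the paper's implementation.

```python
from collections import deque

def implies(path_cond, interpolant):
    # Crude entailment check: a conjunction of predicates implies another
    # conjunction if it contains every conjunct of the latter. A real tool
    # would ask an SMT solver instead.
    return interpolant is not None and interpolant <= path_cond

def explore(cfg, init_loc, error_locs, s_interp):
    """cfg maps a location to a list of (guard_predicates, successor_loc).

    s_interp maps a location to its universal safety interpolant (a frozenset
    of predicates) or None if no interpolant has been computed for it yet.
    """
    worklist = deque([(init_loc, frozenset())])
    while worklist:
        loc, path_cond = worklist.popleft()
        if loc in error_locs:
            return "potential counterexample", path_cond
        if implies(path_cond, s_interp.get(loc)):
            continue  # pruned: every path from loc under path_cond is safe
        for guard, succ in cfg.get(loc, []):
            worklist.append((succ, path_cond | guard))
    return "safe", None
```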

Cited by 8 publications (2 citation statements); references 34 publications (43 reference statements).
“…If Π is not spurious, we conclude that the program is unsafe. Otherwise, by update S-Interp, update E-Interp, and update R-Interp [5], the S-Interp, E-Interp, and R-Interp of locations involved in Π are updated, respectively. Subsequently, we reversely track the current path for other possibilities and treat a new current state s : (l, c, p) in the same way until the program is reported as unsafe or there are no more states to be explored.…”
Section: Verification Approach (mentioning, confidence: 99%)
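The quoted step reads as a refine-or-backtrack routine. The sketch below is a hedged rendering of it, assuming states are tuples (loc, ctx, path_cond), interpolants are modeled as sets of conjuncts, and is_spurious, interpolate, and has_open_successor are hypothetical helpers; none of these names come from the cited work [5].

```python
def conjoin(old, new):
    # Interpolants modeled as frozensets of conjuncts; None stands for "true".
    return new if old is None else old | new

def refine_or_backtrack(pi, is_spurious, interpolate, s_interp, e_interp,
                        r_interp, has_open_successor):
    """pi: the states s : (loc, ctx, path_cond) along the current path."""
    if not is_spurious(pi):
        return "unsafe", None                 # the path is a real counterexample
    # update S-Interp, E-Interp, and R-Interp of every location on the path
    for (loc, _ctx, _pc), (s_new, e_new, r_new) in zip(pi, interpolate(pi)):
        s_interp[loc] = conjoin(s_interp.get(loc), s_new)
        e_interp[loc] = conjoin(e_interp.get(loc), e_new)
        r_interp[loc] = conjoin(r_interp.get(loc), r_new)
    # reversely track the current path: resume from the deepest state that
    # still has an unexplored successor
    for state in reversed(pi):
        if has_open_successor(state):
            return "continue", state
    return "no more states to explore", None
```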
“…In order to learn both the set of code feature extractors and a model of configuration correctness, our system uses a refinement loop as shown in Fig. 7, which is inspired by counterexample-guided abstraction refinement (CEGAR) (Clarke et al. 2000). The CEGAR loop has been used for various model checking tasks (Ball et al. 2011; Beyer and Löwe 2013; Tian et al. 2017). A similar loop is also widely used in program synthesis (Jha and Seshia 2014; Solar-Lezama 2008).…”
Section: Learning (mentioning, confidence: 99%)
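For reference, a generic CEGAR-style abstract/check/refine skeleton of the kind this citing work adapts for learning; the function names and the iteration budget are illustrative assumptions, not taken from any of the cited papers.

```python
def cegar_loop(system, abstraction, check, find_counterexample, is_spurious,
               refine, max_iters=100):
    # Abstract / check / refine until the property is decided or the budget
    # runs out; all callables are supplied by the caller.
    for _ in range(max_iters):
        if check(system, abstraction):
            return "property holds", abstraction
        cex = find_counterexample(system, abstraction)
        if not is_spurious(system, cex):
            return "property violated", cex
        abstraction = refine(abstraction, cex)  # rule out the spurious cex
    return "inconclusive", abstraction
```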