2016
DOI: 10.1007/978-3-319-41528-4_24

RV-Match: Practical Semantics-Based Program Analysis

Abstract: We present RV-Match, a tool for checking C programs for undefined behavior and other common programmer mistakes. Our tool is extracted from the most complete formal semantics of the C11 language. Previous versions of this tool were used primarily for testing the correctness of the semantics, but we have improved it into a tool for doing practical analysis of real C programs. It beats many similar tools in its ability to catch a broad range of undesirable behaviors. We demonstrate this with comparisons based on…

Cited by 15 publications (13 citation statements)
References 9 publications
“…This takes 0.15s for 1000 iterations, 1.27s for 10 000, and 12.7s for 100 000, apparently scaling linearly. Guth et al [2016] report a compile-and-run time of 13s (on a 2.4 GHz Intel Xeon) for 10 000 iterations. int ret = 0; for (int i = 0; i <= [...]; i++) { ret++; } More importantly, the time needed to run most of the above test suites is quite reasonable: 22.5s to run our pointer provenance tests and many others (189 in total); 3 minutes to run the GCC Torture tests; and 25 minutes for those Csmith tests.…”
Section: Experimental Validation
confidence: 99%
“…We discuss much of this in detail in [Chisnall et al 2016, §10, pp. 66–83], and [Krebbers 2015, Ch. 10] gives a useful survey. Work to formalise aspects of the standards includes Batty et al [2011]; Cook and Subramanian [1994]; Ellison and Roşu [2012]; Gurevich and Huggins [1993]; Guth et al [2016]; Hathhorn et al [2015]; Krebbers and Wiedijk [2012]; Krebbers [2013, 2014, 2015]; Krebbers and Wiedijk [2013, 2015]; Norrish [1998, 1999]; Papaspyrou [1998]. Memory object models include those for CompCert by Leroy et al [2012]; Leroy and Blazy [2008] and Besson et al [2014, 2015], for CompCertTSO by Ševčík et al [2013], the model used for seL4 verification by Tuch et al [2007], and the model used for VCC by Cohen et al [2009].…”
Section: Related Work
confidence: 99%
“…We build here on Cerberus [28,29,31], a web-interface tool that computes the allowed behaviours (interactively or exhaustively) for moderate-sized tests in a substantial fragment of sequential C, incorporating various memory object models (an early version supported Nienhuis's operational model for C11 concurrency [33], but that is no longer integrated). KCC and RV-Match [19,21,22] provide a command-line semantics tool for a substantial fragment of C, again without concurrency. Krebbers gives a Coq semantics for a somewhat smaller fragment [24].…”
Section: The Thread-local Sequential Semantics
confidence: 99%