Abstract—A bottleneck in the validation and evaluation of analysis and verification tools for distributed systems is the shortage of benchmark problems. Purpose-built benchmark problems are typically artificial, rare, and small, and it is difficult to guarantee challenging properties for realistic benchmarks. This paper shows how to systematically construct arbitrarily complex Petri nets with guaranteed safety properties. Key to our construction is a top-down parallel decomposition based on lightweight assumption-commitment specifications. We illustrate how a specific strategy for design choices, which may well be automated, leads to benchmarks that grow exponentially with the number of their parallel components and that are very difficult to verify. In particular, we report numbers from a systematic sequence of corresponding concrete verification attempts using today's leading verification technology.
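As a rough illustration of the kind of artifact this abstract describes, the following sketch (not taken from the paper; all place and transition names are invented for this example) encodes a tiny Petri net and exhaustively explores its reachable markings to check a safety property, here mutual exclusion of two processes sharing one lock token:

```python
from collections import deque

# Hypothetical toy net: each transition is (preset, postset), mapping
# places to token counts consumed and produced.
transitions = {
    "acquire1": ({"idle1": 1, "lock": 1}, {"crit1": 1}),
    "release1": ({"crit1": 1}, {"idle1": 1, "lock": 1}),
    "acquire2": ({"idle2": 1, "lock": 1}, {"crit2": 1}),
    "release2": ({"crit2": 1}, {"idle2": 1, "lock": 1}),
}

initial = {"idle1": 1, "idle2": 1, "lock": 1}

def enabled(marking, pre):
    # A transition is enabled if every preset place holds enough tokens.
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    # Consume preset tokens, produce postset tokens.
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

def check_safety(initial, transitions, invariant):
    """Breadth-first search over reachable markings; returns True iff
    the invariant holds in every reachable marking."""
    seen = set()
    queue = deque([initial])
    while queue:
        m = queue.popleft()
        key = frozenset((p, n) for p, n in m.items() if n)
        if key in seen:
            continue
        seen.add(key)
        if not invariant(m):
            return False
        for pre, post in transitions.values():
            if enabled(m, pre):
                queue.append(fire(m, pre, post))
    return True

# Safety property: never both processes in their critical sections.
mutex = lambda m: m.get("crit1", 0) + m.get("crit2", 0) <= 1
```

For this toy net, `check_safety(initial, transitions, mutex)` returns True. The paper's point is that such explicit exploration becomes infeasible as nets grow, since the state space scales exponentially with the number of parallel components, while the proposed construction guarantees the safety property by design.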
This paper covers the Rigorous Examination of Reactive Systems (RERS) Challenge 2019. For the first time in the history of RERS, the challenge features industrial tracks in which the benchmark programs that participants need to analyze are synthesized from real-world models. These new tracks comprise LTL, CTL, and reachability properties. In addition, we have further improved our benchmark generation infrastructure for parallel programs towards full automation. RERS 2019 is part of TOOLympics, an event that hosts several popular challenges and competitions. In this paper, we highlight the newly added industrial tracks and the changes we made in response to the discussions at, and the results of, the last RERS Challenge in Cyprus.
The 5th challenge of the Rigorous Examination of Reactive Systems series (RERS 2016) once again provided generated and tailored benchmarks suited for comparing the effectiveness of automatic software verifiers. RERS is the only software verification challenge that features large problems with linear temporal logic (LTL) properties, available in different programming languages. This paper describes the revised rules and the refined profile of the challenge, which lower the entry hurdle for new participants. We explain the challenge format with its three tracks, their benchmarks, and the related LTL and reachability properties. Special emphasis is put on the changes implemented in RERS 2016 compared to former RERS challenges. The competition comprised 18 sequential and 20 parallel benchmarks. The 20 benchmarks from the new parallel track feature LTL properties and a compact representation as labeled transition systems and Promela code.