The C/C++11 memory model defines the semantics of concurrent memory accesses in C/C++, and in particular supports racy "atomic" accesses at a range of different consistency levels, from very weak consistency ("relaxed") to strong, sequential consistency ("SC"). Unfortunately, as we observe in this paper, the semantics of SC atomic accesses in C/C++11, as well as in all proposed strengthenings of the semantics, is flawed, in that (contrary to previously published results) both suggested compilation schemes to the Power architecture are unsound. We propose a model, called RC11 (for Repaired C11), with a better semantics for SC accesses that restores the soundness of the compilation schemes to Power, maintains the DRF-SC guarantee, and provides stronger, more useful, guarantees to SC fences. In addition, we formally prove, for the first time, the correctness of the proposed stronger compilation schemes to Power that preserve load-to-store ordering and avoid "out-of-thin-air" reads.
There has recently been great progress in proving the correctness of compilers for increasingly realistic languages with increasingly realistic runtime systems. Most work on this problem has focused on proving the correctness of a particular compiler, leaving open the question of how to verify the correctness of assembly code that is hand-optimized or linked together from the output of multiple compilers. This has led Benton and other researchers to propose more abstract, compositional notions of when a low-level program correctly realizes a high-level one. However, the state of the art in so-called "compositional compiler correctness" has only considered relatively simple high-level and low-level languages. In this paper, we propose a novel, extensional, compiler-independent notion of equivalence between high-level programs in an expressive, impure ML-like λ-calculus and low-level programs in an (only slightly) idealized assembly language. We define this equivalence by means of a biorthogonal, step-indexed, Kripke logical relation, which enables us to reason quite flexibly about assembly code that uses local state in a different manner than the high-level code it implements (e.g., self-modifying code). In contrast to prior work, we factor our relation in a symmetric, language-generic fashion, which helps to simplify and clarify the formal presentation, and we also show how to account for the presence of a garbage collector. Our approach relies on recent developments in Kripke logical relations for ML-like languages, in particular the idea of possible worlds as state transition systems.
Coinduction is one of the most basic concepts in computer science. It is therefore surprising that the commonly-known lattice-theoretic accounts of the principles underlying coinductive proofs are lacking in two key respects: they do not support compositional reasoning (i.e., breaking proofs into separate pieces that can be developed in isolation), and they do not support incremental reasoning (i.e., developing proofs interactively by starting from the goal and generalizing the coinduction hypothesis repeatedly as necessary). In this paper, we show how to support coinductive proofs that are both compositional and incremental, using a dead simple construction we call the parameterized greatest fixed point. The basic idea is to parameterize the greatest fixed point of interest over the accumulated knowledge of "the proof so far". While this idea has been proposed before, by Winskel in 1989 and by Moss in 2001, neither of the previous accounts suggests its general applicability to improving the state of the art in interactive coinductive proof. In addition to presenting the lattice-theoretic foundations of parameterized coinduction, demonstrating its utility on representative examples, and studying its composition with "up-to" techniques, we also explore its mechanization in proof assistants like Coq and Isabelle. Unlike traditional approaches to mechanizing coinduction (e.g., Coq's cofix), which employ syntactic "guardedness checking", parameterized coinduction offers a semantic account of guardedness. This leads to faster and more robust proof development, as we demonstrate using our new Coq library, Paco.
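In lattice-theoretic terms, the construction can be sketched as follows (a summary of the paper's development, with notation assumed): for a monotone function $f$ on a complete lattice, the parameterized greatest fixed point absorbs the "accumulated knowledge" $x$ into the fixed point being computed.

```latex
% Parameterized greatest fixed point of a monotone f on a complete lattice:
G_f(x) \;\triangleq\; \nu y.\; f(x \vee y)

% Key properties:
G_f(\bot) = \nu f
\qquad \text{(initialization)}

G_f(x) = f(x \vee G_f(x))
\qquad \text{(unfolding)}

y \le G_f(x) \iff y \le G_f(x \vee y)
\qquad \text{(accumulation)}
```

The accumulation rule is what enables incremental proof: to show $y \le G_f(x)$, one may add the goal $y$ itself to the parameter and continue, which plays the role of a coinduction hypothesis without any syntactic guardedness check.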
Probabilistic programs use familiar notation of programming languages to specify probabilistic models. Suppose we are interested in estimating the distribution of the return expression r of a probabilistic program P. We are interested in slicing the probabilistic program P and obtaining a simpler program SLI(P) which retains only those parts of P that are relevant to estimating r, and elides those parts of P that are not relevant to estimating r. We desire that the SLI transformation be both correct and efficient. By correct, we mean that P and SLI(P) have identical estimates on r. By efficient, we mean that estimation over SLI(P) be as fast as possible. We show that the usual notion of program slicing, which traverses control and data dependencies backward from the return expression r, is unsatisfactory for probabilistic programs, since it produces incorrect slices on some programs and sub-optimal ones on others. Our key insight is that in addition to the usual notions of control dependence and data dependence that are used to slice nonprobabilistic programs, a new kind of dependence called observe dependence arises naturally due to observe statements in probabilistic programs. We propose a new definition of SLI(P) which is both correct and efficient for probabilistic programs, by including observe dependence in addition to control and data dependences for computing slices. We prove correctness mathematically, and we demonstrate efficiency empirically. We show that by applying the SLI transformation as a pre-pass, we can improve the efficiency of probabilistic inference, not only in our own inference tool R2, but also in other systems for performing inference such as Church and Infer.NET.
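A toy example (hypothetical, not taken from R2) shows why observe dependence matters. Consider a program that flips two fair coins c1 and c2, observes `c1 or c2`, and returns c1. A naive backward slice from the return expression keeps only c1's sampling statement, since c1 has no data or control dependence on the observe; but dropping the observe changes the distribution of the result from P(c1 | c1 ∨ c2) = 2/3 back to 1/2.

```python
import random

def full_program(rng):
    # Rejection sampling: return c1 conditioned on observe(c1 or c2).
    while True:
        c1 = rng.random() < 0.5
        c2 = rng.random() < 0.5
        if c1 or c2:          # observe(c1 or c2)
            return c1

def naive_slice(rng):
    # "Slice" keeping only the data dependences of the return expression:
    # it drops c2 *and* the observe, silently changing the distribution.
    return rng.random() < 0.5

rng = random.Random(0)
n = 100_000
full = sum(full_program(rng) for _ in range(n)) / n
naive = sum(naive_slice(rng) for _ in range(n)) / n
print(full, naive)  # conditioned estimate ≈ 2/3, naively sliced ≈ 1/2
```

A correct slice must treat the observe statement (and the variables it mentions) as relevant whenever it can shift the posterior of the returned expression, which is exactly the observe dependence the paper introduces.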