We propose trace abstraction modulo probability, a proof technique for verifying high-probability accuracy guarantees of probabilistic programs. Our proofs overapproximate the set of program traces using failure automata, finite-state automata that upper-bound the probability of failing to satisfy a target specification. We automate proof construction by reducing probabilistic reasoning to logical reasoning: we use program-synthesis methods to select axioms for sampling instructions, and then apply Craig interpolation to prove that traces fail the target specification with only a small probability. Our method handles programs with unknown inputs, parameterized distributions, infinite state spaces, and parameterized specifications. We evaluate our technique on a range of randomized algorithms drawn from the differential privacy literature and beyond. To our knowledge, our approach is the first to automatically establish accuracy properties of these algorithms.

Existing automated techniques share common weaknesses: they are mostly restricted to closed programs with fixed inputs and finite state spaces, and support for properties with symbolic parameters remains limited. In this paper, we start from established automated verification techniques for non-probabilistic programs and extend them to the probabilistic setting. Our logic-based approach yields several benefits. By reasoning symbolically instead of numerically, we can (i) directly establish properties for all inputs rather than requiring fixed inputs, (ii) handle programs that sample from distributions with unknown parameters, possibly over infinite ranges, and (iii) prove parametric accuracy properties, making it possible to automatically establish tradeoffs between accuracy and failure probabilities and to capture the dependence on other input parameters.
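To make the flavor of a high-probability accuracy guarantee concrete, here is a minimal Python sketch (not the paper's automaton-based method): a program draws k independent Laplace noise samples, the "failure" event is any sample exceeding a threshold t, and the failure probability is upper-bounded by a union bound, k·exp(-t/b). The names `accuracy_failure_bound`, k, b, and t are illustrative assumptions, not artifacts from the paper.

```python
import math
import numpy as np

def accuracy_failure_bound(k, b, t):
    """Union bound on the probability that any of k independent
    Laplace(0, b) samples has magnitude exceeding t.
    Uses the standard tail bound P(|Lap(0, b)| > t) = exp(-t / b)."""
    return min(1.0, k * math.exp(-t / b))

# Illustrative parameters (assumed, not from the paper).
k, b, t = 5, 1.0, 4.0
bound = accuracy_failure_bound(k, b, t)

# Empirically check the bound by simulating the sampling program.
rng = np.random.default_rng(0)
noise = rng.laplace(0.0, b, size=(20000, k))
empirical = float(np.mean(np.any(np.abs(noise) > t, axis=1)))
```

A symbolic proof in the paper's style would establish the analogue of `bound` for all values of the parameters at once, rather than for one fixed instantiation as this numeric check does.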
Inspired by the proliferation of data-analysis tasks, recent research in program synthesis has had a strong focus on enabling users to specify data-analysis programs through intuitive specifications, like examples and natural language. However, with the ever-increasing threat to privacy through data analysis, we believe it is imperative to reimagine program synthesis technology in the presence of formal privacy constraints. In this paper, we study the problem of automatically synthesizing randomized, differentially private programs, where the user can provide the synthesizer with a constraint on the privacy of the desired algorithm. We base our technique on a linear dependent type system that can track the resources consumed by a program, and hence its privacy cost. We develop a novel type-directed synthesis algorithm that constructs randomized differentially private programs. We apply our technique to the problems of synthesizing database-like queries as well as recursive differential privacy mechanisms from the literature. CCS Concepts: • Security and privacy → Privacy-preserving protocols; • Theory of computation → Type theory; Theory of database privacy and security; • Software and its engineering → Programming by example.
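The idea of treating privacy cost as a linear resource can be sketched as follows. This is a minimal illustration, not the paper's type system: a `PrivacyBudget` object plays the role of the linear resource, and each call to the standard Laplace mechanism consumes part of it. The class and function names are assumptions introduced here for exposition.

```python
import numpy as np

def laplace_mechanism(value, sensitivity, epsilon, rng):
    """Release `value` with Laplace noise of scale sensitivity/epsilon;
    this is the standard epsilon-differentially-private Laplace mechanism."""
    return value + rng.laplace(loc=0.0, scale=sensitivity / epsilon)

class PrivacyBudget:
    """Tracks cumulative privacy cost, loosely analogous to the linear
    resource a dependent type system would track statically."""
    def __init__(self, total):
        self.total = total
        self.spent = 0.0

    def spend(self, eps):
        # Refuse releases that would exceed the total budget.
        if self.spent + eps > self.total:
            raise RuntimeError("privacy budget exhausted")
        self.spent += eps

# Usage: two releases within a total budget of 1.0.
rng = np.random.default_rng(0)
budget = PrivacyBudget(total=1.0)
budget.spend(0.5)
noisy_count = laplace_mechanism(42.0, sensitivity=1.0, epsilon=0.5, rng=rng)
```

The key difference is that the type-directed synthesizer enforces this accounting statically during program construction, whereas the sketch above only checks it at run time.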