"Some half dozen persons have written technically on combinatory logic, and most of these, including ourselves, have published something erroneous. Since some of our fellow sinners are among the most careful and competent logicians on the contemporary scene, we regard this as evidence that the subject is refractory. Thus fullness of exposition is necessory for accurary; and excessive condensation would be false economy here, even more than it is ordinarily."
We present a static analysis for determining the execution costs of lazily evaluated functional languages, such as Haskell. The time and space behaviour of lazy functional languages can be hard to predict, creating a significant barrier to their broader acceptance. This paper applies a type-based analysis employing amortisation and cost effects to statically determine upper bounds on evaluation costs. While amortisation performs well with finite recursive data, we significantly improve the precision of our analysis for co-recursive programs (i.e. those dealing with potentially infinite data structures) by tracking self-references. Combining these two approaches yields a fully automatic static analysis for both recursive and co-recursive definitions. The analysis is formally proven correct against an operational semantics featuring an exchangeable parametric cost model: an arbitrary measure can be assigned to each syntactic construct, allowing us to bound, for example, evaluation steps, applications, or allocations. Moreover, automatic inference relies only on first-order unification and standard linear programming. Our publicly available implementation demonstrates the practicality of our technique on editable, non-trivial examples.
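To make the co-recursive case concrete, here is a minimal Haskell sketch (our illustration, not an example drawn from the paper) of a self-referencing stream definition of the kind such an analysis must bound; fibs refers to itself twice, which is exactly the self-reference information the analysis tracks.

    -- A potentially infinite stream defined by co-recursion.
    -- 'fibs' refers to itself twice, so a cost analysis must account
    -- for these self-references instead of unfolding indefinitely.
    fibs :: [Integer]
    fibs = 0 : 1 : zipWith (+) fibs (tail fibs)

    -- A consumer forcing only a finite prefix; lazy evaluation
    -- ensures only that prefix of 'fibs' is ever computed.
    main :: IO ()
    main = print (take 10 fibs)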
We describe a new automatic static analysis for determining upper-bound functions on the use of quantitative resources for strict, higher-order, polymorphic, recursive programs dealing with possibly-aliased data. Our analysis is a variant of Tarjan's manual amortised cost analysis technique. We use a type-based approach, exploiting linearity to allow inference, and place a new emphasis on the number of references to a data object. The bounds we infer depend on the sizes of the various inputs to a program; they thus expose the impact of specific inputs on the overall cost behaviour. The key novel aspect of our work is that it deals directly with polymorphic higher-order functions without requiring source-level transformations that could alter resource usage. We thus obtain safe and accurate compile-time bounds. Our work is generic in that it deals with a variety of quantitative resources. We illustrate our approach with reference to dynamic memory allocations/deallocations, stack usage, and worst-case execution time, using metrics taken from a real implementation on a simple micro-controller platform used in safety-critical automotive applications.
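As an illustration of size-dependent bounds (our own example, not one taken from the paper), consider list append: under a cost metric charging one unit per allocated cons cell, its heap cost is proportional to the length of the first argument alone, and an amortised type system can express this by assigning potential to that argument.

    -- 'append' allocates one fresh cons cell per element of xs and
    -- shares ys unchanged, so under a one-unit-per-cons metric its
    -- heap cost is exactly the length of xs, independent of ys.
    append :: [a] -> [a] -> [a]
    append []       ys = ys
    append (x : xs) ys = x : append xs ys

    main :: IO ()
    main = print (append [1, 2, 3 :: Int] [4, 5])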
This paper presents a new programming methodology for introducing and tuning parallelism in Erlang programs, using source-level code refactoring from sequential source programs to parallel programs written using our skeleton library, Skel. High-level cost models allow us to predict with reasonable accuracy the parallel performance of the refactored program, enabling programmers to make informed decisions about which refactorings to apply. Using our approach, we demonstrate easily obtainable, significant and scalable speedups of up to 21 on a 24-core machine over the sequential code.
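As a rough sketch of what such a high-level cost model looks like (the formula and parameter names are our illustration, not Skel's actual model), the runtime of a task-farm skeleton distributing m independent tasks of average sequential cost t over n workers, with per-task coordination overhead c, can be approximated by

    T_{farm}(n) \approx \lceil m / n \rceil \, t + m \, c

The predicted speedup T_{seq} / T_{farm}(n) then tells the programmer, before any refactoring is applied, whether introducing the farm is likely to be worthwhile.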
This paper presents a novel type-and-effect analysis for predicting upper bounds on memory allocation costs for co-recursive definitions in a simple lazily-evaluated functional language. We show the soundness of this system against an instrumented variant of Launchbury's semantics for lazy evaluation, which serves as a formal cost model. Our soundness proof requires an intermediate semantics employing indirections; the proof of correspondence between these semantics is thus a crucial part of this work. The analysis has been implemented as an automatic inference system. We demonstrate its effectiveness using several example programs that previously could not be automatically analysed.
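To see why allocation costs of co-recursive definitions are subtle (our illustration, not an example from the paper), compare two infinite lists under Launchbury-style lazy evaluation with sharing: a cyclic definition allocates a constant number of heap cells no matter how much of it is consumed, whereas a superficially similar non-cyclic one allocates fresh cells for every element demanded.

    -- Cyclic self-reference: with sharing, the whole list is one
    -- cons cell pointing back to itself, so consuming any prefix
    -- costs O(1) heap allocations.
    ones :: [Int]
    ones = 1 : ones

    -- Non-cyclic co-recursion: every demanded element forces a new
    -- application of (+1), so consuming a prefix of length n costs
    -- O(n) heap allocations.
    nats :: [Int]
    nats = 0 : map (+ 1) nats

    main :: IO ()
    main = print (take 5 ones, take 5 nats)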