Abstract: Meta-programs form a class of logic programs of major importance. In the past it has proved very difficult to provide a declarative semantics for meta-programs in languages such as Prolog. These problems have been identified as largely being caused by the fact that Prolog fails to handle the necessary representation requirements adequately. The ground representation is receiving increasing recognition as being necessary to represent meta-programs adequately. However, the expense it incurs has largely precluded…
The paper develops a self-tuning, resource-aware partial evaluation technique for Prolog programs, which derives its own control strategies, tuned for the underlying computer architecture and Prolog compiler, using a genetic algorithm approach. The algorithm is based on mutating the annotations used by offline partial evaluation. Using a set of representative sample queries, it decides upon the fitness of annotations, controlling the trade-off between code explosion, the speedup gained, and specialisation time. The user can specify the importance of each of these factors in determining the quality of the produced code, tailoring the specialisation to the particular problem at hand. We present experimental results for our implemented technique on a series of benchmarks. The results are compared against an aggressive termination-based binding-time analysis and optimised using different measures for the quality of code. We also show that our technique avoids some classical pitfalls of partial evaluation.
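The genetic-algorithm idea in the abstract above can be sketched in a few lines: a population of annotation vectors (one unfold/memo decision per call site, in the offline style) is mutated, and each candidate is scored by a user-weighted fitness balancing speedup, code growth, and specialisation time. This is a hedged illustration only; the function names, weights, and representation are assumptions for exposition, not the paper's actual implementation.

```python
import random

# Illustrative sketch (not the paper's code): each call site in the
# source program carries an offline-PE annotation, "unfold" or "memo".
ANNOTATIONS = ["unfold", "memo"]

def fitness(speedup, code_growth, spec_time, w_speed=1.0, w_size=0.5, w_time=0.2):
    """User-weighted quality measure: reward speedup, penalise code
    explosion and long specialisation times (weights are assumptions)."""
    return w_speed * speedup - w_size * code_growth - w_time * spec_time

def mutate(annotation_vector, rate=0.1):
    """Flip each call-site annotation with a small probability."""
    return [random.choice(ANNOTATIONS) if random.random() < rate else a
            for a in annotation_vector]

def evolve(evaluate, n_sites, generations=50, pop_size=20):
    """evaluate(vector) -> (speedup, code_growth, spec_time), e.g.
    measured by specialising and running representative sample queries."""
    pop = [[random.choice(ANNOTATIONS) for _ in range(n_sites)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=lambda v: fitness(*evaluate(v)), reverse=True)
        elite = scored[:pop_size // 2]          # keep the best half
        pop = elite + [mutate(v) for v in elite]  # refill by mutation
    return max(pop, key=lambda v: fitness(*evaluate(v)))
```

In a real system the `evaluate` callback is the expensive part: it runs the offline specialiser under the candidate annotations and benchmarks the residual program on the sample queries.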
“…The increase from one to two in both/3 clauses is arguably normal, as calls to member/2 have been unfolded and this predicate is defined by two clauses. Some partial evaluators, for instance sage (Gurr 1994b; Gurr 1994a), do not prevent such work duplication. This can result in arbitrarily big slowdowns, much higher than those encountered in Example 6 (see, e.g., Bowers and Gurr 1995).…”
Section: Example
“…Indeed, achieving effective self-application was one of the initial motivations for investigating offline control techniques (Jones, Sestoft and Søndergaard 1989). Self-application was first achieved in the logic programming context in (Mogensen and Bondorf 1992) for a subset of Prolog and later in (Gurr 1994b; Gurr 1994a) for full Gödel. Self-application enables a partial evaluator to generate so-called "compilers" from interpreters using the second Futamura projection and a compiler generator (cogen) using the third Futamura projection (see, e.g., Jones et al. 1993).…”
Program specialisation aims at improving the overall performance of programs by performing source-to-source transformations. A common approach within functional and logic programming, known respectively as partial evaluation and partial deduction, is to exploit partial knowledge about the input. This is achieved through a well-automated application of parts of the Burstall-Darlington unfold/fold transformation framework. The main challenge in developing such systems is to design automatic control that ensures correctness, efficiency, and termination. This survey and tutorial presents the main developments in controlling partial deduction over the past 10 years and analyses their respective merits and shortcomings. It ends with an assessment of current achievements and sketches some remaining research challenges.
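The core idea surveyed above, specialising a program by unfolding on its statically known input, can be shown on the classic power-function example. This is a toy functional illustration in Python (an assumption for exposition, not the survey's Prolog-based partial deduction machinery): when the exponent is static, unfolding removes the recursion entirely, leaving a residual program over the dynamic argument alone.

```python
# Toy illustration of partial evaluation: specialising power(x, n) for
# a statically known exponent unfolds the recursion away.

def power(x, n):
    """General program: both arguments are dynamic."""
    return 1 if n == 0 else x * power(x, n - 1)

def specialise_power(n):
    """Mini-specialiser: 'unfold' the recursion on the static input n,
    producing a residual program that only takes the dynamic input x."""
    if n == 0:
        return lambda x: 1
    inner = specialise_power(n - 1)
    return lambda x: x * inner(x)

cube = specialise_power(3)  # residual program: x * (x * (x * 1))
```

The residual `cube` contains no test on `n` and no recursion; the control challenge the survey discusses is deciding automatically, for arbitrary programs, when such unfolding is safe and when it must stop to guarantee termination.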
“…Furthermore, as demonstrated by Gallagher in [29] and by the experiments in this paper, partial evaluation can in this way sometimes completely remove the overhead of the ground representation. Performing a similar feat on a meta-interpreter using the full ground representation with explicit unification is much harder and has, to the best of our knowledge, not been accomplished yet (for some promising attempts see [34,33,9] or [56]).…”
Section: The Ground, Non-ground and Mixed Representations
Integrity constraints are useful for the specification of deductive databases, as well as for inductive and abductive logic programs. Verifying integrity constraints upon updates is a major efficiency bottleneck, and specialised methods have been developed to speed up this task. They can, however, still incur a considerable overhead. In this paper we propose a solution to this problem by using partial evaluation to pre-compile the integrity checking for certain update patterns. The idea is that much of the integrity checking can already be performed given an update pattern, without knowing the actual, concrete update. In order to achieve the pre-compilation, we write the specialised integrity checking as a meta-interpreter in logic programming. This meta-interpreter incorporates the knowledge that the integrity constraints were not violated prior to a given update. By partially evaluating this meta-interpreter for certain transaction patterns, using a partial evaluation technique presented in earlier work, we are able to automatically obtain very efficient specialised update procedures, executing faster than other integrity checking procedures proposed in the literature.
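The gain from pre-compiling integrity checking can be illustrated with a hedged Python sketch (the constraint, names, and data layout are assumptions for exposition, not the paper's meta-interpreter): because the constraint is known to have held before the update, a check specialised for an "insert" update pattern only needs to inspect the new tuple, never the whole database.

```python
# Hedged sketch of pre-compiled integrity checking for one update pattern.
MAX_SALARY = 100_000  # illustrative constraint: no salary exceeds this cap

def full_check(db):
    """Naive integrity check: rescan every tuple (what we want to avoid)."""
    return all(salary <= MAX_SALARY for _, salary in db)

def compile_insert_check():
    """Specialised update procedure for the 'insert' pattern, derived
    under the assumption that the constraint held before the update."""
    def check_insert(db, new_tuple):
        name, salary = new_tuple
        if salary <= MAX_SALARY:   # only the inserted tuple can violate
            db.append(new_tuple)
            return True
        return False               # reject the violating update
    return check_insert
```

The specialised procedure runs in constant time per insert, whereas `full_check` is linear in the database size; this is the kind of speedup the abstract reports from partially evaluating the checking meta-interpreter for a transaction pattern.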