Most studies of optimization techniques for higher level languages have focused on improving execution time of generated programs, often at the expense of increased storage. When storage optimization has been addressed, it is usually in conjunction with time optimization, such as in instruction-reducing code transformations. In the Bliss Compiler (WJWHG75), a storage-optimizing compiler, transformations that reduce register temporary storage are also performed, but automatic overlay of program variables is not addressed. The rising popularity of mini-computers and micro-processors suggests that the time has come to examine the problem of automatic storage optimization in its totality. Because lack of space has always been a problem in the small systems environment, the proliferation of small machines implies the growing importance of the problem. Although the decreasing cost of memory may mitigate this trend, a variant of Murphy's Law ensures that program size will always increase faster than the available storage. In other words, programmers always write programs that don't fit, and, as time goes on, more of them will be doing it.
The Experimental Compiling System (ECS) described here represents a new compiler construction methodology that uses a compiler base which can be augmented to create a compiler for any one of a wide class of source languages. The resulting compiler permits the user to select code quality ranging from highly optimized to interpretive. The investigation is concentrating on easy expression and efficient implementation of language semantics; syntax analysis is ignored. The other unique feature of the schema is that all operations are references to procedures which implicitly define

Copyright 1980 by International Business Machines Corporation.
Retrospective

This paper, based on the author's 1979 thesis of the same title, addresses the problem of storage optimization. As the paper points out, optimizing compilers focus on improving execution times while ignoring, or even worsening, the problem of storage use. The primary motivation was to automatically organize data so as to "optimize" the use of active storage during execution. Designers of optimizing compilers for procedural languages assumed that compilers would optimize execution times while users, language protocols, and run-time systems determined the storage layouts. (This view of the compiler's role in forming the execution stream and organizing storage persists today.)

The Fabri paper proposes a storage-optimizing compiler and develops the algorithms and techniques needed to accomplish storage optimization. (Her thesis has the details.) Some of the algorithms build on the state-of-the-art program optimization algorithms of the time, but many were new or surprising variations of what was known. For example, the paper develops an extended graph coloring algorithm to perform automatic storage overlays and a framework for storage-optimizing code generation. A renaming transformation is also developed. Empirical results validate the theory.

This is a wonderful paper, beautifully written, clear and insightful. Its nine easy-to-read pages provide a nice snapshot of the state of compiler design in the late 1970s and a bold thesis for a new attack on an old problem being ignored by the compiler community. Janet Fabri's untimely death in an auto accident was a great loss in many ways. The field lost the founder of an interesting new research path in optimizing compilers.

It is now 2002, and storage-optimizing compiler techniques are needed more than ever. The problem isn't quite the same as it was in 1979.
The motivation now is less about making good use of active storage than about overcoming the memory wall, especially in the context of concurrency, caches, and distribution. Automatic Storage Optimization is worth revisiting.
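To make the graph-coloring idea mentioned in the retrospective concrete, here is a minimal sketch, not Fabri's actual extended algorithm: build an interference graph over variable live ranges and greedily color it, so that variables assigned the same color can be overlaid in one storage slot. The function name, interval representation, and example data are all invented for illustration.

```python
# Hedged sketch of storage overlay via interference-graph coloring.
# Two variables interfere when their live intervals overlap; variables
# that receive the same color (slot) may share storage.

def overlay_variables(live_ranges):
    """live_ranges: dict mapping variable name -> (start, end) live interval.
    Returns a dict mapping variable name -> storage slot number."""
    names = list(live_ranges)

    # Build the interference graph: an edge wherever intervals overlap.
    interferes = {v: set() for v in names}
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            s1, e1 = live_ranges[a]
            s2, e2 = live_ranges[b]
            if s1 <= e2 and s2 <= e1:  # intervals overlap
                interferes[a].add(b)
                interferes[b].add(a)

    # Greedy coloring, highest-degree variables first: give each variable
    # the lowest slot not already used by an interfering neighbor.
    slot = {}
    for v in sorted(names, key=lambda v: -len(interferes[v])):
        used = {slot[n] for n in interferes[v] if n in slot}
        c = 0
        while c in used:
            c += 1
        slot[v] = c
    return slot

# Four variables, only two of which are ever live simultaneously,
# fold into two storage slots.
ranges = {"a": (0, 3), "b": (4, 9), "c": (2, 6), "d": (7, 8)}
print(overlay_variables(ranges))
```

Greedy coloring is only a heuristic (optimal graph coloring is NP-hard), but it captures the essential point: storage demand is governed by the maximum number of simultaneously live variables, not by the number of declared variables.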