This paper presents two techniques for improving garbage collection performance: generational stack collection and profile-driven pretenuring. The first is applicable to stack-based implementations of functional languages, while the second is useful for any generational collector. We have implemented both techniques in a generational collector used by the TIL compiler (Tarditi, Morrisett, Cheng, Stone, Harper, and Lee 1996), and have observed decreases in garbage collection times of as much as 70% and 30%, respectively.

Functional languages encourage the use of recursion, which can lead to long chains of activation records. When a collection occurs, these activation records must be scanned for roots. We show that scanning many activation records can take so long as to become the dominant cost of garbage collection. However, most deep stacks unwind very infrequently, so most of the root information obtained from the stack remains unchanged across successive garbage collections. Generational stack collection greatly reduces the stack-scan cost by reusing information from previous scans.

Generational techniques have been successful in reducing the cost of garbage collection (Ungar 1984). Various complex heap arrangements and tenuring policies have been proposed to increase the effectiveness of generational techniques by reducing the cost and frequency of scanning and copying. In contrast, we show that by using profile information to make lifetime predictions, pretenuring can avoid copying data altogether. In essence, this technique uses a refinement of the generational hypothesis (most data die young) together with a locality principle concerning the age of data: most allocation sites produce data that dies immediately, while a few allocation sites consistently produce data that survives many collections.
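The abstract's core observation, that frames deep in the stack rarely change between collections, can be illustrated with a minimal sketch. All names here (`StackScanner`, `note_pop`, `find_roots`, the watermark representation) are hypothetical illustrations, not the paper's actual implementation: the idea is simply to cache per-frame roots and rescan only frames at or above a watermark that tracks how far the stack has unwound since the last collection.

```python
# Hypothetical sketch of generational stack scanning.
# Frames below the watermark have not been touched since the last
# collection, so their cached roots can be reused instead of rescanned.

class StackScanner:
    def __init__(self):
        self.cached_roots = []   # per-frame root lists from the last scan
        self.watermark = 0       # frames below this index are unchanged

    def note_pop(self, frame_index):
        # The mutator popped the stack down to frame_index, so that frame
        # (and anything pushed above it later) must be rescanned.
        self.watermark = min(self.watermark, frame_index)

    def scan(self, stack, find_roots):
        # Reuse cached roots for the unchanged prefix of the stack ...
        roots = self.cached_roots[:self.watermark]
        # ... and rescan only frames at or above the watermark.
        roots += [find_roots(f) for f in stack[self.watermark:]]
        self.cached_roots = roots
        self.watermark = len(stack)  # the whole stack is now freshly scanned
        return [r for frame_roots in roots for r in frame_roots]
```

On a deep stack that only grows between collections, each scan after the first visits just the newly pushed frames, which is how the stack-scan cost stops being proportional to total stack depth.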
We describe the design and implementation of a compiler that automatically translates ordinary programs written in a subset of ML into code that generates native code at run time. Run-time code generation can make use of values and invariants that cannot be exploited at compile time, yielding code that is often superior to statically optimal code. But the cost of optimizing and generating code at run time can be prohibitive. We demonstrate how compile-time specialization can reduce the cost of run-time code generation by an order of magnitude without greatly affecting code quality. Several benchmark programs are examined, which exhibit an average cost of only six cycles per instruction generated at run time.
The Ergo Support System (ESS) is an engineering framework for experimentation and prototyping to support the application of formal methods to program development, ranging from program analysis and derivation to proof-theoretic approaches. The ESS is a growing suite of tools that are linked together by means of a set of abstract interfaces. The principal engineering challenge is the design of abstract interfaces that are semantically rich and yet flexible enough to permit experimentation with a wide variety of formally-based program and proof development paradigms and associated languages. As part of the design of ESS, several abstract interface designs have been developed that provide for more effective component integration while preserving flexibility and the potential for scaling. A benefit of the open architecture approach of ESS is the ability to mix formal and informal approaches in the same environment architecture. The ESS has already been applied in a number of formal methods experiments.
In recent years, advances in machine learning and related fields have led to significant advances in a range of user-interface technologies, including audio processing, speech recognition, and natural language processing. These advances in turn have enabled speech-based digital assistants and speech-to-speech translation systems to become practical to deploy on a large scale. In essence, machines are becoming capable of hearing what we are saying. But will they understand what we want them to do when we talk to them? What are the prospects for getting useful work done, in essence by synthesizing programs, through the act of having a conversation with a computer? In this lecture, I will speculate on the central role that programming-language design and program synthesis may play in this possible, and I will argue likely, future of computing, one in which every user writes programs, every day, by conversing with a computing system.