Proceedings of the 16th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages (POPL '89), 1989
DOI: 10.1145/75277.75305

Incremental computation via function caching

Cited by 149 publications (99 citation statements)
References 12 publications
“…All of these designs focus on strict languages with call-by-value (CBV) functions that eagerly evaluate function arguments, and none of them supported efficient reordering. Approaches based on pure memoization (function caching) alone [16,14] allow for incrementality with reordering; since they lack the fine-grained dependence tracking of modifiable references, they can only provide coarse-grained reuse and are inefficient for deeply-nested changes (e.g., changing the last element of a list). Previous work introduced a cost semantics for self-adjusting computation with updatable references and monotonic reuse, and showed analogous correctness properties of change-propagation and compilation [12].…”
Section: Related Work
confidence: 99%
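A minimal sketch of why pure function caching yields only coarse-grained reuse (the names and cons-list representation here are my own illustration, not taken from the cited papers): a memoized recursive call is reused only when its entire argument, i.e. the whole list suffix, is unchanged, so changing the last element invalidates every suffix while changing the first element leaves all cached suffix results usable.

```python
from functools import lru_cache

# Pure function caching over an immutable cons list ((head, tail) or ()).
# A cache hit requires the *entire* argument to be identical, so reuse
# is coarse-grained: only unchanged suffixes are reused.

@lru_cache(maxsize=None)
def map_inc(xs):
    """Add 1 to each element of a cons list."""
    if xs == ():
        return ()
    head, tail = xs
    return (head + 1, map_inc(tail))

def from_list(lst):
    """Build a cons list from a Python list."""
    out = ()
    for x in reversed(lst):
        out = (x, out)
    return out

a = from_list([1, 2, 3, 4])
map_inc(a)                        # cold run: every suffix is computed
map_inc(from_list([9, 2, 3, 4]))  # first element changed: the shared
                                  # suffix (2,(3,(4,()))) hits the cache
map_inc(from_list([1, 2, 3, 9]))  # last element changed: every nonempty
                                  # suffix differs, so almost nothing is reused
```

This is exactly the deeply-nested-change weakness the excerpt describes: the closer the change sits to the end of the list, the fewer suffixes survive for reuse.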
“…Programming languages for incremental computation provide compile- and run-time support to (semi-)automatically derive incremental programs from static programs [8,16,17]. In particular, self-adjusting computation (SAC) is a language-based approach that provides a general-purpose change-propagation mechanism to update the output [1].…”
Section: Introduction
confidence: 99%
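As a rough sketch of the change-propagation idea (my own toy code, far simpler than SAC's actual dynamic dependence graphs and memoized traces), one can picture modifiable cells that record their readers and re-run exactly those readers when written:

```python
# Toy sketch of change propagation with modifiable references.
# Real SAC maintains a dynamic dependence graph and replays a trace;
# this only shows the core idea: a write re-runs the readers of a cell
# instead of the whole program.

class Mod:
    def __init__(self, value):
        self.value = value
        self.readers = set()        # thunks that depend on this cell

    def read(self, thunk):
        self.readers.add(thunk)     # record the dependence
        return self.value

    def write(self, value):
        self.value = value
        for thunk in list(self.readers):
            thunk()                 # propagate the change

result = {}
x = Mod(2)

def square():
    result["out"] = x.read(square) ** 2

square()      # initial run: result["out"] == 4
x.write(3)    # change propagation re-runs square: result["out"] == 9
```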
“…The general concept of memoization in computer programs has been around for a long time; the idea is to speed up programs by avoiding repetitive/redundant function calls [9,10,11]. Nevertheless, in practice, the general notion of memoization has not gained success for the following reasons: 1) the proposed techniques usually require detailed profiling information about the runtime behaviour of the program, which makes them difficult to implement [12]; 2) the techniques are usually generic methods which do not concentrate on any particular class of input data or algorithms [13]; and 3) the memoization engines used by these techniques are based on non-fuzzy comparison, where two inputs are considered equal (in which case one's result can be reused for the other) only if they are identical.…”
Section: Window Memoization
confidence: 99%
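To contrast with the exact-match ("non-fuzzy") lookup criticized above, here is a hedged sketch of a fuzzy memoization cache in which inputs that are merely similar share one entry; the quantization scheme and `step` parameter are illustrative assumptions of mine, not the cited paper's design:

```python
# Sketch of fuzzy memoization: nearby inputs quantize to the same cache
# key, so one input's result can be reused for another that is close but
# not identical. The quantization step is an illustrative assumption.

def fuzzy_memoize(fn, step=0.1):
    cache = {}
    def wrapped(x):
        key = round(x / step)      # nearby inputs collapse to one key
        if key not in cache:
            cache[key] = fn(x)
        return cache[key]
    return wrapped

import math
fast_sqrt = fuzzy_memoize(math.sqrt, step=0.05)
fast_sqrt(2.00)   # computed and cached
fast_sqrt(2.02)   # same quantized key: result reused, not recomputed
```

The trade-off is accuracy for hit rate: a larger `step` reuses more aggressively at the cost of returning results computed for slightly different inputs.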
“…The field of incremental computation aims at deriving software that can respond automatically and efficiently to changing data. Earlier work investigated techniques based on static dependency graphs [14,38] and memoization [32,21]. More recent work on self-adjusting computation introduced dynamic dependency graphs [3] and a way to integrate them with a form of memoization [2,4].…”
Section: Introduction
confidence: 99%