It is well known that many environment-based abstract machines can be seen as strategies in lambda calculi with explicit substitutions (ES). Recently, graphical syntaxes and linear logic led to the linear substitution calculus (LSC), a new approach to ES that is halfway between small-step calculi and traditional calculi with ES. This paper studies the relationship between the LSC and environment-based abstract machines. While traditional calculi with ES simulate abstract machines, the LSC rather distills them: some transitions are simulated while others vanish, as they map to a notion of structural congruence. The distillation process unveils that abstract machines in fact implement weak linear head reduction, a notion of evaluation having a central role in the theory of linear logic. We show that such a pattern applies uniformly in call-by-name, call-by-value, and call-by-need, covering many machines in the literature. We start by distilling the KAM, the CEK, and a sketch of the ZINC, and then provide simplified versions of the SECD, the lazy KAM, and Sestoft's machine. Along the way we also introduce some new machines with global environments. Moreover, we show that distillation preserves the time complexity of the executions, i.e. the LSC is a complexity-preserving abstraction of abstract machines.
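For concreteness, here is a minimal OCaml sketch of the KAM (our own illustration, not the paper's formal presentation), with de Bruijn indices, closures, and an argument stack. Under the distillation described above, roughly, the application transition is a commutative step absorbed by the structural congruence, while the abstraction and variable transitions correspond to multiplicative and exponential steps of the LSC.

    (* A minimal KAM sketch (illustrative only). Terms use de Bruijn
       indices; a state is code + local environment + argument stack. *)
    type term = Var of int | Lam of term | App of term * term

    (* A closure pairs code with the environment it was built in. *)
    type closure = Clo of term * closure list

    (* Runs until a weak head normal form (an abstraction facing an
       empty stack); intended for closed terms, and may diverge. *)
    let rec kam (t : term) (env : closure list) (stack : closure list) : closure =
      match t, stack with
      | App (u, v), _ ->                        (* push the argument *)
          kam u env (Clo (v, env) :: stack)
      | Lam b, c :: rest ->                     (* bind the top of the stack *)
          kam b (c :: env) rest
      | Var n, _ ->                             (* look up and jump *)
          let Clo (u, env') = List.nth env n in
          kam u env' stack
      | Lam _, [] -> Clo (t, env)               (* weak head normal form *)

For instance, kam (App (Lam (Var 0), Lam (Var 0))) [] [] evaluates (λx.x)(λx.x) to the identity closure.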
We present a call-by-need strategy for computing strong normal forms of open terms (reduction is admitted inside the body of abstractions and substitutions, and the terms may contain free variables), which guarantees that arguments are only evaluated when needed and at most once. The strategy is shown to be complete with respect to β-reduction to strong normal form. The proof of completeness relies on two key tools: (1) the definition of a strong call-by-need calculus where reduction may be performed inside any context, and (2) the use of non-idempotent intersection types. More precisely, terms admitting a β-normal form in pure lambda calculus are typable, typability implies (weak) normalisation in the strong call-by-need calculus, and weak normalisation in the strong call-by-need calculus implies normalisation in the strong call-by-need strategy. Our (strong) call-by-need strategy is also shown to be conservative over the standard (weak) call-by-need.

Grégoire and Leroy [2002] have proposed a strong normalisation function N which they have proved to be correct, in the sense that N computes the normal form of strongly normalising, closed λ-terms. This function essentially consists in iterating the standard (weak) call-by-value (CBV) strategy on terms possibly containing free variables; Grégoire and Leroy [2002] refer to this variant of CBV as symbolic CBV.

The starting point of this work is the observation that, rather than iterating CBV, one should consider an appropriate notion of call-by-need (CBNd) that computes strong normal forms (of open terms). In this paper we replace symbolic CBV by symbolic CBNd, consisting in iterating the standard CBNd strategy on terms possibly containing free variables. Our strategy computes strong normal forms in which, in contrast to N, arguments are evaluated only if they are needed and, moreover, are evaluated at most once, thus avoiding duplication of work. For example, the function N of Grégoire and Leroy [2002] computes the value of the argument (λz.z)(λz.z) in the λ-term (λx.λy.(λz.z)y)((λz.z)(λz.z)) even though this value is not required for the strong normal form of the whole term, whereas our strategy does not. Also, our strategy is normalising, i.e. it computes the normal form of weakly normalising terms, that is, of terms that admit a normal form but whose evaluation may diverge along some other strategies.

Defining Strong Call-by-Need. Some of the subtleties involved in developing a theory of CBNd to strong normal form are illustrated next. In what follows, we write ⇝ for the CBNd strategy to strong normal form devised in this paper and motivated below.

Consider a term (λx.t)(id id), where id abbreviates the identity term λz.z and t is an arbitrary subterm (in Fig. 1, t is chosen to be λy.yxx). The first reduction step for a term of this shape is a common (weak) call-by-need step: the β-redex (λx.t)(id id) is turned into an explicit binding between the variable x and the argument id id in the expression t, which is often written let x = (id id) in t, or here t[x\id id], where [x\id id] denotes an explicit substitution.
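To make this first step concrete, the following OCaml sketch (our own; the type and function names are illustrative) represents terms with explicit substitutions and implements a β-step "at a distance" that records a binding instead of substituting:

    (* Terms with explicit substitutions: Sub (t, x, u) is t[x\u]. *)
    type term =
      | Var of string
      | Lam of string * term
      | App of term * term
      | Sub of term * string * term

    (* One beta step at the root, fired "at a distance" through a
       stack of explicit substitutions: L<λx.t> u ↦ L<t[x\u]>.
       No substitution is performed; the binding is merely recorded. *)
    let beta_at_root : term -> term option = function
      | App (f, u) ->
          let rec go = function
            | Lam (x, t) -> Some (Sub (t, x, u))
            | Sub (t, y, v) ->
                Option.map (fun t' -> Sub (t', y, v)) (go t)
            | _ -> None
          in
          go f
      | _ -> None

On (λx.t)(id id) this produces t[x\id id], the explicit binding discussed above.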
Abstract machines for the strong evaluation of λ-terms (that is, under abstractions) are a mostly neglected topic, despite their use in the implementation of proof assistants and higher-order logic programming languages. This paper introduces a machine for the simplest form of strong evaluation, leftmost-outermost (call-by-name) evaluation to normal form, proving it correct and complete and bounding its overhead. The machine, dubbed the Strong Milner Abstract Machine, is a variant of the KAM that computes normal forms and uses just one global environment. Its properties are studied via a special form of decoding, called a distillation, into the Linear Substitution Calculus, which neatly reformulates the machine as a standard micro-step strategy for explicit substitutions, namely linear leftmost-outermost reduction, i.e. the extension to normal form of linear head reduction. Additionally, the overhead of the machine is shown to be linear both in the number of steps and in the size of the initial term, validating its design. The study highlights two distinguishing features of strong machines, namely backtracking phases and their interaction with abstractions and environments.
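As an illustration of the "one global environment" design, here is a sketch of the weak-evaluation core of such a machine in OCaml (our simplification: the strong machine's backtracking phases and normal-form computation are omitted, and the initial term is assumed to have pairwise distinct bound names). Duplication happens only at variable lookup, where code is copied out of the global environment under a fresh renaming of its binders.

    (* Weak core of a machine with a single global environment. *)
    type term = Var of string | Lam of string * term | App of term * term

    let counter = ref 0
    let fresh x = incr counter; x ^ "#" ^ string_of_int !counter

    (* Copy a term, renaming every binder freshly, so that code taken
       from the global environment stays well-named. *)
    let rec rename sub = function
      | Var x -> (try List.assoc x sub with Not_found -> Var x)
      | Lam (x, t) -> let x' = fresh x in Lam (x', rename ((x, Var x') :: sub) t)
      | App (t, u) -> App (rename sub t, rename sub u)

    (* A state is code + argument stack + global environment. *)
    let rec run code stack env =
      match code, stack with
      | App (t, u), _ -> run t (u :: stack) env              (* search *)
      | Lam (x, t), u :: rest -> run t rest ((x, u) :: env)  (* beta: record binding *)
      | Var x, _ ->
          (match List.assoc_opt x env with
           | Some t -> run (rename [] t) stack env           (* substitution *)
           | None -> (code, stack, env))                     (* free variable: stop *)
      | Lam _, [] -> (code, stack, env)                      (* weak normal form *)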
We consider two repeat-finding problems relative to sets of strings: (a) find the longest substrings that occur in every string of a given set; (b) find the maximal repeats in a given string that occur in no string of a given set. Our solutions are based on the suffix array construction, requiring O(m) memory, where m is the length of the longest input string, and O(n log m) time, where n is the whole input size (the sum of the lengths of the strings in the input). The most expensive part of our algorithms is the computation of several suffix arrays. We give an implementation and experimental results that demonstrate the efficiency of our algorithms in practice, even for very large inputs.
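As a rough illustration of the underlying data structures (not the paper's memory-efficient construction), a naive suffix array and its LCP array can be computed in OCaml as follows; repeat-finding algorithms of this kind read repeats off large entries of the LCP array:

    (* Naive suffix array: suffix start positions in lexicographic
       order. Quadratic-time illustration only; the paper relies on
       an O(m)-memory construction instead. *)
    let suffix_array (s : string) : int array =
      let n = String.length s in
      let suffix i = String.sub s i (n - i) in
      let sa = Array.init n (fun i -> i) in
      Array.sort (fun i j -> compare (suffix i) (suffix j)) sa;
      sa

    (* lcp.(k) = length of the longest common prefix of the suffixes
       starting at sa.(k) and sa.(k+1); repeated substrings manifest
       as large entries of this array. *)
    let lcp_array (s : string) (sa : int array) : int array =
      let n = Array.length sa in
      let lcp i j =
        let rec go k =
          if i + k < String.length s && j + k < String.length s
             && s.[i + k] = s.[j + k]
          then go (k + 1)
          else k
        in
        go 0
      in
      Array.init (max 0 (n - 1)) (fun k -> lcp sa.(k) sa.(k + 1))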
In typical non-idempotent intersection type systems, proof normalization is not confluent. In this paper we introduce a confluent non-idempotent intersection type system for the λ-calculus. Typing derivations are presented using a proof-term syntax. The system enjoys good properties: subject reduction, strong normalization, and a very regular theory of residuals. A correspondence with the λ-calculus is established by simulation theorems. The machinery of non-idempotent intersection types allows us to track the usage of resources required to obtain an answer. In particular, it induces a notion of garbage: a computation is garbage if it does not contribute to obtaining an answer. Using these notions, we show that the derivation space of a λ-term may be factorized using a variant of the Grothendieck construction for semilattices. This means, in particular, that any derivation in the λ-calculus can be uniquely written as a garbage-free prefix followed by garbage.

In this case, the space of computations is quite easy to understand, because the subexpressions (1 + 1) and (2 * 3 + 1) cannot interact with each other. The first phenomenon is duplication: in the diagram above, the steps R1 and R2 are residuals of R and, conversely, R is an ancestor of R1 and R2. The second phenomenon is erasure: in the diagram above, the step T erases the step R′1, resulting in no copies of R′1. The third phenomenon is creation: in the diagram above, the step R2 creates the step T, meaning that T is not a residual of a step that existed prior to executing R2; that is, T has no ancestor.

These three interaction phenomena, especially duplication and erasure, are intimately related with the management of resources. In this work, we aim to explore the hypothesis that having an explicit representation of resource management may provide insight on the structure of derivation spaces. There are many existing λ-calculi that deal with resource management explicitly [6,15,20,21], most of which draw inspiration from Girard's Linear Logic [18]. Recently, calculi endowed with non-idempotent intersection type systems have received some attention [14,5,7,8,19,34,22]. These type systems are able to statically capture non-trivial dynamic properties of terms, particularly normalization, while at the same time being amenable to elementary proof techniques by induction. Intersection types were originally proposed by Coppo and Dezani-Ciancaglini [11] to study termination in the λ-calculus. They are characterized by the presence of an intersection type constructor A ∩ B. Non-idempotent intersection type systems are distinguished from their usual idempotent counterparts by the fact that intersection is not declared to be idempotent, i.e. A and A ∩ A are not equivalent types. Rather, intersection behaves like a multiplicative connective in linear logic: arguments to functions are typed many times, typically once for each time the argument will be used. Non-idempotent intersection types were originally formulated by Gardner [17] and later reintroduced by de Carvalho [9].
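To make the "typed once per use" idea concrete, here is a schematic non-idempotent typing, in our notation with multisets written [·], of the term λx.x x: the argument is used twice, so it receives a two-element multiset type.

    % Schematic derivation: each use of x consumes one element of the
    % multiset; an idempotent system would collapse the two assumptions.
    \[
    \frac{x : [\,[A] \to B\,] \vdash x : [A] \to B
          \qquad
          x : [A] \vdash x : A}
         {x : [\,[A] \to B,\; A\,] \vdash x\,x : B}
    \qquad\text{hence}\qquad
    \vdash \lambda x.\,x\,x : [\,[A] \to B,\; A\,] \to B
    \]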
We study superdevelopments in the weak lambda calculus of Çağman and Hindley, a confluent variant of the standard weak lambda calculus in which reduction below lambdas is forbidden. In contrast to developments, a superdevelopment from a term M allows not only residuals of redexes in M to be reduced but also some newly created ones. In the lambda calculus there are three ways new redexes may be created; in the weak lambda calculus a new form of redex creation is possible. We present labeled and simultaneous reduction formulations of superdevelopments for the weak lambda calculus and prove them equivalent.
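As a schematic illustration of the extra creation form (our reconstruction, not an example taken from the paper): in a weak calculus where a redex is contractible only if it contains no variables bound outside it, substitution can unfreeze a frozen redex. For a closed term M, the redex (λz.z)x below is not a weak redex, since x is bound by the enclosing λx, yet its residual after the outer step is one:

    \[
    (\lambda x.\, \lambda y.\, (\lambda z.\, z)\, x)\, M
    \;\to\;
    \lambda y.\, (\lambda z.\, z)\, M
    \]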
We study a conservative extension of classical propositional logic distinguishing between four modes of statement: a proposition may be affirmed or denied, and it may be strong or classical. Proofs of strong propositions must be constructive in some sense, whereas proofs of classical propositions proceed by contradiction. The system, in natural deduction style, is shown to be sound and complete with respect to a Kripke semantics. We develop the system from the perspective of the propositions-as-types correspondence by deriving a term assignment system with confluent reduction. The proof of strong normalization relies on a translation to System F with Mendler-style recursion.