The Edinburgh Logical Framework (LF) provides a means to define (or present) logics. It is based on a general treatment of syntax, rules, and proofs by means of a typed λ-calculus with dependent types. Syntax is treated in a style similar to, but more general than, Martin-Löf's system of arities. The treatment of rules and proofs focuses on Martin-Löf's notion of a judgement. Logics are represented in LF via a new principle, the judgements-as-types principle, whereby each judgement is identified with the type of its proofs. This allows for a smooth treatment of discharge and variable occurrence conditions and leads to a uniform treatment of rules and proofs whereby rules are viewed as proofs of higher-order judgements and proof checking is reduced to type checking. The practical benefit of our treatment of formal systems is that logic-independent tools such as proof editors and proof checkers can be constructed.
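To make the judgements-as-types principle concrete, here is a minimal sketch (my own illustration, not from the paper; the encoding and names are invented) using an OCaml GADT: the judgement "n is even" becomes a type, each inference rule becomes a constructor, and a proof is simply a well-typed value.

```ocaml
(* An illustration (not from the paper) of judgements as types, using an
   OCaml GADT. The judgement "n is even" becomes the type [_ even]; each
   inference rule becomes a constructor; a value of the type is a proof. *)
type zero
type 'n succ

type _ even =
  | Even_zero : zero even                     (* rule: zero is even *)
  | Even_ss   : 'n even -> 'n succ succ even  (* rule: if n is even, n+2 is even *)

(* A proof object for "4 is even"; checking it is just type checking. *)
let four_is_even : zero succ succ succ succ even =
  Even_ss (Even_ss Even_zero)
```

An ill-formed derivation fails in the same way an ill-typed program does: annotating `Even_ss Even_zero` with the type `zero succ succ succ even` (a claim that 3 is even) is rejected by the type checker, which is exactly the reduction of proof checking to type checking that the abstract describes.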
Traditional techniques for implementing polymorphism use a universal representation for objects of unknown type. Often, this forces a compiler to use universal representations even when the types of objects are known. We examine an alternative approach for compiling polymorphism in which types are passed as arguments to polymorphic routines in order to determine the representation of an object. This approach allows monomorphic code to use natural, efficient representations, supports separate compilation of polymorphic definitions and, unlike coercion-based implementations of polymorphism, lets natural representations be used for mutable objects such as refs and arrays. We are particularly interested in the typing properties of an intermediate language that allows run-time type analysis to be coded within the language. This allows us to compile many representation transformations and many language features without adding new primitive operations to the language. In this paper, we provide a core target language in which type-analysis operators can be coded within the language and the types of such operators can be accurately tracked. The target language is powerful enough to code a variety of useful features, yet type checking remains decidable. We show how to translate an ML-like language into the target language so that primitive operators can analyze types to produce efficient representations. We demonstrate the power of the "user-level" operators by coding flattened tuples, marshalling, type classes, and a form of type dynamic within the language.
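The following is a hedged sketch of the idea in OCaml (the names `rep` and `marshal` are illustrative, not the paper's operators): a GADT of type representations plays the role of the run-time type argument, and marshalling is coded "within the language" by analysing it.

```ocaml
(* A GADT of run-time type representations stands in for the type
   arguments that the paper passes to polymorphic routines. The names
   [rep] and [marshal] are illustrative, not the paper's operators. *)
type _ rep =
  | Int  : int rep
  | Bool : bool rep
  | Pair : 'a rep * 'b rep -> ('a * 'b) rep

(* Marshalling coded "within the language": the output format is chosen
   by analysing the type representation, so no new primitive operation
   is needed, in the spirit of the paper's user-level type analysis. *)
let rec marshal : type a. a rep -> a -> string = fun r v ->
  match r with
  | Int  -> string_of_int v
  | Bool -> string_of_bool v
  | Pair (ra, rb) ->
      let (x, y) = v in
      "(" ^ marshal ra x ^ "," ^ marshal rb y ^ ")"

let () = print_endline (marshal (Pair (Int, Bool)) (3, true))
```

Running this prints `(3,true)`: the representation-specific behaviour is selected entirely by the type passed at run time, while the GADT indexing keeps the operator's type accurately tracked.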
Decidability of definitional equality and conversion of terms into canonical form play a central role in the meta-theory of a type-theoretic logical framework. Most studies of definitional equality are based on a confluent, strongly normalizing notion of reduction. Coquand has considered a different approach, directly proving the correctness of a practical equivalence algorithm based on the shape of terms. Neither approach appears to scale well to richer languages with unit types or subtyping, and neither directly addresses the problem of conversion to canonical form. In this paper we present a new, type-directed equivalence algorithm for the LF type theory that overcomes the weaknesses of previous approaches. The algorithm is practical, scales to more expressive languages, and yields a new notion of canonical form sufficient for adequate encodings of logical systems. The algorithm is proved complete by a Kripke-style logical relations argument similar to that suggested by Coquand. Crucially, both the algorithm itself and the logical relations rely only on the shapes of types, ignoring dependencies on terms.

Introduction

At present the mechanization of constructive reasoning relies almost entirely on type theories of various forms. The principal reason is that the computational meaning of constructive proofs is an integral part of the type theory itself. The main computational mechanism in such type theories is reduction, which has therefore been studied extensively.

For logical frameworks the case for type-theoretic meta-languages is also compelling, since they allow us to internalize deductions as objects. The validity of a deduction is then verified by type checking in the meta-language. To ensure that proof checking remains decidable under this representation, the type-checking problem for the meta-language must also be decidable. To support deductive systems of practical interest, the type theory must support dependent types, that is, types that depend on objects.

The correctness of the representation of a logic in type theory is given by an adequacy theorem that correlates the syntax and deductions of the logic with canonical forms of suitable type. To establish a precise correspondence, canonical forms are taken to be β-normal, η-long forms. In particular, it is important that canonical forms enjoy the property that constants and variables of higher type are "fully applied", that is, that each occurrence is applied to enough arguments to reach a base type.

Thus we see that the methodology of logical frameworks relies on two fundamental metatheoretic results: the decidability of type checking, and the existence of canonical forms. For many type theories the decidability of type checking is easily seen to reduce to the decidability of definitional equality of types and terms. Therefore we focus attention on the decision problem for definitional equality and on the conversion of terms to canonical form.

Traditionally, both problems have been treated by considering normal forms for β, and possibly η, reduction. If we ta...
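To convey the flavour of a type-directed equivalence algorithm, here is a much-simplified sketch in OCaml (my own condensation for a simply typed fragment, not the paper's algorithm for full LF): at arrow type, both terms are applied to a fresh variable, which builds in η; only at base type are terms weak-head-normalized and compared structurally.

```ocaml
(* A much-simplified sketch of type-directed equivalence for a simply
   typed fragment; the paper's algorithm handles full LF, erasing
   dependent types to simple "shapes" like these. *)
type ty = Base | Arrow of ty * ty

type tm =
  | Var of string
  | Lam of string * tm
  | App of tm * tm

(* Fresh-name supply for the eta step. *)
let gensym =
  let n = ref 0 in
  fun () -> incr n; Printf.sprintf "_x%d" !n

(* Naive substitution: adequate for this sketch, where arguments are
   closed terms or fresh variables; real code must avoid capture. *)
let rec subst x s = function
  | Var y -> if String.equal x y then s else Var y
  | Lam (y, b) -> if String.equal x y then Lam (y, b) else Lam (y, subst x s b)
  | App (f, a) -> App (subst x s f, subst x s a)

(* Weak head normalization: contract beta-redexes in head position only. *)
let rec whnf = function
  | App (f, a) ->
      (match whnf f with
       | Lam (x, b) -> whnf (subst x a b)
       | f' -> App (f', a))
  | t -> t

(* The shape of the type, not reduction alone, directs the comparison. *)
let rec equiv ty t1 t2 =
  match ty with
  | Arrow (_, b) ->
      (* eta: compare both sides applied to the same fresh variable *)
      let x = Var (gensym ()) in
      equiv b (App (t1, x)) (App (t2, x))
  | Base -> struct_eq (whnf t1) (whnf t2)

(* Structural comparison of weak-head-normal terms; the full algorithm
   compares arguments at the type of the neutral term's head. *)
and struct_eq t1 t2 =
  match t1, t2 with
  | Var x, Var y -> String.equal x y
  | App (f1, a1), App (f2, a2) ->
      struct_eq f1 f2 && struct_eq (whnf a1) (whnf a2)
  | _, _ -> false
```

With this discipline, `equiv (Arrow (Base, Base)) (Lam ("x", Var "x")) (Lam ("y", Var "y"))` returns `true` without any global normalization: the algorithm never η-expands or fully β-reduces terms, which is what lets the approach scale to features such as unit types where pure reduction-based methods struggle.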
Types are the central organizing principle of the theory of programming languages. In this innovative book, Professor Robert Harper offers a fresh perspective on the fundamentals of these languages through the use of type theory. Whereas most textbooks on the subject emphasize taxonomy, Harper instead emphasizes genetics, examining the building blocks from which all programming languages are constructed. Language features are manifestations of type structure. The syntax of a language is governed by the constructs that define its types, and its semantics is determined by the interactions among those constructs. The soundness of a language design – the absence of ill-defined programs – follows naturally. Professor Harper's presentation is simultaneously rigorous and intuitive, relying on elementary mathematics. The framework he outlines scales easily to a rich variety of language concepts and is directly applicable to their implementation. The result is a lucid introduction to programming theory that is both accessible and practical.
The LF logical framework codifies a methodology for representing deductive systems, such as programming languages and logics, within a dependently typed λ-calculus. In this methodology, the syntactic and deductive apparatus of a system is encoded as the canonical forms of associated LF types; an encoding is correct (adequate) if and only if it defines a compositional bijection between the apparatus of the deductive system and the associated canonical forms. Given an adequate encoding, one may establish metatheoretic properties of a deductive system by reasoning about the associated LF representation. The Twelf implementation of the LF logical framework is a convenient and powerful tool for putting this methodology into practice. Twelf supports both the representation of a deductive system and the mechanical verification of proofs of metatheorems about it.

The purpose of this paper is to provide an up-to-date overview of the LF λ-calculus, the LF methodology for adequate representation, and the Twelf methodology for mechanizing metatheory. We begin by defining a variant of the original LF language, called Canonical LF, in which only canonical forms (long βη-normal forms) are permitted. This variant is parameterized by a subordination relation, which enables modular reasoning about LF representations. We then give an adequate representation of a simply typed λ-calculus in Canonical LF, both to illustrate adequacy and to serve as an object of analysis. Using this representation, we formalize and verify the proofs of some metatheoretic results, including preservation, determinacy, and strengthening. Each example illustrates a significant aspect of using LF and Twelf for formalized metatheory.
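As a concrete illustration of the methodology, the following is a sketch of a standard higher-order LF signature for the simply typed λ-calculus of the kind the paper analyses (constant names are illustrative; the paper's own Canonical LF signature may differ in detail):

```latex
% A sketch of a higher-order LF signature for the simply typed
% lambda-calculus; constant names are illustrative.
\begin{align*}
  \mathsf{tp}  &: \mathsf{type}
    & &\text{object-language types}\\
  \mathsf{arr} &: \mathsf{tp} \rightarrow \mathsf{tp} \rightarrow \mathsf{tp}
    & &\text{function types}\\
  \mathsf{tm}  &: \mathsf{type}
    & &\text{object-language terms}\\
  \mathsf{app} &: \mathsf{tm} \rightarrow \mathsf{tm} \rightarrow \mathsf{tm}
    & &\text{application}\\
  \mathsf{lam} &: \mathsf{tp} \rightarrow (\mathsf{tm} \rightarrow \mathsf{tm}) \rightarrow \mathsf{tm}
    & &\text{abstraction, via LF binding}\\
  \mathsf{of}  &: \mathsf{tm} \rightarrow \mathsf{tp} \rightarrow \mathsf{type}
    & &\text{typing judgement, as a type family}
\end{align*}
```

Object-level binding is represented by LF-level binding: `lam` takes an LF function of type tm → tm, so object variables are LF variables, and adequacy amounts to a compositional bijection between α-equivalence classes of object terms and canonical forms of type `tm`. The typing judgement appears as the type family `of`, so derivations are canonical forms too, which is what makes mechanical verification of metatheorems in Twelf possible.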