2019
DOI: 10.1017/s0956796818000217

How to evaluate the performance of gradual type systems

Abstract: A sound gradual type system ensures that untyped components of a program can never break the guarantees of statically typed components. This assurance relies on runtime checks, which in turn impose performance overhead in proportion to the frequency and nature of interaction between typed and untyped components. The literature on gradual typing lacks rigorous descriptions of methods for measuring the performance of gradual type systems. This gap has consequences for the implementors of gradual type systems and…
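As a rough illustration (not drawn from the paper itself), the runtime checks the abstract refers to can be pictured as a boundary wrapper around a typed function. The decorator name and the monomorphic types below are hypothetical; real systems insert such checks automatically at typed/untyped boundaries:

```python
def boundary_check(arg_type, ret_type):
    """Guard a typed function so untyped callers cannot violate its type."""
    def wrap(fn):
        def checked(x):
            # Check the untyped caller's argument before entering typed code.
            if not isinstance(x, arg_type):
                raise TypeError(f"expected {arg_type.__name__}, got {type(x).__name__}")
            result = fn(x)
            # Check the result on the way back to untyped code.
            if not isinstance(result, ret_type):
                raise TypeError(f"expected {ret_type.__name__} result")
            return result
        return checked
    return wrap

@boundary_check(int, int)
def double(n: int) -> int:
    return n + n
```

Every boundary crossing pays the cost of these checks, which is exactly the overhead the paper proposes to measure systematically.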

Cited by 16 publications (22 citation statements)
References 64 publications
“…That is, for these benchmarks on transient type checks, measuring just the untyped and fully-typed configurations would provide excellent estimates of a benchmark's performance bounds. This is different to the experience of other kinds of gradual typing, where the best, and most importantly worst, configurations are not always those fully typed or fully untyped [16]. However, the Richards benchmark does have sections outside these bounds, and isolated executions of a couple of others (Fannkuch and DeltaBlue) are also outliers.…”
Section: Performance of Benchmark Configurations
confidence: 73%
“…Our goal is to identify which type annotations in Grace programs cause performance effects. To this end, we built upon the so-called "Takikawa" or "Takikawa-Greenman" evaluation protocol [16,36]. It uses 2 N configurations of each benchmark.…”
Section: Experimental Methodology
confidence: 99%
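A minimal sketch of the configuration space the Takikawa-Greenman protocol explores, assuming types are toggled per module (the module names below are made up; real benchmarks may instead toggle individual annotation sites):

```python
from itertools import product

def configurations(modules):
    """Yield every typed/untyped assignment over the modules: 2**N in total."""
    for bits in product((False, True), repeat=len(modules)):
        yield {module: typed for module, typed in zip(modules, bits)}

# 3 modules give 2**3 = 8 configurations,
# ranging from fully untyped to fully typed.
configs = list(configurations(["main", "parser", "util"]))
```

Measuring every configuration is what makes the protocol expensive: the space doubles with each module, which is why some follow-up work samples the lattice instead of enumerating it.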
“…First, Natural is best for developers who wish to get precise blame information, and it obviously suffers from high overhead. As Greenman et al [2019b] point out, the latter is partly due to the wrappers needed to protect typed values from bad untyped code and untyped values from bad types.…”
Section: Discussion: The Trade-off Between the Precision and Cost of …
confidence: 99%
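The wrapper cost this citation describes can be sketched as follows; the class and blame labels are illustrative, not the implementation of any cited system. Under Natural (guarded) semantics, a function value that crosses a typed/untyped boundary is proxied, and every later call re-checks the contract; crossings stack, so checks accumulate:

```python
class Wrap:
    """Natural-style proxy: re-checks a function's contract on every call."""
    def __init__(self, fn, arg_type, ret_type, blame):
        self.fn, self.arg_type, self.ret_type, self.blame = fn, arg_type, ret_type, blame

    def __call__(self, x):
        # A failed check names the boundary at fault (precise blame).
        if not isinstance(x, self.arg_type):
            raise TypeError(f"blame {self.blame}: bad argument {x!r}")
        result = self.fn(x)
        if not isinstance(result, self.ret_type):
            raise TypeError(f"blame {self.blame}: bad result {result!r}")
        return result

# Each boundary crossing adds another layer of checking.
inc = Wrap(lambda n: n + 1, int, int, blame="first boundary")
twice_crossed = Wrap(inc, int, int, blame="second boundary")
```

Calling `twice_crossed(1)` runs four `isinstance` checks for one addition; this accumulation is the source of the high overhead traded against precise blame.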
“…Second, these checks must enforce the behaviors allowed by the types. The implementation of complete monitoring demands a mechanism for tracking types, something that is occasionally impossible [Vitousek et al 2017] and always expensive [Allende et al 2013; Greenman et al 2019b; Takikawa et al 2015]. Studying typed-untyped interaction from the perspective of complete monitoring, though, suggests weaker properties and a compromise.…”
Section: Type Soundness Is Not Enough
confidence: 99%