2019
DOI: 10.1145/3341702

From high-level inference algorithms to efficient code

Abstract: Probabilistic programming languages are valuable because they allow domain experts to express probabilistic models and inference algorithms without worrying about irrelevant details. However, for decades there remained an important and popular class of probabilistic inference algorithms whose efficient implementation required manual low-level coding that is tedious and error-prone. They are algorithms whose idiomatic expression requires random array variables that are latent or whose likelihood is conjugate. A…

Cited by 12 publications (8 citation statements). References 44 publications (61 reference statements).
“…Performing simplification. Whereas disintegration seeks any program whose denoted measure differs from the given program in accordance with a semantic specification, simplification seeks a program whose denoted measure is the same as the given program's but whose efficiency or readability is improved. Our work is thus complementary to the work on simplification by , Gehr et al. [2016], and Walia et al. [2019]: the result of disintegration can be improved by simplification while preserving correctness, and it may also be possible to ease disintegration by first simplifying its input.…”
Section: Gibbs Sampling (mentioning)
Confidence: 97%
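
As a concrete illustration of a measure-preserving simplification (our example, not drawn from the cited papers), simplifiers of this kind exploit conjugacy identities such as the Normal-Normal marginalization

    \int_{\mathbb{R}} \mathcal{N}(x \mid \mu, \sigma^2)\, \mathcal{N}(y \mid x, \tau^2)\, dx = \mathcal{N}(y \mid \mu, \sigma^2 + \tau^2)

which lets a program that samples a latent x and then samples y around it be rewritten to sample y directly: the denoted measure on y is unchanged, but the latent variable and the integral over it are gone.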
“…Further, symbolic computation enables the user to specify an observation or proposal by applying deterministic operations such as square root and addition to the outcome of random choices. Besides disintegration and its special cases, other operations on distributions have also received exact, symbolic automation, in particular simplifying the representation of a distribution while preserving its meaning [Gehr et al. 2016; Walia et al. 2019]. This paper presents automatic program transformations that perform disintegration.…”
Section: Introduction (mentioning)
Confidence: 99%
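
To see what applying deterministic operations to random choices demands of an implementation (an illustrative identity of ours, not taken from the cited works): conditioning on y = sqrt(x) for x ~ Exponential(1) requires the density of y, obtained by the change-of-variables formula

    p_Y(y) = p_X(g^{-1}(y)) \left| \frac{d}{dy} g^{-1}(y) \right|, \qquad g(x) = \sqrt{x}

so with x = y^2 and dx/dy = 2y this gives p_Y(y) = 2y e^{-y^2} for y > 0. Symbolic disintegrators automate exactly this kind of Jacobian bookkeeping.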
“…The models for LDA [Blei et al. 2003; Griffiths and Steyvers 2004] and DMM [Holmes et al. 2012] are popular for existing data science problems. The models for GMM [Daniel Huang 2017; Walia et al. 2019], LDA [Daniel Huang 2017; Walia et al. 2019], and DMM [Walia et al. 2019] have been used as benchmarks for probabilistic inference systems. Gibbs sampling [Geman and Geman 1984], Metropolis-Hastings [Hastings 1970; Metropolis et al. 1953], and Likelihood Weighting [Fung and Chang 1989] are all widely used inference algorithms in the literature.…”
Section: Evaluation Metrics (mentioning)
Confidence: 99%
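
To make the first of those algorithms concrete, below is a minimal Gibbs sampler for a one-dimensional two-component GMM. It is a sketch under simplifying assumptions (equal mixture weights, a known shared component variance, a Normal prior on the means) and is not the implementation benchmarked in any of the cited papers.

    import numpy as np

    def gibbs_gmm(data, n_components=2, n_iters=500,
                  sigma=1.0, prior_mu=0.0, prior_sigma=10.0, seed=0):
        # Gibbs sampling alternates two conditional updates:
        #   (1) resample each point's component assignment given the means;
        #   (2) resample each mean given its assigned points, using the
        #       Normal-Normal conjugate posterior.
        rng = np.random.default_rng(seed)
        mu = rng.normal(prior_mu, prior_sigma, n_components)
        z = rng.integers(n_components, size=len(data))
        for _ in range(n_iters):
            # (1) p(z_i = k) is proportional to N(x_i | mu_k, sigma^2).
            logp = -0.5 * ((data[:, None] - mu[None, :]) / sigma) ** 2
            p = np.exp(logp - logp.max(axis=1, keepdims=True))
            p /= p.sum(axis=1, keepdims=True)
            z = np.array([rng.choice(n_components, p=row) for row in p])
            # (2) Conjugate update: Normal prior times Normal likelihood is Normal.
            for k in range(n_components):
                xs = data[z == k]
                prec = 1.0 / prior_sigma**2 + len(xs) / sigma**2
                mean = (prior_mu / prior_sigma**2 + xs.sum() / sigma**2) / prec
                mu[k] = rng.normal(mean, prec ** -0.5)
        return mu, z

    # Two well-separated clusters; the sampler should recover means near -3 and 3.
    data = np.concatenate([np.random.normal(-3, 1, 50), np.random.normal(3, 1, 50)])
    print(np.sort(gibbs_gmm(data)[0]))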
“…The primary model is shown below. In the parameter declaration for error_probs, we use the syntax error_probs[_] ∼ beta(10, 50) to introduce a collection of parameters; the declared variable becomes a dictionary, and each time it is used with a new index, a new parameter is instantiated. We use this to learn a different error_prob parameter for each tracking website.…”
Section: A2 Flights (mentioning)
Confidence: 99%
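
A minimal Python sketch of that lazily instantiated dictionary semantics (our illustration, assuming a Beta draw stands in for parameter initialization; the cited paper's language may implement it differently):

    import collections
    import random

    def lazy_params(a, b):
        # Dictionary that draws a fresh Beta(a, b) value the first time each
        # key is used, mimicking the error_probs[_] ~ beta(10, 50) declaration:
        # one parameter per tracking website, created on demand.
        return collections.defaultdict(lambda: random.betavariate(a, b))

    error_probs = lazy_params(10, 50)
    p1 = error_probs["siteA"]  # first use of "siteA" instantiates a parameter
    p2 = error_probs["siteA"]  # later uses return the same parameter
    assert p1 == p2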
“…There is no fundamental reason why this must be the case: for particular models and in particular data regimes, it is often possible to develop efficient algorithms that yield accurate results quickly in practice (even if existing theory cannot accurately characterize the regimes in which they work well). But little tooling exists for deriving these fast algorithms, or for implementing them using efficient data structures and computation strategies (though see [3,18,28,50] for some work in this direction). Compare this to the state-of-the-art in deep learning, in which specialized hardware, software libraries, and compilers help to ensure that compute-intensive training algorithms can be run in a reasonable amount of time, with little or no performance engineering by the user.…”
Section: Introduction (mentioning)
Confidence: 99%