2023
DOI: 10.1007/978-3-031-30044-8_20

Automatic Alignment in Higher-Order Probabilistic Programming Languages

Abstract: Probabilistic Programming Languages (PPLs) allow users to encode statistical inference problems and automatically apply an inference algorithm to solve them. Popular inference algorithms for PPLs, such as sequential Monte Carlo (SMC) and Markov chain Monte Carlo (MCMC), are built around checkpoints: relevant events for the inference algorithm during the execution of a probabilistic program. Deciding the location of checkpoints is, in current PPLs, not done optimally. To solve this problem, we present a static a…
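The checkpoints mentioned in the abstract can be made concrete with a small hand-rolled sketch. The Python below is purely illustrative and is not the paper's CorePPL or Miking syntax; observe, model, and trace are invented names standing in for the kind of constructs an inference backend reacts to.

    import math
    import random

    def observe(y, mean, std, trace):
        # Likelihood checkpoint: record a Gaussian log-density factor for this execution.
        trace.append(-0.5 * ((y - mean) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi)))

    def model(data, trace):
        # Sampling checkpoint: the latent mean is drawn at run time.
        mu = random.gauss(0.0, 1.0)
        for y in data:
            # Each observation is a likelihood checkpoint; whether every execution
            # of the program reaches the same checkpoints is the alignment question
            # the paper's static analysis addresses.
            observe(y, mu, 1.0, trace)
        return mu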

Cited by 3 publications (9 citation statements). References 36 publications (79 reference statements).

Citation statements:
“…In the left side of Fig. CorePPL then adds code for the selected inference strategy: one of importance sampling, bootstrap particle filter, alive particle filter [30], aligned lightweight MCMC [29,43], trace MCMC, naive MCMC, or particle MCMC-particle independent Metropolis-Hastings (PMCMC-PIMH, [44]). The code obtained in this way is converted either into C++ (upcoming feature) or pure Miking code, which is subsequently processed by nvcc or mi to obtain respectively a CUDA-enabled executable (upcoming feature) or a regular executable.…”
Section: Workflow and Architecture
confidence: 99%
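As a point of reference for one of the inference strategies listed above, the following is a minimal bootstrap particle filter in plain Python. It is only a sketch of the general algorithm, not the code CorePPL generates; the model (Gaussian random-walk state with Gaussian observations) and all names such as bootstrap_pf are illustrative assumptions.

    import math
    import random

    def bootstrap_pf(data, num_particles=100):
        # Each particle is a latent state; start from the prior N(0, 1).
        particles = [random.gauss(0.0, 1.0) for _ in range(num_particles)]
        log_z = 0.0  # running log-evidence estimate
        for y in data:
            # Propagate every particle through a random-walk transition.
            particles = [x + random.gauss(0.0, 0.1) for x in particles]
            # Likelihood checkpoint: weight each particle under N(x, 1).
            log_w = [-0.5 * (y - x) ** 2 - 0.5 * math.log(2 * math.pi) for x in particles]
            max_w = max(log_w)
            w = [math.exp(lw - max_w) for lw in log_w]
            log_z += max_w + math.log(sum(w) / num_particles)
            # Resample: all particles are suspended and redrawn at this checkpoint.
            particles = random.choices(particles, weights=w, k=num_particles)
        return particles, log_z

    # Example: filter ten noisy observations of a slowly drifting state.
    states, log_evidence = bootstrap_pf([0.1 * t + random.gauss(0.0, 1.0) for t in range(10)])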
“…Factorizing the likelihood. Since observe-type statements indicate a potential likelihood update (a checkpoint; see [29] for an in-depth discussion), another way of thinking about TreePPL (and PPLs in general) is that it provides a way of specifying the likelihood function L(θ; y) programmatically, instead of using a closed-form expression. In fact, we do not have to explicitly construct the data distribution and then observe from it; we can instead directly factor in the likelihood via weight or logWeight, as we have shown on line 4 (ensuring that the side-branch dies) and line 19 (correcting for the rotation factor of the tree).…”
Section: Optimizations and Inference
confidence: 99%
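The equivalence this statement relies on, that observing a value from a distribution is the same as factoring its density into the likelihood, can be sketched in plain Python. This is an illustrative stand-in, not TreePPL's actual weight/logWeight API; Execution, log_weight, and normal_logpdf are invented names, and a Gaussian observation model is assumed.

    import math

    def normal_logpdf(y, mean, std):
        # Log-density of N(mean, std) at y.
        return -0.5 * ((y - mean) / std) ** 2 - math.log(std * math.sqrt(2.0 * math.pi))

    class Execution:
        """Accumulates the log-likelihood of one program run."""
        def __init__(self):
            self.log_likelihood = 0.0

        def log_weight(self, lw):
            # Factor a term directly into the likelihood L(theta; y).
            self.log_likelihood += lw

        def observe(self, y, mean, std):
            # Observing y from N(mean, std) is the same as weighting the run
            # by the density of y under that distribution.
            self.log_weight(normal_logpdf(y, mean, std))

    # The two runs below accumulate identical log-likelihoods.
    a, b = Execution(), Execution()
    a.observe(1.3, mean=1.0, std=0.5)
    b.log_weight(normal_logpdf(1.3, mean=1.0, std=0.5))

Either style marks a likelihood-update checkpoint for the inference backend, which is why the cited alignment analysis treats them uniformly.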