2018
DOI: 10.1017/s1471068418000273
Constraint-Based Inference in Probabilistic Logic Programs

Abstract: Probabilistic Logic Programs (PLPs) generalize traditional logic programs and allow the encoding of models combining logical structure and uncertainty. In PLP, inference is performed by summarizing the possible worlds which entail the query in a suitable data structure, and using this data structure to compute the answer probability. Systems such as ProbLog, PITA, etc., use propositional data structures like explanation graphs, BDDs, SDDs, etc., to represent the possible worlds. While this approach saves infer…

Cited by 7 publications (15 citation statements)
References 18 publications (19 reference statements)
“…In Metropolis-Hastings sampling, a Markov chain is built by taking an initial sample and, starting from this sample, generating successor samples. Here we consider the algorithm developed in [9] and implemented in cplint [14]. Algorithm 3 goes as follows: 1) it samples random choices so that the evidence is true, to build an initial sample.…”
Section: Metropolis Hastings
confidence: 99%
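The scheme described above can be sketched in Python. This is a minimal illustration under an assumed toy model of two independent probabilistic facts (the names `heads1`/`heads2` and their probabilities are invented for the example, not taken from the cited paper): the initial sample is drawn by retrying until the evidence holds, and each step resamples one random choice from its prior, so the prior terms cancel in the Metropolis-Hastings acceptance ratio and a move is accepted exactly when the evidence still holds.

```python
import random

# Hypothetical toy model: two independent probabilistic facts.
PROBS = {"heads1": 0.6, "heads2": 0.3}

def evidence(world):
    return world["heads1"] or world["heads2"]   # at least one heads

def query(world):
    return world["heads1"] and world["heads2"]  # both heads

def initial_sample(rng):
    # Step 1 of the algorithm: sample random choices until the evidence is true.
    while True:
        world = {f: rng.random() < p for f, p in PROBS.items()}
        if evidence(world):
            return world

def mh_probability(n_samples=20000, seed=0):
    """Estimate P(query | evidence) with a simple Metropolis-Hastings chain.

    The proposal resamples one random choice from its prior, so the
    acceptance test reduces to checking the evidence in the proposed world.
    """
    rng = random.Random(seed)
    world = initial_sample(rng)
    hits = 0
    for _ in range(n_samples):
        proposal = dict(world)
        fact = rng.choice(list(PROBS))
        proposal[fact] = rng.random() < PROBS[fact]
        if evidence(proposal):  # accept only if the evidence still holds
            world = proposal
        hits += query(world)
    return hits / n_samples
```

For this toy model the exact answer is P(both | at least one) = 0.18 / 0.72 = 0.25, which the chain's estimate approaches as the sample count grows.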
“…In rejection sampling [VN51], you first query the evidence and, if the query is successful, query the goal in the same sample; otherwise the sample is discarded. In Metropolis-Hastings MCMC [NR14], a Markov chain is built by taking an initial sample and then generating successor samples.…”
Section: Inference
confidence: 99%
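Rejection sampling as described here is short enough to sketch directly. The model below is the same assumed two-fact toy example (names and probabilities are illustrative, not from the cited work): each drawn world is discarded if the evidence fails, and otherwise the query is checked in that same sample.

```python
import random

# Hypothetical toy model: two independent probabilistic facts.
PROBS = {"heads1": 0.6, "heads2": 0.3}

def rejection_probability(n_samples=20000, seed=0):
    """Estimate P(query | evidence) by rejection sampling: draw a world,
    discard it if the evidence fails, otherwise check the query in the
    same sample."""
    rng = random.Random(seed)
    kept = hits = 0
    for _ in range(n_samples):
        world = {f: rng.random() < p for f, p in PROBS.items()}
        if not (world["heads1"] or world["heads2"]):  # evidence fails
            continue                                  # discard the sample
        kept += 1
        hits += world["heads1"] and world["heads2"]   # query, same sample
    return hits / kept if kept else float("nan")
```

The estimate converges to the same conditional probability as the MCMC approach (0.25 for this model), but the rejection rate grows as the evidence becomes less likely, which is the usual argument for preferring MCMC on rare evidence.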
“…You can ask conditional queries with rejection sampling or with Metropolis-Hastings MCMC, too. In the first case, the available predicate is: In the second case, mcintyre follows the algorithm proposed in [NR14] (the non-adaptive version). The initial sample is built with a backtracking meta-interpreter that starts with the goal and randomizes the order in which clauses are selected during the search, so that the initial sample is unbiased.…”
Section: Inference with cplint
confidence: 99%
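The randomized-clause-order idea behind the unbiased initial sample can be illustrated with a tiny backtracking solver. This is a sketch over a hypothetical propositional program (not cplint's actual Prolog meta-interpreter): shuffling the clause list at each call means the derivation found first is not fixed by the textual clause order, so over many runs every derivation of the goal is reached.

```python
import random
from collections import Counter

# Hypothetical propositional program: s :- a.   s :- b.   a.   b.
PROGRAM = {
    "s": [["a"], ["b"]],
    "a": [[]],
    "b": [[]],
}

def prove(goal, rng, used):
    """Backtracking proof search with shuffled clause order; records
    which clause body proves "s" first."""
    clauses = list(PROGRAM[goal])
    rng.shuffle(clauses)  # randomize the order clauses are selected
    for body in clauses:
        if all(prove(g, rng, used) for g in body):
            if goal == "s":
                used.append(tuple(body))
            return True
    return False

def clause_usage(runs=2000, seed=0):
    """Count which clause of "s" succeeds first across many runs."""
    rng = random.Random(seed)
    counts = Counter()
    for _ in range(runs):
        used = []
        prove("s", rng, used)
        counts[used[0]] += 1
    return counts
```

With a fixed clause order, the first clause would win every run; with shuffling, both clauses of `s` are selected first in roughly half the runs each, which is the sense in which the initial sample is unbiased.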