Proceedings of the Eighteenth International Conference on Principles of Knowledge Representation and Reasoning 2021
DOI: 10.24963/kr.2021/45
Approximate Inference for Neural Probabilistic Logic Programming

Abstract: DeepProbLog is a neural-symbolic framework that integrates probabilistic logic programming and neural networks. It is realized by providing an interface between the probabilistic logic and the neural networks. Inference in probabilistic neural symbolic methods is hard, since it combines logical theorem proving with probabilistic inference and neural network evaluation. In this work, we make the inference more efficient by extending an approximate inference algorithm from the field of statistical-relational…
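
To make the abstract's setting concrete, here is a minimal, purely illustrative Python sketch (not the DeepProbLog API and not the paper's algorithm) of the canonical MNIST-addition query used in the DeepProbLog literature: neural networks supply distributions over digit labels, the logic program supplies the proofs of addition(img1, img2, S), and probabilistic inference sums the weights of those proofs. The k-best truncation at the end stands in generically for "approximate inference"; the paper's actual method is more involved. All function names and the random stand-in classifier are assumptions made for this sketch.

```python
# Conceptual sketch (not the DeepProbLog API): probability of the query
# addition(img1, img2, S) in the canonical MNIST-addition example.
# Neural nets provide distributions over digit labels; the logic program
# supplies the proofs (digit pairs whose sum is S); probabilistic inference
# sums the proof weights.  Names below are illustrative only.
import numpy as np

def digit_distribution(image, rng):
    """Stand-in for a neural classifier: returns a softmax over digits 0-9."""
    logits = rng.normal(size=10)          # a real system would run a CNN here
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

def query_addition_exact(p1, p2, s):
    """Exact inference: sum over all proofs d1 + d2 = s."""
    return sum(p1[d1] * p2[s - d1]
               for d1 in range(10) if 0 <= s - d1 <= 9)

def query_addition_kbest(p1, p2, s, k=3):
    """Generic approximation: keep only the k most probable proofs.
    (The paper's own approximate inference algorithm is more involved;
    this merely illustrates the idea of truncating the proof set.)"""
    weights = sorted((p1[d1] * p2[s - d1]
                      for d1 in range(10) if 0 <= s - d1 <= 9),
                     reverse=True)
    return sum(weights[:k])

rng = np.random.default_rng(0)
p1, p2 = digit_distribution("img1", rng), digit_distribution("img2", rng)
print(query_addition_exact(p1, p2, 7), query_addition_kbest(p1, p2, 7))
```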

Cited by 4 publications (8 citation statements) | References 20 publications

“…Approximate knowledge compilation is the field of research that deals with tackling this issue. While it contains interesting recent work [Fierens et al., 2015; Huang et al., 2021; Manhaeve et al., 2021b], it was highlighted by Manhaeve et al. that the introduction of the neural paradigm does lead to further complications. As such, we opted for exact knowledge compilation, but it has to be noted that we will be able to benefit from any future advances in the field of approximate inference.…”
Section: H Limitations (mentioning)
confidence: 99%
“…We study three Neurosymbolic reasoning tasks to evaluate the performance and scalability of A-NeSI: Multi-digit MNISTAdd [139], Visual Sudoku Puzzle Classification [10] and Warcraft path planning. Code is available at https://github.com/HEmile/a-nesi.…”
Section: Methods (mentioning)
confidence: 99%
“…y_i and w_i are one-hot encoded digits, except for the first output digit y_1: it can only be 0 or 1. We used a shared set of hyperparameters for all N. Like [138, 139], we take the MNIST [122] dataset and use each digit exactly once to create data. We follow [138] and require more unique digits for increasing N. Therefore, the training dataset will be of size 60000/2N and the test dataset of size 10000/2N.…”
Section: Multi-digit MNISTAdd (mentioning)
confidence: 99%
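
The data-construction arithmetic in the statement above (each MNIST image used exactly once, 2N images per example, hence 60000/2N training and 10000/2N test examples; the leading sum digit y_1 is 0 or 1 because the sum of two N-digit numbers has at most N+1 digits) can be sketched as follows. This is a hedged reconstruction, not the cited work's code; the grouping and shuffling details are assumptions.

```python
# Hedged sketch of the data construction described in the quoted statement:
# each MNIST image is used exactly once, and 2N images form one example
# (two N-digit numbers plus their sum), so 60000 training images yield
# 60000 // (2N) examples.  The pairing and shuffling details are assumptions;
# the cited work's exact sampling code may differ.
import random

def make_mnistadd_examples(images, labels, n_digits, seed=0):
    """Group images into examples of two n_digits-digit numbers and their sum."""
    idx = list(range(len(images)))
    random.Random(seed).shuffle(idx)
    examples = []
    per_example = 2 * n_digits
    for start in range(0, len(idx) - per_example + 1, per_example):
        chunk = idx[start:start + per_example]
        first, second = chunk[:n_digits], chunk[n_digits:]
        num1 = int("".join(str(labels[i]) for i in first))
        num2 = int("".join(str(labels[i]) for i in second))
        examples.append(([images[i] for i in first],
                         [images[i] for i in second],
                         num1 + num2))
    return examples  # 60000 images with n_digits = N  ->  60000 // (2N) examples
```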