2014
DOI: 10.1017/s1471068413000689
Structure learning of probabilistic logic programs by searching the clause space

Abstract: Learning probabilistic logic programming languages is receiving increasing attention, and systems are available for learning the parameters (PRISM, LeProbLog, LFI-ProbLog and EMBLEM) or both the structure and the parameters (SEM-CP-logic and SLIPCASE) of these languages. In this paper we present the algorithm SLIPCOVER for "Structure LearnIng of Probabilistic logic programs by searChing OVER the clause space". It performs a beam search in the space of probabilistic clauses and a greedy search in the space of…

Cited by 53 publications (55 citation statements)
References 60 publications
“…Although there has been significant progress in the field of PILP [25, 49-51, 26] for learning annotated Prolog programs, PILP under the answer set semantics is still relatively young, and thus, there are few approaches. PrASP [52, 53, 27] considers the problem of weight learning, and in fact uses a similar example of learning about coins.…”
Section: Relation to Probabilistic ILP
confidence: 99%
“…The results are shown in Figure 3: neither FOIL nor Alchemy's MLN method [fn. 3] outperforms the simple baseline of predicting exactly the facts in the incomplete database [fn. 4]. (Fn. 2: Or in FOIL's case, an approach broadly similar to pseudolikelihood. Fn. 3: Alchemy's performance is quite sensitive to the precise set of missing facts, so we average over ten runs in the figure.)…”
Section: Structure Learning Is Difficult for KB Completion
confidence: 99%
“…(Fn. 3) Alchemy's performance is quite sensitive to the precise set of missing facts, so we average over ten runs in the figure. (Fn. 4) Note that we have also experimented with a more recent "Learning with Structural Motifs (LSM)" variant [15] for learning MLN, but the results were much worse than Alchemy: we only observe a MAP of 10.7 on the missing 5% setting. This is because LSM is designed to learn long…”
Section: Structure Learning Is Difficult for KB Completion
confidence: 99%
“…Each refinement is scored by learning the parameters with EMBLEM and using the LL of the examples returned by it. SLIPCOVER (Bellodi and Riguzzi 2014) differs from SLIPCASE because the beam search is performed in the space of clauses. In this way, a set of promising clauses is identified and these are added one by one to the empty theory, keeping each clause if the LL improves.…”
Section: Introduction
confidence: 99%
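The greedy theory-construction step described in the quote above (add candidate clauses one by one to an empty theory, keeping each only if the log-likelihood improves) can be sketched roughly as follows. This is an illustrative sketch only, not SLIPCOVER's actual implementation: the `score_ll` callable is a hypothetical stand-in for parameter learning plus scoring (which SLIPCOVER delegates to EMBLEM), and the toy scoring function at the bottom is invented for demonstration.

```python
def greedy_theory_search(candidate_clauses, score_ll):
    """Greedily assemble a theory from a ranked list of candidate clauses.

    candidate_clauses: clauses, e.g. the promising ones found by a beam
        search over the clause space, in the order they should be tried.
    score_ll: callable mapping a theory (list of clauses) to the
        log-likelihood of the examples under that theory.
    """
    theory = []                      # start from the empty theory
    best_ll = score_ll(theory)       # LL of the examples under the empty theory
    for clause in candidate_clauses:
        trial_ll = score_ll(theory + [clause])
        if trial_ll > best_ll:       # keep the clause only if the LL improves
            theory.append(clause)
            best_ll = trial_ll
    return theory, best_ll

# Toy usage with a made-up scoring function: the "LL" is the negative
# distance of the theory's summed weights from a target value, so clauses
# are kept while they move the sum closer to the target.
if __name__ == "__main__":
    candidates = [("c1", 0.4), ("c2", 0.7), ("c3", 0.2)]
    score = lambda th: -abs(sum(w for _, w in th) - 0.9)
    theory, ll = greedy_theory_search(candidates, score)
    print([name for name, _ in theory])  # c3 is rejected: it worsens the score
```

The key design point the quote highlights is that clause search and theory construction are decoupled: the expensive search happens once per clause, and the greedy loop only re-scores the growing theory.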