Proceedings of the 23rd ACM International Conference on Conference on Information and Knowledge Management 2014
DOI: 10.1145/2661829.2662022

Structure Learning via Parameter Learning

Abstract: A key challenge in information and knowledge management is to automatically discover the underlying structures and patterns from large collections of extracted information. This paper presents a novel structure-learning method for a new, scalable probabilistic logic called ProPPR. Our approach builds on the recent success of meta-interpretive learning methods in Inductive Logic Programming (ILP), and we further extend it to a framework that enables robust and efficient structure learning of logic programs on …

Cited by 32 publications (20 citation statements)
References 28 publications
“…Metarules (Cropper and Tourret 2020) are another popular syntactic bias used by many ILP approaches (De Raedt and Bruynooghe 1992; Wang et al 2014; Albarghouthi et al 2017; Kaminski et al 2018), including Metagol (Muggleton et al 2015; Cropper and Muggleton 2016) and, to an extent, ∂ILP (Evans and Grefenstette 2018). A metarule is a higher-order clause which defines the exact form of clauses in the hypothesis space.…”
Section: Language Bias
confidence: 99%
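The statement above describes a metarule as a higher-order clause that fixes the exact form of clauses in the hypothesis space. A minimal sketch of this idea, using the well-known chain metarule P(X,Y) :- Q(X,Z), R(Z,Y); the predicate names (grandparent, parent, married) are illustrative assumptions, not taken from the cited papers:

```python
from itertools import product

def chain_metarule(p, q, r):
    """Instantiate the chain metarule P(X,Y) :- Q(X,Z), R(Z,Y)
    with concrete predicate symbols."""
    return f"{p}(X,Y) :- {q}(X,Z), {r}(Z,Y)"

# The metarule induces a hypothesis space over a predicate vocabulary:
# every candidate clause has exactly this second-order shape.
vocabulary = ["parent", "married"]
hypotheses = [chain_metarule("grandparent", q, r)
              for q, r in product(vocabulary, repeat=2)]

for clause in hypotheses:
    print(clause)
```

With two background predicates, the chain metarule yields four candidate clauses, one of which (`grandparent(X,Y) :- parent(X,Z), parent(Z,Y)`) is the intended definition; this is the sense in which a metarule constrains, but does not by itself determine, the learned program.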
“…Recently, researchers (Rocktäschel & Riedel, 2017) have proposed a "differentiable theorem prover", in which a proof for an example is unrolled into a network. Their system includes representation-learning as a component, as well as a template-instantiation approach like that of (Wang, Mazaitis, & Cohen, 2014), allowing structure learning. However, unlike the case for TensorLog, the proof procedure can produce very large proof trees, leading to proofs (and neural networks) that are of size exponential in the KB.…”
Section: Hybrid Logical/Neural Systems
confidence: 99%
“…A second was learning the most common relation, "affects", from UMLS, using a rule set learned by the algorithm of (Wang et al, 2014). Also, motivated by comparisons between ProPPR and embedding-based approaches to knowledge-base completion (Wang & Cohen, 2016), we compared to ProPPR on two relation-prediction tasks involving WordNet, using rules from the non-recursive theories used in (Wang & Cohen, 2016).…”
Section: Small Relational Learning Tasks
confidence: 99%
“…Early and more recent work on learning structures [25] and the use of background knowledge included advances in Inductive Logic Programming (ILP) and hierarchical learning methods. In ILP, background knowledge is defined as a set of relations (predicates) that can be used in the definition of the target concept [26].…”
Section: Supervised Learning
confidence: 99%
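The last statement defines background knowledge in ILP as a set of relations usable in the definition of the target concept. A minimal sketch of that setup, with illustrative facts and predicate names of my own choosing (parent/2 as background, grandparent/2 as the target):

```python
# Background knowledge: a set of ground facts for the relation parent/2.
# These facts and names are hypothetical, chosen only to illustrate
# how a target concept is defined in terms of background predicates.
background = {
    ("parent", "ann", "bob"),
    ("parent", "bob", "carl"),
}

def grandparent(x, y, bk):
    """Target concept defined over the background relation:
    grandparent(X,Y) holds if parent(X,Z) and parent(Z,Y) for some Z."""
    return any(("parent", x, z) in bk and ("parent", z, y) in bk
               for (_, _, z) in bk)

print(grandparent("ann", "carl", background))  # True under these facts
```

The point of the ILP setting is that only the facts for parent/2 are given; the clause body for grandparent/2 (here written by hand) is what the learner must discover, typically under a language bias such as the metarules discussed earlier in this report.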