Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence 2018
DOI: 10.24963/ijcai.2018/297

Scalable Rule Learning via Learning Representation

Abstract: We study the problem of learning first-order rules from large Knowledge Graphs (KGs). With recent advances in information extraction, vast data repositories in the KG format, such as Freebase and YAGO, have been obtained. However, traditional techniques for rule learning are not scalable to KGs. This paper presents a new approach, RLvLR, to learning rules from KGs by using the technique of embedding in representation learning together with a new sampling method. Experimental results show that our system outper…
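The sampling method mentioned in the abstract restricts rule learning to a fragment of the KG relevant to the target predicate. The Python sketch below is a minimal, assumed approximation of that idea (the function name, triple format, and hop-based criterion are illustrative, not the paper's exact algorithm): it seeds on entities occurring in the target relation and collects facts within a fixed number of hops.

    def sample_subgraph(triples, target_rel, depth=2):
        """Collect facts within `depth` hops of entities that appear in the
        target relation (illustrative approximation of RLvLR-style sampling)."""
        # Seed with entities occurring in facts of the target relation.
        entities = {e for (s, r, o) in triples if r == target_rel for e in (s, o)}
        sampled = set()
        for _ in range(depth):
            new_entities = set()
            for (s, r, o) in triples:
                if s in entities or o in entities:
                    sampled.add((s, r, o))
                    new_entities.update((s, o))
            entities |= new_entities
        return sampled

Rule learning then runs on the much smaller sampled set instead of the full KG, which is what makes the approach feasible for graphs the size of Freebase and YAGO.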

Cited by 30 publications (12 citation statements). References 5 publications.
“…Rule-based reasoning can be divided into two categories: neural-based models and rule mining models. Neural-based models (Rocktäschel and Riedel, 2017; Sadeghian et al., 2019; Minervini et al., 2020) produce the corresponding rules while performing triple completion, whereas rule mining models (Galárraga et al., 2015; Omran et al., 2018; Ho et al., 2018; Meilicke et al., 2019) first mine the rules and then use them for completion.…”
Section: Rule-based Reasoning
confidence: 99%
“…The quality of rule learning is evaluated with the number of high-quality rules (HQr) and their percentage. Rule quality is measured by head coverage (HC), which is commonly used in previous work such as [10] and [29].5 Head coverage for a rule rul is defined as follows: Table 6.…”
5 We run the AMIE+ code from https://www.mpi-inf.mpg.de/departments/databases-and-information-systems/research/yago-naga/amie/
Section: Rule Evaluation
confidence: 99%
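The snippet cuts off before the head coverage formula. For reference, the standard AMIE-style definition (an assumption here; [10] and RLvLR both use a definition of this form) is, in LaTeX notation:

    % Head coverage of a rule with body B and head r(x, y).
    \[
    \mathrm{hc}\bigl(B \Rightarrow r(x,y)\bigr)
      \;=\;
      \frac{\operatorname{supp}\bigl(B \Rightarrow r(x,y)\bigr)}
           {\#\{(x,y) : r(x,y) \in \mathcal{G}\}}
    \]

where supp counts the pairs (x, y) with r(x, y) in the KG that also satisfy the body B, and the denominator is the total number of facts whose head relation is r.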
“…Embeddings are also used to help rule learning, and they serve different purposes. Some approaches utilize embeddings to guide and prune the search for candidate rules [29], while others use embeddings to complete the knowledge graph during rule learning [16].…”
Section: Rule Learning
confidence: 99%
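As a rough illustration of embedding-guided pruning in the style of [29], the sketch below scores candidate body relations by cosine similarity to the head relation's embedding and keeps only the top k. All names (prune_candidates, rel_emb, top_k) are hypothetical; RLvLR's actual scoring also uses argument embeddings and co-occurrence statistics, so this is only a minimal approximation.

    import numpy as np

    def prune_candidates(head_rel, candidate_rels, rel_emb, top_k=50):
        """Keep the candidate body relations whose embedding is most
        similar to the head relation's embedding (hypothetical helper).

        rel_emb: dict mapping relation name -> 1-D numpy vector."""
        head_vec = rel_emb[head_rel]
        head_norm = np.linalg.norm(head_vec)

        def cosine(rel):
            vec = rel_emb[rel]
            return float(vec @ head_vec) / (np.linalg.norm(vec) * head_norm + 1e-12)

        # Rank by similarity; relations outside the top_k are never expanded
        # during rule search, which keeps the search space tractable.
        return sorted(candidate_rels, key=cosine, reverse=True)[:top_k]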
“…Consistency checking depends on domain knowledge of the specific task for defining constraints and rules, while the mined constraints and rules are often weak at modeling local context for disambiguation. Semantic embedding methods are good at modeling contextual semantics in a vector space, but they are computationally expensive when learning from large KBs [30] and suffer from low robustness when dealing with real-world KBs, which are often noisy and sparse [35].…”
Section: Assertion Validation
confidence: 99%