2015
DOI: 10.1007/s00778-015-0394-1
Fast rule mining in ontological knowledge bases with AMIE+

Abstract: Recent advances in information extraction have led to huge knowledge bases (KBs), which capture knowledge in a machine-readable format. Inductive Logic Programming (ILP) can be used to mine logical rules from these KBs, such as "If two persons are married, then they (usually) live in the same city". While ILP is a mature field, mining logical rules from KBs is difficult, because KBs make an open world assumption. This means that absent information cannot be taken as counterexamples. Our approach AMIE [16] has …
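The example rule from the abstract ("if two persons are married, they usually live in the same city") can be illustrated as a Horn rule evaluated over a toy KB of subject–predicate–object triples. This is only a minimal sketch: the entity names, the `support` helper, and the tiny KB are all hypothetical, not AMIE's actual data structures or algorithm.

```python
# Toy KB of (subject, predicate, object) triples. All facts are hypothetical.
kb = {
    ("anna", "marriedTo", "ben"),
    ("ben", "marriedTo", "anna"),
    ("anna", "livesIn", "berlin"),
    ("ben", "livesIn", "berlin"),
    ("carl", "marriedTo", "dana"),
    ("carl", "livesIn", "paris"),
}

def support(kb):
    """Count instantiations (x, y, c) where both the body
    marriedTo(x, y) AND livesIn(y, c) and the head livesIn(x, c) hold."""
    n = 0
    for (x, r1, y) in kb:
        if r1 != "marriedTo":
            continue
        for (y2, r2, c) in kb:
            if r2 == "livesIn" and y2 == y and (x, "livesIn", c) in kb:
                n += 1
    return n

print(support(kb))  # -> 2 (the Anna->Ben and Ben->Anna instantiations)
```

Note that Carl's marriage to Dana contributes nothing here, since the KB records no city for Dana; under the open world assumption this absence is not evidence against the rule.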

Cited by 404 publications (413 citation statements)
References 32 publications (33 reference statements)
“…Using MDL, we can approximate the optimal number of rules in a parameter-free, information-theoretic way, leading to fewer but descriptive rules. Table 3: Compression: The small % bits needed (relative to an empty model) and number of rules found by various models demonstrate the effectiveness of KGist variants at finding a concise set of rules in G. AMIE+ [17] finds Horn rules, which cannot be encoded with our model, so we only report the number of rules it finds. Freq and Coverage are baseline models that we introduce by greedily selecting from our candidate set C (without MDL) the top-k rules that (1) correctly apply the most often and (2) cover the most edges, resp.…”
Section: Rule Conciseness and Interpretability
confidence: 99%
“…Here we quantitatively analyze the effectiveness of KGist at identifying a diverse set of anomalies, and demonstrate the interpretability of what it finds. Whereas most approaches focus on exceptional facts [51], erroneous links, erroneous node type information [35], or identification of incomplete information (e.g., link prediction) [17], KGist rules can be used to address multiple of these at once. To evaluate this, we inject anomalies of multiple types into a KG, and see how well KGist identifies them.…”
Section: [Q2] What Is Strange in a KG?
confidence: 99%
“…Our work is also related to AMIE+ [9], a system for mining Horn rules in knowledge bases, where in each rule the body is a path and the head is a relation. Our work is orthogonal to [9] since we do not focus on mining rules, but rather on ranking the most prominent equivalences for a given relation path. The application of our method for rule mining, considering more generic rules than Horn rules, is part of our future work.…”
Section: Related Work
confidence: 99%
“…Hence, anyone who is not known to be a child of Obama is not. The validity of the PCA has been evaluated manually [6] on YAGO. For relations with generally high functionality [17], the PCA holds nearly perfectly.…”
Section: Obtaining Completeness Information
confidence: 99%
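The partial completeness assumption (PCA) discussed in the excerpt above can be sketched in a few lines: a predicted fact only counts against a rule when the KB already knows at least one object for that subject and relation. This is a hedged illustration with hypothetical data and helper names, not AMIE's implementation.

```python
# Toy KB for the rule marriedTo(x, y) AND livesIn(y, c) => livesIn(x, c).
# All facts below are hypothetical.
kb = {
    ("anna", "marriedTo", "ben"),
    ("anna", "livesIn", "berlin"),
    ("ben", "livesIn", "berlin"),
    ("carl", "marriedTo", "dana"),
    ("dana", "livesIn", "paris"),
    # Carl has no known livesIn fact -> under the PCA, the prediction
    # livesIn(carl, paris) is neither confirmed nor counted as wrong.
}

def pca_confidence(kb):
    """support / body instantiations whose subject has a known livesIn fact."""
    support, pca_body = 0, 0
    for (x, r1, y) in kb:
        if r1 != "marriedTo":
            continue
        for (y2, r2, c) in kb:
            if r2 == "livesIn" and y2 == y:
                # PCA denominator: only count x if some livesIn(x, _) is known
                if any(s == x and r == "livesIn" for (s, r, _) in kb):
                    pca_body += 1
                    if (x, "livesIn", c) in kb:
                        support += 1
    return support / pca_body if pca_body else 0.0

print(pca_confidence(kb))  # -> 1.0
```

Under the closed world assumption the Carl/Dana pair would be a counterexample and the confidence would drop to 1/2; the PCA excludes it because nothing is known about where Carl lives, which is exactly why the assumption works well for high-functionality relations.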