Proceedings of the 2nd Workshop on Representation Learning for NLP 2017
DOI: 10.18653/v1/w17-2609

Knowledge Base Completion: Baselines Strike Back

Abstract: Many papers have been published on the knowledge base completion task in the past few years. Most of these introduce novel architectures for relation learning that are evaluated on standard datasets such as FB15k and WN18. This paper shows that the accuracy of almost all models published on FB15k can be outperformed by an appropriately tuned baseline, our reimplementation of the DistMult model. Our findings cast doubt on the claim that the performance improvements of recent models are due to architectural …

Cited by 125 publications (132 citation statements)
References 28 publications (18 reference statements)
“…Any KBE model can be used for learning M. For evaluation, we adopt DistMult (Yang et al., 2014) for its state-of-the-art performance over many other KBE models (Kadlec et al., 2017). The scoring function of DistMult is defined as follows:…”
Section: Inference Model
confidence: 99%
“…Knowledge Graph Embeddings: To obtain the embedding from this graph, we use the state-of-the-art DistMult model (Bishan Yang and Deng, 2015). The choice is inspired by Kadlec et al. (2017), which reports that an appropriately tuned DistMult model can produce similar or better performance when compared with competing knowledge graph embedding models. The DistMult model embeds entities (nodes) and relations (edges) as vectors.…”
Section: Proposed Approach
confidence: 99%
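The citation statements above refer to the DistMult scoring function without reproducing it. As a point of reference, DistMult scores a triple (head, relation, tail) with a trilinear dot product over the three embedding vectors. The sketch below is an illustrative NumPy implementation, not code from the paper; the embedding values are toy data:

```python
import numpy as np

def distmult_score(head, relation, tail):
    """DistMult scoring function: f(h, r, t) = sum_i h_i * r_i * t_i.

    Each argument is an embedding vector of the same dimension;
    a higher score means the triple is judged more plausible.
    """
    return float(np.sum(head * relation * tail))

# Toy 4-dimensional embeddings (illustrative values only).
rng = np.random.default_rng(0)
h = rng.normal(size=4)  # head-entity embedding
r = rng.normal(size=4)  # relation embedding
t = rng.normal(size=4)  # tail-entity embedding

score = distmult_score(h, r, t)
```

Note that because the elementwise product is commutative, DistMult scores are symmetric in head and tail, which is one of the model's known limitations for asymmetric relations.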
“…It has also been shown that the presence of relations between candidate pairs can be an extremely strong signal in some cases [31]. Moreover, recent work has shown that hyperparameter tuning has been overlooked and that a simple method, such as DistMult, can achieve state-of-the-art performance when well tuned [10].…”
Section: Related Work
confidence: 99%