2021
DOI: 10.3390/info12040147
On Training Knowledge Graph Embedding Models

Abstract: Knowledge graph embedding (KGE) models have become popular means for making discoveries in knowledge graphs (e.g., RDF graphs) in an efficient and scalable manner. The key to success of these models is their ability to learn low-rank vector representations for knowledge graph entities and relations. Despite the rapid development of KGE models, state-of-the-art approaches have mostly focused on new ways to represent embeddings interaction functions (i.e., scoring functions). In this paper, we argue that the cho…

Cited by 12 publications (20 citation statements)
References 26 publications
“…The square error loss, binary cross entropy loss (BCEL), pointwise hinge loss, and logistic loss. a) Square Error Loss: The square error loss function computes the squared difference between the predicted scores and the labels l_i ∈ {0, 1} [7]:…”
Section: Loss Functions (mentioning)
Confidence: 99%
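The excerpt ends just before the formula it cites, so for concreteness, here is a minimal NumPy sketch of a pointwise square error loss consistent with that description; the mean reduction and the function name are assumptions, not taken from the cited paper:

```python
import numpy as np

def square_error_loss(scores, labels):
    """Pointwise square error loss: mean squared difference between
    predicted scores f(x_i) and labels l_i in {0, 1}."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.mean((scores - labels) ** 2)

# Two positive triples and one negative triple:
print(square_error_loss([0.9, 0.7, 0.2], [1, 1, 0]))  # ≈ 0.0467
```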
“…The loss penalizes scores of positive examples which are smaller than λ, but does not impose any restriction on values > λ. Similarly, negative scores larger than −λ contribute to the loss, whereas all values smaller than −λ do not have any loss contribution [7]. Thereby, the model is not encouraged to further optimize triples which are already predicted well enough (according to the margin parameter λ).…”
Section: Loss Functions (mentioning)
Confidence: 99%
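The thresholding behavior described in this excerpt follows directly from the max term of the pointwise hinge loss; a short sketch, assuming labels encoded as l_i ∈ {−1, +1} and with `margin` standing in for λ:

```python
import numpy as np

def pointwise_hinge_loss(scores, labels, margin=1.0):
    """Pointwise hinge loss: sum of max(0, margin - l_i * f(x_i)),
    labels l_i in {-1, +1}. Positive triples scoring above the margin
    and negative triples scoring below -margin contribute zero loss."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.sum(np.maximum(0.0, margin - labels * scores))

# The first (positive) triple already satisfies the margin:
print(pointwise_hinge_loss([1.3, 0.4, -0.2], [1, 1, -1]))  # 0.0 + 0.6 + 0.8 = 1.4
```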
“…Taken from the link prediction literature, we use "pairwise logistic loss" [47]. Given a positive pair (A, B), we randomly sample a new company C to create a negative pair (A, C).…”
Section: Model Training (mentioning)
Confidence: 99%
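A sketch of this training signal under the sampling scheme the excerpt describes: each positive pair is scored against its sampled negative, and the pairwise logistic loss decreases as the positive outscores the negative. The function name and sum reduction are illustrative assumptions, not taken from the citing work:

```python
import numpy as np

def pairwise_logistic_loss(pos_scores, neg_scores):
    """Pairwise logistic loss: log(1 + exp(f(neg) - f(pos))) per pair,
    so the loss shrinks as each positive pair outscores its negative."""
    pos = np.asarray(pos_scores, dtype=float)
    neg = np.asarray(neg_scores, dtype=float)
    # logaddexp(0, x) computes log(1 + exp(x)) in a numerically stable way.
    return np.sum(np.logaddexp(0.0, neg - pos))

# One well-separated pair and one mis-ordered pair:
print(pairwise_logistic_loss([2.0, 0.1], [0.0, 0.8]))  # ≈ 0.13 + 1.10 = 1.23
```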
“…Work on KGE models usually defines loss functions specific to the models. However, as shown in [48,53], the choice of loss function has a huge impact on model performance. In this work we use four loss functions.…”
Section: Appendix A: Knowledge Graph Embedding Models (mentioning)
Confidence: 99%
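Assuming the four losses are the ones named in the first excerpt above (square error, BCEL, pointwise hinge, and logistic), the two not yet sketched can be written in the same style; the label encodings follow the usual conventions, l_i ∈ {0, 1} for BCEL and l_i ∈ {−1, +1} for the logistic loss:

```python
import numpy as np

def binary_cross_entropy_loss(scores, labels):
    """BCEL on sigmoid-transformed scores, labels l_i in {0, 1}."""
    p = 1.0 / (1.0 + np.exp(-np.asarray(scores, dtype=float)))
    labels = np.asarray(labels, dtype=float)
    return -np.sum(labels * np.log(p) + (1.0 - labels) * np.log(1.0 - p))

def pointwise_logistic_loss(scores, labels):
    """Logistic loss: log(1 + exp(-l_i * f(x_i))), labels l_i in {-1, +1}."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=float)
    return np.sum(np.logaddexp(0.0, -labels * scores))
```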