2019
DOI: 10.48550/arxiv.1904.12211
Preprint
Soft Marginal TransE for Scholarly Knowledge Graph Completion

Abstract: Knowledge graphs (KGs), i.e. representations of information as a semantic graph, provide a significant test bed for many tasks including question answering, recommendation, and link prediction. Various amounts of scholarly metadata have been made available as knowledge graphs by a diversity of data providers and agents. However, these large quantities of data remain far from complete while growing at a rapid pace. Most of the attempts in completing such KGs are following tradi…

Cited by 4 publications (9 citation statements)
References 12 publications
“…We generate 100 minibatches in each iteration. The hyperparameter corresponding to the score function is embedding dimension d. We add slack variables to the losses 4 and 6 to have soft margin as in (Nayyeri et al, 2019). The loss 6 is rewritten as follows (Nayyeri et al, 2019):…”
Section: Methods
confidence: 99%
“…(4) Condition (c) considers a triple to be positive if its residual vector lies inside a hyper-sphere with radius γ_1. The optimization problem that satisfies condition (c) is as follows (Nayyeri et al, 2019):…”
Section: Reinvestigation Of the Limitations Of Translation-based Embe...
confidence: 99%
“…It is represented as the limited-based scoring loss illustrated in Figure 1. In this way, the scores of positive samples are forced to stay below the upper bound, which significantly improves the performance of translation-based KGE models [2,21,29]. Zhou et al [29] revise the MRL by adding a term ([f_r(h, t) − γ_1]_+) to limit the maximum value of the positive score:…”
Section: Limited-based Scoring Loss
confidence: 99%
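The limited-based scoring loss described above can be sketched in NumPy. This is a hedged illustration, not the authors' implementation: the function name, the score arrays, and the weighting factor `lam` on Zhou et al.'s upper-bound term `[f_r(h, t) − γ_1]_+` are assumptions; only the two-term structure (margin ranking plus positive-score cap) comes from the quoted statement.

```python
import numpy as np

def limited_margin_loss(pos_scores, neg_scores, gamma=1.0, gamma1=0.5, lam=1.0):
    """Sketch of a limited-based scoring loss for translation-based KGE.

    pos_scores / neg_scores: f_r(h, t) for positive and negative triples
    (lower is better in TransE-style models).

    Term 1: standard margin ranking loss [gamma + f(pos) - f(neg)]_+.
    Term 2: Zhou et al.'s cap lam * [f(pos) - gamma_1]_+, which forces
    positive scores to stay below the upper bound gamma_1.
    """
    ranking = np.maximum(0.0, gamma + pos_scores - neg_scores)
    bound = lam * np.maximum(0.0, pos_scores - gamma1)
    return (ranking + bound).mean()
```

With a well-separated pair (positive score 0.2, negative score 2.0) both terms vanish; when the positive score drifts above `gamma1`, the cap term adds a penalty even if the ranking margin is satisfied.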
“…A modified version of the two previous loss functions is introduced in our previous work [21]. This approach fixes the upper-bound of positive samples (γ_1) and uses a sliding mechanism to move false negative samples towards positive samples, shown as Soft Margin in Figure 1.…”
Section: Soft Margin
confidence: 99%
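The soft-margin mechanism quoted above can likewise be sketched. Under stated assumptions: the fixed positive upper bound is `gamma1`, negatives should score above `gamma2`, and each negative carries a learnable slack variable `xi` that lets suspected false negatives slide toward the positive region at a quadratic cost. The function name, the `lam*` weights, and the exact penalty form are illustrative guesses, not the paper's definitive formulation.

```python
import numpy as np

def soft_margin_loss(pos_scores, neg_scores, slack,
                     gamma1=0.5, gamma2=1.5, lam0=1.0, lam1=1.0, lam2=1.0):
    """Sketch of a soft-margin KGE loss with per-negative slack variables.

    Positives are pushed below the fixed bound gamma_1; each negative
    must exceed gamma_2 - xi, so a false negative can lower its own
    margin via its slack xi, paying a quadratic penalty lam0 * xi^2.
    """
    upper = lam1 * np.maximum(0.0, pos_scores - gamma1)          # cap positives
    lower = lam2 * np.maximum(0.0, gamma2 - slack - neg_scores)  # softened negative margin
    penalty = lam0 * slack ** 2                                  # cost of using slack
    return upper.mean() + (lower + penalty).mean()
```

In training, the slack vector would be optimized jointly with the embeddings; true negatives keep `xi` near zero, while false negatives absorb their margin violation into `xi`.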
“…Because the network security knowledge graph is sparse, representation learning in the network security field also faces insufficient structural information [15,16]. It has been shown that there are three main issues in current representation learning.…”
Section: Introduction
confidence: 99%