2019 IEEE 35th International Conference on Data Engineering (ICDE)
DOI: 10.1109/icde.2019.00061

NSCaching: Simple and Efficient Negative Sampling for Knowledge Graph Embedding

Abstract: Knowledge graph (KG) embedding is a fundamental problem in data mining research with many real-world applications. It aims to encode the entities and relations in the graph into a low-dimensional vector space, which can be used by subsequent algorithms. Negative sampling, which samples negative triplets from the non-observed ones in the training data, is an important step in KG embedding. Recently, the generative adversarial network (GAN) has been introduced into negative sampling. By sampling negative triplets with lar…
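
To make the setup concrete, here is a minimal sketch (not the paper's code) of the uniform negative sampling baseline the abstract refers to, using a TransE-style scoring function. The embedding sizes, margin, and helper names below are illustrative assumptions.

```python
# A minimal sketch of uniform negative sampling for KG embedding:
# for each observed triplet (h, r, t), corrupt the head or tail with a
# uniformly sampled entity. Sizes and names are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_entities, n_relations, dim = 1000, 20, 50
E = rng.normal(size=(n_entities, dim))   # entity embeddings
R = rng.normal(size=(n_relations, dim))  # relation embeddings

def transe_score(h, r, t):
    """Lower is more plausible: ||e_h + e_r - e_t||_1 (TransE-style)."""
    return np.abs(E[h] + R[r] - E[t]).sum()

def uniform_negative(h, r, t):
    """Replace the head or the tail with a uniformly random entity."""
    if rng.random() < 0.5:
        return rng.integers(n_entities), r, t
    return h, r, rng.integers(n_entities)

# Example: one positive triplet, its corrupted counterpart, and a margin loss.
pos = (3, 5, 42)
neg = uniform_negative(*pos)
margin_loss = max(0.0, 1.0 + transe_score(*pos) - transe_score(*neg))
```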

Citations: cited by 104 publications (99 citation statements).
References: 26 publications.
“…In the training process, recent works mostly use random sampling, which treats every sample equally. In fact, however, the importance of different samples is unequal [43], [45], [47]. In particular, samples with a larger error between the model's predictions and the ground truth play a more important role in performance evaluation.…”
Section: H. Training and Time Complexity
confidence: 99%
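
The quoted statement describes weighting training samples by their current error. A short hedged sketch of that idea, with an assumed per-sample error array, might look like this:

```python
# Draw training samples with probability proportional to their current error,
# so examples the model gets most wrong are revisited more often.
# The error values below are an assumed stand-in for |prediction - ground truth|.
import numpy as np

rng = np.random.default_rng(1)

errors = np.array([0.05, 0.90, 0.30, 0.60])   # per-sample error (assumed)
probs = errors / errors.sum()                  # importance-style sampling weights

batch_idx = rng.choice(len(errors), size=2, replace=False, p=probs)
# Samples with larger error (indices 1 and 3 here) are selected more often.
```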
“…Adversarial training between the generator and the discriminator optimizes the final knowledge representations. Reinforcement learning is required to train the GAN [18]. The framework can be applied to various KRL models, as it is independent of the specific form of the discriminator [16].…”
Section: GAN-based Sampling
confidence: 99%
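
As a rough illustration of the GAN-style sampling with reinforcement learning mentioned above (not the cited framework's actual code), the sketch below softmax-samples a negative from a candidate set and updates the generator with a REINFORCE step, using the discriminator's score as the reward. The candidate-set size, learning rate, and fixed discriminator scores are assumptions.

```python
# GAN-style negative sampling trained with REINFORCE (illustrative sketch):
# the generator softmax-samples a candidate negative, the embedding model
# ("discriminator") scores it, and that score serves as the policy reward.
import numpy as np

rng = np.random.default_rng(2)
n_candidates = 16

gen_logits = np.zeros(n_candidates)            # generator's unnormalised preferences
disc_scores = rng.normal(size=n_candidates)    # stand-in plausibility scores
                                               # (a real discriminator would be re-scored each step)

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for _ in range(100):
    probs = softmax(gen_logits)
    i = rng.choice(n_candidates, p=probs)      # sample one negative-triplet index
    reward = disc_scores[i]                    # discriminator's plausibility as reward
    baseline = probs @ disc_scores             # variance-reduction baseline
    grad = -probs                              # d log p(i) / d logits ...
    grad[i] += 1.0                             # ... equals one_hot(i) - probs
    gen_logits += 0.1 * (reward - baseline) * grad   # REINFORCE ascent step
```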
“…High-quality negative samples tend to receive high plausibility scores from scoring functions. Motivated by the skewed score distribution of negative samples, Zhang et al. [18] track only the helpful but rare negatives of high plausibility with a cache. NSCaching can be considered to belong to the same group as GAN-based methods, since they all parametrize the dynamic distribution of negative samples.…”
Section: NSCaching
confidence: 99%
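
A hedged sketch of the cache idea described in this statement (not the authors' released implementation): keep a small cache of high-plausibility negative tails per (h, r), draw negatives from it, and refresh it by scoring a few random candidates. The cache size, refresh size, and stand-in scoring function are assumptions.

```python
# Cache-based negative sampling sketch: per (h, r), maintain a cache of
# high-scoring negative tails, sample from it, and lazily refresh it by
# mixing in random candidates and keeping the top scorers.
import numpy as np

rng = np.random.default_rng(3)
n_entities, cache_size, n_fresh = 1000, 30, 20

def score(h, r, t):
    # Stand-in plausibility (larger = more plausible); a real model
    # would compute this from its current embeddings.
    return np.sin(h + 2 * r + 3 * t)

cache = {}  # (h, r) -> array of cached negative tail entities

def sample_negative(h, r):
    key = (h, r)
    cached = cache.get(key, rng.integers(n_entities, size=cache_size))
    # Refresh: add fresh random candidates and keep the highest-scoring ones.
    candidates = np.concatenate([cached, rng.integers(n_entities, size=n_fresh)])
    scores = np.array([score(h, r, t) for t in candidates])
    cache[key] = candidates[np.argsort(scores)[-cache_size:]]
    # Return one negative drawn uniformly from the (rare, high-quality) cache.
    return rng.choice(cache[key])

neg_tail = sample_negative(h=3, r=5)
```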
“…Networks, also known as graphs, are structures that naturally capture relations between entities in data domains and information systems. Many applications, however, analyse not one but multiple networks, e.g., for data discovery [12,26], social network analytics [33], knowledge graph reconciliation [50], and pattern matching in protein networks [29]. Network alignment, the task of pairing nodes between two isomorphic or near-isomorphic networks such that the paired nodes are similar w.r.t.…”
Section: Introduction
confidence: 99%