2019
DOI: 10.3390/e21111083
Embedding Learning with Triple Trustiness on Noisy Knowledge Graph

Abstract: Embedding learning on knowledge graphs (KGs) aims to encode all entities and relationships into a continuous vector space, which provides an effective and flexible method to implement downstream knowledge-driven artificial intelligence (AI) and natural language processing (NLP) tasks. Since KG construction usually involves automatic mechanisms with less human supervision, it inevitably brings in plenty of noises to KGs. However, most conventional KG embedding approaches inappropriately assume that all facts in…

Cited by 12 publications (4 citation statements); References 46 publications.
“…To tackle this issue, several studies [ 32 , 33 , 34 , 35 ] propose methods that aggregate information from neighborhood nodes for latent messages. Furthermore, some studies [ 23 , 37 ] explore the integration of external information, such as entities’ labels, to enhance KG embedding. However, obtaining label information for entities in MKGs is not always a straightforward process.…”
Section: Methods
confidence: 99%
“…To address these challenges, NKRL [ 20 ] introduces the concept of negative confidence and proposes a negative sampling method for training. In addition, TransT [ 37 ] measures the confidence score via external entity type and description. These models generate confidence scores in representation learning to estimate the triplets, thereby enhancing the robustness of KG embedding with noise.…”
Section: Related Work
confidence: 99%
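The confidence-based idea described above can be sketched as a confidence-weighted margin loss over a TransE-style score. This is only an illustrative sketch of the general technique, not the exact formulation of NKRL or TransT; the function names and the fixed confidence values are assumptions for the example.

```python
import numpy as np

def transe_score(h, r, t):
    """TransE-style plausibility: negative L2 distance ||h + r - t||.
    Higher (closer to 0) means the triple is more plausible."""
    return -np.linalg.norm(h + r - t)

def confidence_weighted_loss(pos, neg, conf, margin=1.0):
    """Margin ranking loss scaled by the positive triple's confidence.

    pos, neg : (h, r, t) embedding triples (positive and corrupted).
    conf     : confidence score in [0, 1]; low-confidence (likely noisy)
               triples contribute less to the training signal.
    """
    pos_s = transe_score(*pos)
    neg_s = transe_score(*neg)
    return conf * max(0.0, margin + neg_s - pos_s)

# Toy usage: the same triple pair, weighted as trusted vs. likely noisy.
rng = np.random.default_rng(0)
h, r, t = (rng.normal(size=4) for _ in range(3))
t_bad = rng.normal(size=4)  # corrupted tail as the negative sample
loss_trusted = confidence_weighted_loss((h, r, t), (h, r, t_bad), conf=1.0)
loss_noisy = confidence_weighted_loss((h, r, t), (h, r, t_bad), conf=0.1)
```

Down-weighting by confidence is what lets such models keep training on the full (noisy) KG while limiting how much a suspect triple can distort the embeddings.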
“…The noise issue can be roughly classified into two classes: false relationships between entities (head entity, relationship, tail entity) and false entity type instances (entity, entity type). Most existing research concentrates on dealing with noisy triple facts in KGs [9, 14–21]. For example, Jiang et al [15] present a Markov logic-based system for cleaning an extracted knowledge base.…”
Section: KG Noise Detection
confidence: 99%
“…Xie et al [20] propose a confidence-aware knowledge representation learning framework that detects possible noises in KGs while simultaneously learning knowledge representations with confidence. Zhao et al [21] propose a trustiness-aware method for KG noise detection. Despite their success, these works focus on detecting noisy triple facts, so their goals differ from this paper's.…”
Section: KG Noise Detection
confidence: 99%