2022
DOI: 10.1155/2022/4734179
Edge-Aware Graph Neural Network for Multi-Hop Path Reasoning over Knowledge Base

Abstract: Multi-hop path reasoning over a knowledge base aims to find answer entities for an input question by walking along a path of triples in graph-structured data; it is a crucial branch of the knowledge base question answering (KBQA) research field. Previous studies rely on deep neural networks to simulate the way humans solve multi-hop questions, but they do not consider the latent relation information contained in connected edges and lack a measure of the correlation between specific relations and the input q…
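
The abstract does not expose the model's internals, but "edge-aware" in this setting usually means message passing that conditions on relation (edge) embeddings rather than on node states alone, with the question used to weight relations. The sketch below is a minimal, hypothetical PyTorch layer in that spirit; the class name, the gating scheme, and all shapes are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (not the paper's implementation): one round of edge-aware
# message passing in plain PyTorch. Each message combines a neighbour's state
# with the embedding of the connecting relation, gated by that relation's
# similarity to the question encoding.
import torch
import torch.nn as nn


class EdgeAwareLayer(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)   # mixes neighbour state and relation embedding
        self.upd = nn.GRUCell(dim, dim)      # updates node state from aggregated messages

    def forward(self, node_h, rel_emb, edges, q_enc):
        # node_h:  (num_nodes, dim) entity states
        # rel_emb: (num_rels, dim)  relation (edge) embeddings
        # edges:   (num_edges, 3)   [head, relation, tail] index triples
        # q_enc:   (dim,)           question encoding
        head, rel, tail = edges[:, 0], edges[:, 1], edges[:, 2]
        r = rel_emb[rel]                                     # (num_edges, dim)
        # Question-relation correlation used as a per-edge gate (assumed scheme).
        gate = torch.sigmoid((r * q_enc).sum(-1, keepdim=True))
        m = gate * torch.tanh(self.msg(torch.cat([node_h[head], r], dim=-1)))
        # Sum incoming messages per tail node, then update node states.
        agg = torch.zeros_like(node_h).index_add_(0, tail, m)
        return self.upd(agg, node_h)


if __name__ == "__main__":
    torch.manual_seed(0)
    dim, layer = 16, EdgeAwareLayer(16)
    node_h = torch.randn(5, dim)
    rel_emb = torch.randn(3, dim)
    edges = torch.tensor([[0, 1, 2], [2, 0, 4], [1, 2, 3]])
    q_enc = torch.randn(dim)
    print(layer(node_h, rel_emb, edges, q_enc).shape)  # torch.Size([5, 16])
```

Stacking several such layers lets information propagate along multi-hop relation paths, which is the general mechanism the abstract alludes to.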

Cited by 2 publications (1 citation statement)
References 46 publications
“…When noisy nodes are not discarded in a timely manner, the performance of the model in predicting answers will be negatively impacted [13,14]. On the other hand, in previous works [15,16], language models and knowledge graph models have existed as independent components, which resulted in a missing relationship between the question and graph entities. The limited interaction between language models and knowledge graph models is a major issue, which causes models to struggle with understanding complex question-knowledge relations [17][18][19].…”
Section: Introduction
Confidence: 99%